00:00:00.000 Started by upstream project "autotest-spdk-master-vs-dpdk-main" build number 3903
00:00:00.000 originally caused by:
00:00:00.000 Started by upstream project "nightly-trigger" build number 3498
00:00:00.000 originally caused by:
00:00:00.000 Started by timer
00:00:00.058 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.059 The recommended git tool is: git
00:00:00.059 using credential 00000000-0000-0000-0000-000000000002
00:00:00.062 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.098 Fetching changes from the remote Git repository
00:00:00.100 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.149 Using shallow fetch with depth 1
00:00:00.149 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.149 > git --version # timeout=10
00:00:00.193 > git --version # 'git version 2.39.2'
00:00:00.193 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.222 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.222 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.319 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.330 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.341 Checking out Revision 53a1a621557260e3fbfd1fd32ee65ff11a804d5b (FETCH_HEAD)
00:00:06.342 > git config core.sparsecheckout # timeout=10
00:00:06.354 > git read-tree -mu HEAD # timeout=10
00:00:06.368 > git checkout -f 53a1a621557260e3fbfd1fd32ee65ff11a804d5b # timeout=5
00:00:06.386 Commit message: "packer: Merge irdmafedora into main fedora image"
00:00:06.386 > git rev-list --no-walk 53a1a621557260e3fbfd1fd32ee65ff11a804d5b # timeout=10
00:00:06.485 [Pipeline] Start of Pipeline
00:00:06.498 [Pipeline] library
00:00:06.499 Loading library shm_lib@master
00:00:06.499 Library shm_lib@master is cached. Copying from home.
00:00:06.515 [Pipeline] node
00:00:06.522 Running on CYP12 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:06.523 [Pipeline] {
00:00:06.532 [Pipeline] catchError
00:00:06.533 [Pipeline] {
00:00:06.543 [Pipeline] wrap
00:00:06.548 [Pipeline] {
00:00:06.554 [Pipeline] stage
00:00:06.555 [Pipeline] { (Prologue)
00:00:06.726 [Pipeline] sh
00:00:07.011 + logger -p user.info -t JENKINS-CI
00:00:07.029 [Pipeline] echo
00:00:07.031 Node: CYP12
00:00:07.038 [Pipeline] sh
00:00:07.342 [Pipeline] setCustomBuildProperty
00:00:07.353 [Pipeline] echo
00:00:07.355 Cleanup processes
00:00:07.360 [Pipeline] sh
00:00:07.650 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.650 2772107 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.665 [Pipeline] sh
00:00:07.954 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.954 ++ grep -v 'sudo pgrep'
00:00:07.954 ++ awk '{print $1}'
00:00:07.954 + sudo kill -9
00:00:07.954 + true
00:00:07.968 [Pipeline] cleanWs
00:00:07.977 [WS-CLEANUP] Deleting project workspace...
00:00:07.977 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.984 [WS-CLEANUP] done
00:00:07.987 [Pipeline] setCustomBuildProperty
00:00:07.996 [Pipeline] sh
00:00:08.282 + sudo git config --global --replace-all safe.directory '*'
00:00:08.370 [Pipeline] httpRequest
00:00:08.760 [Pipeline] echo
00:00:08.762 Sorcerer 10.211.164.101 is alive
00:00:08.771 [Pipeline] retry
00:00:08.772 [Pipeline] {
00:00:08.786 [Pipeline] httpRequest
00:00:08.791 HttpMethod: GET
00:00:08.791 URL: http://10.211.164.101/packages/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz
00:00:08.792 Sending request to url: http://10.211.164.101/packages/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz
00:00:08.795 Response Code: HTTP/1.1 200 OK
00:00:08.795 Success: Status code 200 is in the accepted range: 200,404
00:00:08.796 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz
00:00:09.709 [Pipeline] }
00:00:09.763 [Pipeline] // retry
00:00:09.768 [Pipeline] sh
00:00:10.053 + tar --no-same-owner -xf jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz
00:00:10.069 [Pipeline] httpRequest
00:00:10.425 [Pipeline] echo
00:00:10.426 Sorcerer 10.211.164.101 is alive
00:00:10.434 [Pipeline] retry
00:00:10.436 [Pipeline] {
00:00:10.451 [Pipeline] httpRequest
00:00:10.456 HttpMethod: GET
00:00:10.456 URL: http://10.211.164.101/packages/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz
00:00:10.457 Sending request to url: http://10.211.164.101/packages/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz
00:00:10.487 Response Code: HTTP/1.1 200 OK
00:00:10.487 Success: Status code 200 is in the accepted range: 200,404
00:00:10.488 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz
00:00:51.530 [Pipeline] }
00:00:51.546 [Pipeline] // retry
00:00:51.553 [Pipeline] sh
00:00:51.842 + tar --no-same-owner -xf spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz
00:00:54.402 [Pipeline] sh
00:00:54.695 + git -C spdk log --oneline -n5
00:00:54.695 09cc66129 test/unit: add mixed busy/idle mock poller function in reactor_ut
00:00:54.695 a67b3561a dpdk: update submodule to include alarm_cancel fix
00:00:54.695 43f6d3385 nvmf: remove use of STAILQ for last_wqe events
00:00:54.695 9645421c5 nvmf: rename nvmf_rdma_qpair_process_ibv_event()
00:00:54.695 e6da32ee1 nvmf: rename nvmf_rdma_send_qpair_async_event()
00:00:54.717 [Pipeline] withCredentials
00:00:54.730 > git --version # timeout=10
00:00:54.745 > git --version # 'git version 2.39.2'
00:00:54.769 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:00:54.772 [Pipeline] {
00:00:54.781 [Pipeline] retry
00:00:54.783 [Pipeline] {
00:00:54.802 [Pipeline] sh
00:00:55.095 + git ls-remote http://dpdk.org/git/dpdk main
00:00:55.108 [Pipeline] }
00:00:55.121 [Pipeline] // retry
00:00:55.125 [Pipeline] }
00:00:55.138 [Pipeline] // withCredentials
00:00:55.145 [Pipeline] httpRequest
00:00:55.553 [Pipeline] echo
00:00:55.555 Sorcerer 10.211.164.101 is alive
00:00:55.566 [Pipeline] retry
00:00:55.568 [Pipeline] {
00:00:55.583 [Pipeline] httpRequest
00:00:55.588 HttpMethod: GET
00:00:55.588 URL: http://10.211.164.101/packages/dpdk_bf0ff8df59c7e32f95c0b542cc4a7918f8a3da84.tar.gz
00:00:55.589 Sending request to url: http://10.211.164.101/packages/dpdk_bf0ff8df59c7e32f95c0b542cc4a7918f8a3da84.tar.gz
00:00:55.597 Response Code: HTTP/1.1 200 OK
00:00:55.598 Success: Status code 200 is in the accepted range: 200,404
00:00:55.598 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_bf0ff8df59c7e32f95c0b542cc4a7918f8a3da84.tar.gz
00:01:20.978 [Pipeline] }
00:01:20.996 [Pipeline] // retry
00:01:21.003 [Pipeline] sh
00:01:21.293 + tar --no-same-owner -xf dpdk_bf0ff8df59c7e32f95c0b542cc4a7918f8a3da84.tar.gz
00:01:23.266 [Pipeline] sh
00:01:23.555 + git -C dpdk log --oneline -n5
00:01:23.555 bf0ff8df59 maintainers: fix prog guide paths
00:01:23.555 41dd9a6bc2 doc: reorganize prog guide
00:01:23.555 cb9187bc5c version: 24.11-rc0
00:01:23.555 b3485f4293 version: 24.07.0
00:01:23.555 fa58aec335 doc: add tested platforms with NVIDIA NICs
00:01:23.566 [Pipeline] }
00:01:23.581 [Pipeline] // stage
00:01:23.591 [Pipeline] stage
00:01:23.593 [Pipeline] { (Prepare)
00:01:23.611 [Pipeline] writeFile
00:01:23.627 [Pipeline] sh
00:01:23.917 + logger -p user.info -t JENKINS-CI
00:01:23.931 [Pipeline] sh
00:01:24.220 + logger -p user.info -t JENKINS-CI
00:01:24.234 [Pipeline] sh
00:01:24.524 + cat autorun-spdk.conf
00:01:24.524 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:24.524 SPDK_TEST_NVMF=1
00:01:24.524 SPDK_TEST_NVME_CLI=1
00:01:24.524 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:24.524 SPDK_TEST_NVMF_NICS=e810
00:01:24.524 SPDK_TEST_VFIOUSER=1
00:01:24.524 SPDK_RUN_UBSAN=1
00:01:24.524 NET_TYPE=phy
00:01:24.524 SPDK_TEST_NATIVE_DPDK=main
00:01:24.524 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:24.532 RUN_NIGHTLY=1
00:01:24.537 [Pipeline] readFile
00:01:24.563 [Pipeline] withEnv
00:01:24.565 [Pipeline] {
00:01:24.580 [Pipeline] sh
00:01:24.875 + set -ex
00:01:24.875 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:24.875 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:24.875 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:24.875 ++ SPDK_TEST_NVMF=1
00:01:24.875 ++ SPDK_TEST_NVME_CLI=1
00:01:24.875 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:24.875 ++ SPDK_TEST_NVMF_NICS=e810
00:01:24.875 ++ SPDK_TEST_VFIOUSER=1
00:01:24.875 ++ SPDK_RUN_UBSAN=1
00:01:24.875 ++ NET_TYPE=phy
00:01:24.875 ++ SPDK_TEST_NATIVE_DPDK=main
00:01:24.875 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:24.875 ++ RUN_NIGHTLY=1
00:01:24.875 + case $SPDK_TEST_NVMF_NICS in
00:01:24.875 + DRIVERS=ice
00:01:24.876 + [[ tcp == \r\d\m\a ]]
00:01:24.876 + [[ -n ice ]]
00:01:24.876 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:24.876 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:24.876 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:24.876 rmmod: ERROR: Module irdma is not currently loaded
00:01:24.876 rmmod: ERROR: Module i40iw is not currently loaded
00:01:24.876 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:24.876 + true
00:01:24.876 + for D in $DRIVERS
00:01:24.876 + sudo modprobe ice
00:01:24.876 + exit 0
00:01:24.887 [Pipeline] }
00:01:24.904 [Pipeline] // withEnv
00:01:24.909 [Pipeline] }
00:01:24.924 [Pipeline] // stage
00:01:24.933 [Pipeline] catchError
00:01:24.935 [Pipeline] {
00:01:24.949 [Pipeline] timeout
00:01:24.949 Timeout set to expire in 1 hr 0 min
00:01:24.951 [Pipeline] {
00:01:24.966 [Pipeline] stage
00:01:24.968 [Pipeline] { (Tests)
00:01:24.984 [Pipeline] sh
00:01:25.274 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:25.274 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:25.274 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:25.274 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:25.274 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:25.274 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:25.274 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:25.274 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:25.274 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:25.274 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:25.274 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:25.274 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:25.274 + source /etc/os-release
00:01:25.274 ++ NAME='Fedora Linux'
00:01:25.274 ++ VERSION='39 (Cloud Edition)'
00:01:25.274 ++ ID=fedora
00:01:25.274 ++ VERSION_ID=39
00:01:25.274 ++ VERSION_CODENAME=
00:01:25.274 ++ PLATFORM_ID=platform:f39
00:01:25.274 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:25.274 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:25.274 ++ LOGO=fedora-logo-icon
00:01:25.274 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:25.274 ++ HOME_URL=https://fedoraproject.org/
00:01:25.274 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:25.274 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:25.274 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:25.274 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:25.274 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:25.274 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:25.274 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:25.274 ++ SUPPORT_END=2024-11-12
00:01:25.274 ++ VARIANT='Cloud Edition'
00:01:25.274 ++ VARIANT_ID=cloud
00:01:25.274 + uname -a
00:01:25.274 Linux spdk-cyp-12 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:25.274 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:28.583 Hugepages
00:01:28.583 node hugesize free / total
00:01:28.583 node0 1048576kB 0 / 0
00:01:28.583 node0 2048kB 0 / 0
00:01:28.583 node1 1048576kB 0 / 0
00:01:28.583 node1 2048kB 0 / 0
00:01:28.583
00:01:28.583 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:28.583 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:01:28.583 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:01:28.583 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:01:28.583 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:01:28.583 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:01:28.583 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:01:28.583 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:01:28.583 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:01:28.583 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:01:28.583 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:01:28.583 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:01:28.583 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:01:28.583 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:01:28.583 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:01:28.583 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:01:28.583 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:01:28.583 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:01:28.583 + rm -f /tmp/spdk-ld-path
00:01:28.583 + source autorun-spdk.conf
00:01:28.583 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:28.583 ++ SPDK_TEST_NVMF=1
00:01:28.583 ++ SPDK_TEST_NVME_CLI=1
00:01:28.583 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:28.583 ++ SPDK_TEST_NVMF_NICS=e810
00:01:28.583 ++ SPDK_TEST_VFIOUSER=1
00:01:28.583 ++ SPDK_RUN_UBSAN=1
00:01:28.583 ++ NET_TYPE=phy
00:01:28.583 ++ SPDK_TEST_NATIVE_DPDK=main
00:01:28.583 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:28.583 ++ RUN_NIGHTLY=1
00:01:28.583 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:28.583 + [[ -n '' ]]
00:01:28.583 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:28.583 + for M in /var/spdk/build-*-manifest.txt
00:01:28.583 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:28.583 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:28.583 + for M in /var/spdk/build-*-manifest.txt
00:01:28.583 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:28.583 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:28.583 + for M in /var/spdk/build-*-manifest.txt
00:01:28.583 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:28.583 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:28.583 ++ uname
00:01:28.583 + [[ Linux == \L\i\n\u\x ]]
00:01:28.583 + sudo dmesg -T
00:01:28.583 + sudo dmesg --clear
00:01:28.583 + dmesg_pid=2773711
00:01:28.583 + [[ Fedora Linux == FreeBSD ]]
00:01:28.583 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:28.583 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:28.583 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:28.583 + [[ -x /usr/src/fio-static/fio ]]
00:01:28.583 + export FIO_BIN=/usr/src/fio-static/fio
00:01:28.583 + FIO_BIN=/usr/src/fio-static/fio
00:01:28.584 + sudo dmesg -Tw
00:01:28.584 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:28.584 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:28.584 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:28.584 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:28.584 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:28.584 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:28.584 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:28.584 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:28.584 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:28.584 Test configuration:
00:01:28.846 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:28.846 SPDK_TEST_NVMF=1
00:01:28.846 SPDK_TEST_NVME_CLI=1
00:01:28.846 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:28.846 SPDK_TEST_NVMF_NICS=e810
00:01:28.846 SPDK_TEST_VFIOUSER=1
00:01:28.846 SPDK_RUN_UBSAN=1
00:01:28.846 NET_TYPE=phy
00:01:28.846 SPDK_TEST_NATIVE_DPDK=main
00:01:28.846 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:28.846 RUN_NIGHTLY=1
15:19:08 -- common/autotest_common.sh@1680 -- $ [[ n == y ]]
15:19:08 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
15:19:08 -- scripts/common.sh@15 -- $ shopt -s extglob
15:19:08 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
15:19:08 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
15:19:08 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
15:19:08 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
15:19:08 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
15:19:08 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
15:19:08 -- paths/export.sh@5 -- $ export PATH
15:19:08 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
15:19:08 -- common/autobuild_common.sh@478 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
15:19:08 -- common/autobuild_common.sh@479 -- $ date +%s
15:19:08 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727788748.XXXXXX
15:19:08 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727788748.7tQK00
15:19:08 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]]
15:19:08 -- common/autobuild_common.sh@485 -- $ '[' -n main ']'
15:19:08 -- common/autobuild_common.sh@486 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
15:19:08 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk'
15:19:08 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
15:19:08 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
15:19:08 -- common/autobuild_common.sh@495 -- $ get_config_params
15:19:08 -- common/autotest_common.sh@407 -- $ xtrace_disable
15:19:08 -- common/autotest_common.sh@10 -- $ set +x
15:19:08 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build'
15:19:08 -- common/autobuild_common.sh@497 -- $ start_monitor_resources
15:19:08 -- pm/common@17 -- $ local monitor
15:19:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
15:19:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
15:19:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
15:19:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
15:19:08 -- pm/common@21 -- $ date +%s
15:19:08 -- pm/common@21 -- $ date +%s
15:19:08 -- pm/common@25 -- $ sleep 1
15:19:08 -- pm/common@21 -- $ date +%s
15:19:08 -- pm/common@21 -- $ date +%s
15:19:08 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1727788748
15:19:08 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1727788748
15:19:08 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1727788748
15:19:08 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1727788748
00:01:28.846 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1727788748_collect-vmstat.pm.log
00:01:28.846 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1727788748_collect-cpu-load.pm.log
00:01:28.846 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1727788748_collect-cpu-temp.pm.log
00:01:28.846 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1727788748_collect-bmc-pm.bmc.pm.log
00:01:29.791 15:19:09 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT
00:01:29.791 15:19:09 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:29.791 15:19:09 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:29.791 15:19:09 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:29.791 15:19:09 -- spdk/autobuild.sh@16 -- $ date -u
00:01:29.791 Tue Oct 1 01:19:09 PM UTC 2024
00:01:29.791 15:19:09 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:29.791 v25.01-pre-17-g09cc66129
00:01:29.791 15:19:09 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:29.791 15:19:09 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:29.791 15:19:09 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:29.791 15:19:09 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:29.791 15:19:09 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:29.791 15:19:09 -- common/autotest_common.sh@10 -- $ set +x
00:01:29.791 ************************************
00:01:29.791 START TEST ubsan
00:01:29.791 ************************************
00:01:29.791 15:19:09 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:01:29.791 using ubsan
00:01:29.791
00:01:29.791 real 0m0.001s
00:01:29.791 user 0m0.000s
00:01:29.791 sys 0m0.000s
00:01:29.791 15:19:09 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:01:29.791 15:19:09 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:29.791 ************************************
00:01:29.791 END TEST ubsan
00:01:29.791 ************************************
00:01:30.053 15:19:09 -- spdk/autobuild.sh@27 -- $ '[' -n main ']'
00:01:30.053 15:19:09 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:01:30.053 15:19:09 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk
00:01:30.053 15:19:09 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']'
00:01:30.053 15:19:09 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:30.053 15:19:09 -- common/autotest_common.sh@10 -- $ set +x
00:01:30.053 ************************************
00:01:30.053 START TEST build_native_dpdk
00:01:30.053 ************************************
00:01:30.053 15:19:09 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk
00:01:30.053 15:19:09 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:01:30.053 15:19:09 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:01:30.053 15:19:09 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
00:01:30.053 15:19:09 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
00:01:30.053 15:19:09 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:01:30.053 15:19:09 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:01:30.053 15:19:09 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:01:30.053 15:19:09 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc
00:01:30.053 15:19:09 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc
00:01:30.053 15:19:09 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:01:30.053 15:19:09 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
00:01:30.053 15:19:09 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
00:01:30.053 15:19:09 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13
00:01:30.053 15:19:09 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13
00:01:30.053 15:19:09 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:30.053 15:19:09 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:30.053 15:19:09 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:01:30.053 15:19:09 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]]
00:01:30.053 15:19:09 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:30.053 15:19:09 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5
00:01:30.053 bf0ff8df59 maintainers: fix prog guide paths
00:01:30.053 41dd9a6bc2 doc: reorganize prog guide
00:01:30.053 cb9187bc5c version: 24.11-rc0
00:01:30.053 b3485f4293 version: 24.07.0
00:01:30.053 fa58aec335 doc: add tested platforms with NVIDIA NICs
00:01:30.053 15:19:09 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
00:01:30.053 15:19:09 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
00:01:30.053 15:19:09 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=24.11.0-rc0
00:01:30.053 15:19:09 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
00:01:30.053 15:19:09 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
00:01:30.053 15:19:09 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
00:01:30.053 15:19:09 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
00:01:30.053 15:19:09 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
00:01:30.053 15:19:09 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
00:01:30.053 15:19:09 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")
00:01:30.053 15:19:09 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n
00:01:30.053 15:19:09 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:01:30.053 15:19:09 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:01:30.053 15:19:09 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]]
00:01:30.053 15:19:09 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:01:30.053 15:19:09 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s
00:01:30.053 15:19:09 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']'
00:01:30.053 15:19:09 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 24.11.0-rc0 21.11.0
00:01:30.053 15:19:09 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 24.11.0-rc0 '<' 21.11.0
00:01:30.053 15:19:09 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:01:30.053 15:19:09 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:01:30.053 15:19:09 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:01:30.053 15:19:09 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:01:30.053 15:19:09 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:01:30.053 15:19:09 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:01:30.053 15:19:09 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
00:01:30.053 15:19:09 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4
00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24
00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24
00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21
00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21
00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]]
00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21
00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21
00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@367 -- $ return 1
00:01:30.054 15:19:09 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1
00:01:30.054 patching file config/rte_config.h
00:01:30.054 Hunk #1 succeeded at 70 (offset 11 lines).
00:01:30.054 15:19:09 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 24.11.0-rc0 24.07.0 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 24.11.0-rc0 '<' 24.07.0 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@364 -- $ (( v++ )) 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 11 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@353 -- $ local d=11 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 11 =~ ^[0-9]+$ ]] 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@355 -- $ echo 11 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=11 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 07 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@353 -- $ local d=07 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 07 =~ ^[0-9]+$ ]] 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@355 -- $ echo 7 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=7 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:01:30.054 15:19:09 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 24.11.0-rc0 24.07.0 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 24.11.0-rc0 '>=' 24.07.0 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:30.054 15:19:09 
build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@364 -- $ (( v++ )) 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 11 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@353 -- $ local d=11 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 11 =~ ^[0-9]+$ ]] 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@355 -- $ echo 11 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=11 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 07 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@353 -- $ local d=07 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 07 =~ ^[0-9]+$ ]] 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@355 -- $ echo 7 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=7 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:30.054 15:19:09 build_native_dpdk -- scripts/common.sh@367 -- $ return 0 00:01:30.054 15:19:09 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1 00:01:30.054 patching file drivers/bus/pci/linux/pci_uio.c 00:01:30.054 15:19:09 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:01:30.054 15:19:09 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:01:30.054 15:19:09 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:01:30.054 15:19:09 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:30.054 15:19:09 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 
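The `cmp_versions` traces above split both version strings into fields with `IFS=.-:` (so `24.11.0-rc0` yields 4 fields and `21.11.0` yields 3) and compare the fields pairwise as integers, returning as soon as one side wins. A self-contained sketch of that strategy, reduced to "<" semantics; this is an assumption reconstructed from the trace, and the real `scripts/common.sh` additionally supports other operators and its own handling of non-numeric fields such as `rc0`:

```shell
# Sketch: field-wise version comparison as traced above (assumed
# simplification of scripts/common.sh's cmp_versions).
# Returns 0 (true) when $1 is strictly older than $2.
version_lt() {
    local IFS='.-:'            # split on dot, dash, and colon, as in the trace
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v a b n=${#ver1[@]}
    (( ${#ver2[@]} > n )) && n=${#ver2[@]}
    for (( v = 0; v < n; v++ )); do
        a=${ver1[v]:-0}; b=${ver2[v]:-0}
        # In this sketch, missing or non-numeric fields ("rc0") count as 0.
        [[ $a =~ ^[0-9]+$ ]] || a=0
        [[ $b =~ ^[0-9]+$ ]] || b=0
        (( 10#$a > 10#$b )) && return 1
        (( 10#$a < 10#$b )) && return 0
    done
    return 1                   # equal versions are not less-than
}
```

Run against the comparisons in the log, the sketch agrees with the traced results: `version_lt 24.11.0-rc0 21.11.0` fails on the first field (24 > 21, the `return 1` at line 367 of the trace), `version_lt 24.11.0-rc0 24.07.0` fails on the second (11 > 7), and therefore the `ge 24.11.0-rc0 24.07.0` check succeeds, which is why the `pci_uio.c` patch is applied.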
00:01:35.346 The Meson build system 00:01:35.346 Version: 1.5.0 00:01:35.346 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:35.346 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:35.346 Build type: native build 00:01:35.346 Program cat found: YES (/usr/bin/cat) 00:01:35.346 Project name: DPDK 00:01:35.346 Project version: 24.11.0-rc0 00:01:35.346 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:35.346 C linker for the host machine: gcc ld.bfd 2.40-14 00:01:35.346 Host machine cpu family: x86_64 00:01:35.346 Host machine cpu: x86_64 00:01:35.346 Message: ## Building in Developer Mode ## 00:01:35.346 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:35.346 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:35.346 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:35.346 Program python3 (elftools) found: YES (/usr/bin/python3) modules: elftools 00:01:35.346 Program cat found: YES (/usr/bin/cat) 00:01:35.346 config/meson.build:120: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:01:35.346 Compiler for C supports arguments -march=native: YES 00:01:35.346 Checking for size of "void *" : 8 00:01:35.346 Checking for size of "void *" : 8 (cached) 00:01:35.346 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:01:35.346 Library m found: YES 00:01:35.346 Library numa found: YES 00:01:35.346 Has header "numaif.h" : YES 00:01:35.346 Library fdt found: NO 00:01:35.346 Library execinfo found: NO 00:01:35.346 Has header "execinfo.h" : YES 00:01:35.346 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:35.346 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:35.346 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:35.346 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:35.346 Run-time dependency openssl found: YES 3.1.1 00:01:35.346 Run-time dependency libpcap found: YES 1.10.4 00:01:35.346 Has header "pcap.h" with dependency libpcap: YES 00:01:35.346 Compiler for C supports arguments -Wcast-qual: YES 00:01:35.346 Compiler for C supports arguments -Wdeprecated: YES 00:01:35.346 Compiler for C supports arguments -Wformat: YES 00:01:35.346 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:35.346 Compiler for C supports arguments -Wformat-security: NO 00:01:35.346 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:35.346 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:35.346 Compiler for C supports arguments -Wnested-externs: YES 00:01:35.346 Compiler for C supports arguments -Wold-style-definition: YES 00:01:35.346 Compiler for C supports arguments -Wpointer-arith: YES 00:01:35.346 Compiler for C supports arguments -Wsign-compare: YES 00:01:35.346 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:35.346 Compiler for C supports arguments -Wundef: YES 00:01:35.346 Compiler for C supports arguments -Wwrite-strings: YES 00:01:35.346 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:35.346 Compiler for C 
supports arguments -Wno-packed-not-aligned: YES 00:01:35.346 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:35.346 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:35.346 Program objdump found: YES (/usr/bin/objdump) 00:01:35.346 Compiler for C supports arguments -mavx512f: YES 00:01:35.346 Checking if "AVX512 checking" compiles: YES 00:01:35.346 Fetching value of define "__SSE4_2__" : 1 00:01:35.346 Fetching value of define "__AES__" : 1 00:01:35.346 Fetching value of define "__AVX__" : 1 00:01:35.346 Fetching value of define "__AVX2__" : 1 00:01:35.346 Fetching value of define "__AVX512BW__" : 1 00:01:35.346 Fetching value of define "__AVX512CD__" : 1 00:01:35.346 Fetching value of define "__AVX512DQ__" : 1 00:01:35.346 Fetching value of define "__AVX512F__" : 1 00:01:35.346 Fetching value of define "__AVX512VL__" : 1 00:01:35.346 Fetching value of define "__PCLMUL__" : 1 00:01:35.346 Fetching value of define "__RDRND__" : 1 00:01:35.346 Fetching value of define "__RDSEED__" : 1 00:01:35.346 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:35.346 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:35.346 Message: lib/log: Defining dependency "log" 00:01:35.346 Message: lib/kvargs: Defining dependency "kvargs" 00:01:35.346 Message: lib/argparse: Defining dependency "argparse" 00:01:35.346 Message: lib/telemetry: Defining dependency "telemetry" 00:01:35.346 Checking for function "getentropy" : NO 00:01:35.346 Message: lib/eal: Defining dependency "eal" 00:01:35.346 Message: lib/ptr_compress: Defining dependency "ptr_compress" 00:01:35.346 Message: lib/ring: Defining dependency "ring" 00:01:35.346 Message: lib/rcu: Defining dependency "rcu" 00:01:35.346 Message: lib/mempool: Defining dependency "mempool" 00:01:35.346 Message: lib/mbuf: Defining dependency "mbuf" 00:01:35.346 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:35.346 Fetching value of define "__AVX512F__" : 1 (cached) 
00:01:35.346 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:35.346 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:35.346 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:35.346 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:35.346 Compiler for C supports arguments -mpclmul: YES 00:01:35.346 Compiler for C supports arguments -maes: YES 00:01:35.346 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:35.346 Compiler for C supports arguments -mavx512bw: YES 00:01:35.346 Compiler for C supports arguments -mavx512dq: YES 00:01:35.346 Compiler for C supports arguments -mavx512vl: YES 00:01:35.346 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:35.346 Compiler for C supports arguments -mavx2: YES 00:01:35.346 Compiler for C supports arguments -mavx: YES 00:01:35.347 Message: lib/net: Defining dependency "net" 00:01:35.347 Message: lib/meter: Defining dependency "meter" 00:01:35.347 Message: lib/ethdev: Defining dependency "ethdev" 00:01:35.347 Message: lib/pci: Defining dependency "pci" 00:01:35.347 Message: lib/cmdline: Defining dependency "cmdline" 00:01:35.347 Message: lib/metrics: Defining dependency "metrics" 00:01:35.347 Message: lib/hash: Defining dependency "hash" 00:01:35.347 Message: lib/timer: Defining dependency "timer" 00:01:35.347 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:35.347 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:35.347 Fetching value of define "__AVX512CD__" : 1 (cached) 00:01:35.347 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:35.347 Message: lib/acl: Defining dependency "acl" 00:01:35.347 Message: lib/bbdev: Defining dependency "bbdev" 00:01:35.347 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:35.347 Run-time dependency libelf found: YES 0.191 00:01:35.347 Message: lib/bpf: Defining dependency "bpf" 00:01:35.347 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:35.347 Message: lib/compressdev: Defining 
dependency "compressdev" 00:01:35.347 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:35.347 Message: lib/distributor: Defining dependency "distributor" 00:01:35.347 Message: lib/dmadev: Defining dependency "dmadev" 00:01:35.347 Message: lib/efd: Defining dependency "efd" 00:01:35.347 Message: lib/eventdev: Defining dependency "eventdev" 00:01:35.347 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:35.347 Message: lib/gpudev: Defining dependency "gpudev" 00:01:35.347 Message: lib/gro: Defining dependency "gro" 00:01:35.347 Message: lib/gso: Defining dependency "gso" 00:01:35.347 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:35.347 Message: lib/jobstats: Defining dependency "jobstats" 00:01:35.347 Message: lib/latencystats: Defining dependency "latencystats" 00:01:35.347 Message: lib/lpm: Defining dependency "lpm" 00:01:35.347 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:35.347 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:35.347 Fetching value of define "__AVX512IFMA__" : 1 00:01:35.347 Message: lib/member: Defining dependency "member" 00:01:35.347 Message: lib/pcapng: Defining dependency "pcapng" 00:01:35.347 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:35.347 Message: lib/power: Defining dependency "power" 00:01:35.347 Message: lib/rawdev: Defining dependency "rawdev" 00:01:35.347 Message: lib/regexdev: Defining dependency "regexdev" 00:01:35.347 Message: lib/mldev: Defining dependency "mldev" 00:01:35.347 Message: lib/rib: Defining dependency "rib" 00:01:35.347 Message: lib/reorder: Defining dependency "reorder" 00:01:35.347 Message: lib/sched: Defining dependency "sched" 00:01:35.347 Message: lib/security: Defining dependency "security" 00:01:35.347 Message: lib/stack: Defining dependency "stack" 00:01:35.347 Has header "linux/userfaultfd.h" : YES 00:01:35.347 Has header "linux/vduse.h" : YES 00:01:35.347 Message: lib/vhost: Defining dependency "vhost" 00:01:35.347 Message: 
lib/ipsec: Defining dependency "ipsec" 00:01:35.347 Message: lib/pdcp: Defining dependency "pdcp" 00:01:35.347 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:35.347 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:35.347 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:35.347 Message: lib/fib: Defining dependency "fib" 00:01:35.347 Message: lib/port: Defining dependency "port" 00:01:35.347 Message: lib/pdump: Defining dependency "pdump" 00:01:35.347 Message: lib/table: Defining dependency "table" 00:01:35.347 Message: lib/pipeline: Defining dependency "pipeline" 00:01:35.347 Message: lib/graph: Defining dependency "graph" 00:01:35.347 Message: lib/node: Defining dependency "node" 00:01:35.347 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:35.347 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:35.347 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:37.278 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:37.278 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:37.278 Compiler for C supports arguments -Wno-unused-value: YES 00:01:37.278 Compiler for C supports arguments -Wno-format: YES 00:01:37.278 Compiler for C supports arguments -Wno-format-security: YES 00:01:37.278 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:37.278 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:37.278 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:37.278 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:37.278 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:37.278 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:37.278 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:37.278 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:37.278 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:37.278 Message: drivers/net/i40e: Defining 
dependency "net_i40e" 00:01:37.278 Has header "sys/epoll.h" : YES 00:01:37.278 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:37.278 Configuring doxy-api-html.conf using configuration 00:01:37.278 Configuring doxy-api-man.conf using configuration 00:01:37.278 Program mandb found: YES (/usr/bin/mandb) 00:01:37.278 Program sphinx-build found: NO 00:01:37.278 Configuring rte_build_config.h using configuration 00:01:37.278 Message: 00:01:37.278 ================= 00:01:37.278 Applications Enabled 00:01:37.278 ================= 00:01:37.278 00:01:37.278 apps: 00:01:37.278 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:01:37.278 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:01:37.278 test-pmd, test-regex, test-sad, test-security-perf, 00:01:37.278 00:01:37.278 Message: 00:01:37.278 ================= 00:01:37.278 Libraries Enabled 00:01:37.278 ================= 00:01:37.278 00:01:37.278 libs: 00:01:37.278 log, kvargs, argparse, telemetry, eal, ptr_compress, ring, rcu, 00:01:37.278 mempool, mbuf, net, meter, ethdev, pci, cmdline, metrics, 00:01:37.278 hash, timer, acl, bbdev, bitratestats, bpf, cfgfile, compressdev, 00:01:37.278 cryptodev, distributor, dmadev, efd, eventdev, dispatcher, gpudev, gro, 00:01:37.278 gso, ip_frag, jobstats, latencystats, lpm, member, pcapng, power, 00:01:37.278 rawdev, regexdev, mldev, rib, reorder, sched, security, stack, 00:01:37.278 vhost, ipsec, pdcp, fib, port, pdump, table, pipeline, 00:01:37.278 graph, node, 00:01:37.278 00:01:37.278 Message: 00:01:37.278 =============== 00:01:37.278 Drivers Enabled 00:01:37.278 =============== 00:01:37.278 00:01:37.278 common: 00:01:37.278 00:01:37.278 bus: 00:01:37.278 pci, vdev, 00:01:37.278 mempool: 00:01:37.278 ring, 00:01:37.278 dma: 00:01:37.278 00:01:37.278 net: 00:01:37.278 i40e, 00:01:37.278 raw: 00:01:37.278 00:01:37.278 crypto: 00:01:37.278 00:01:37.278 compress: 
00:01:37.278 00:01:37.278 regex: 00:01:37.278 00:01:37.278 ml: 00:01:37.278 00:01:37.278 vdpa: 00:01:37.278 00:01:37.278 event: 00:01:37.278 00:01:37.278 baseband: 00:01:37.278 00:01:37.278 gpu: 00:01:37.278 00:01:37.278 00:01:37.278 Message: 00:01:37.278 ================= 00:01:37.278 Content Skipped 00:01:37.278 ================= 00:01:37.278 00:01:37.278 apps: 00:01:37.278 00:01:37.278 libs: 00:01:37.278 00:01:37.279 drivers: 00:01:37.279 common/cpt: not in enabled drivers build config 00:01:37.279 common/dpaax: not in enabled drivers build config 00:01:37.279 common/iavf: not in enabled drivers build config 00:01:37.279 common/idpf: not in enabled drivers build config 00:01:37.279 common/ionic: not in enabled drivers build config 00:01:37.279 common/mvep: not in enabled drivers build config 00:01:37.279 common/octeontx: not in enabled drivers build config 00:01:37.279 bus/auxiliary: not in enabled drivers build config 00:01:37.279 bus/cdx: not in enabled drivers build config 00:01:37.279 bus/dpaa: not in enabled drivers build config 00:01:37.279 bus/fslmc: not in enabled drivers build config 00:01:37.279 bus/ifpga: not in enabled drivers build config 00:01:37.279 bus/platform: not in enabled drivers build config 00:01:37.279 bus/uacce: not in enabled drivers build config 00:01:37.279 bus/vmbus: not in enabled drivers build config 00:01:37.279 common/cnxk: not in enabled drivers build config 00:01:37.279 common/mlx5: not in enabled drivers build config 00:01:37.279 common/nfp: not in enabled drivers build config 00:01:37.279 common/nitrox: not in enabled drivers build config 00:01:37.279 common/qat: not in enabled drivers build config 00:01:37.279 common/sfc_efx: not in enabled drivers build config 00:01:37.279 mempool/bucket: not in enabled drivers build config 00:01:37.279 mempool/cnxk: not in enabled drivers build config 00:01:37.279 mempool/dpaa: not in enabled drivers build config 00:01:37.279 mempool/dpaa2: not in enabled drivers build config 00:01:37.279 
mempool/octeontx: not in enabled drivers build config 00:01:37.279 mempool/stack: not in enabled drivers build config 00:01:37.279 dma/cnxk: not in enabled drivers build config 00:01:37.279 dma/dpaa: not in enabled drivers build config 00:01:37.279 dma/dpaa2: not in enabled drivers build config 00:01:37.279 dma/hisilicon: not in enabled drivers build config 00:01:37.279 dma/idxd: not in enabled drivers build config 00:01:37.279 dma/ioat: not in enabled drivers build config 00:01:37.279 dma/odm: not in enabled drivers build config 00:01:37.279 dma/skeleton: not in enabled drivers build config 00:01:37.279 net/af_packet: not in enabled drivers build config 00:01:37.279 net/af_xdp: not in enabled drivers build config 00:01:37.279 net/ark: not in enabled drivers build config 00:01:37.279 net/atlantic: not in enabled drivers build config 00:01:37.279 net/avp: not in enabled drivers build config 00:01:37.279 net/axgbe: not in enabled drivers build config 00:01:37.279 net/bnx2x: not in enabled drivers build config 00:01:37.279 net/bnxt: not in enabled drivers build config 00:01:37.279 net/bonding: not in enabled drivers build config 00:01:37.279 net/cnxk: not in enabled drivers build config 00:01:37.279 net/cpfl: not in enabled drivers build config 00:01:37.279 net/cxgbe: not in enabled drivers build config 00:01:37.279 net/dpaa: not in enabled drivers build config 00:01:37.279 net/dpaa2: not in enabled drivers build config 00:01:37.279 net/e1000: not in enabled drivers build config 00:01:37.279 net/ena: not in enabled drivers build config 00:01:37.279 net/enetc: not in enabled drivers build config 00:01:37.279 net/enetfec: not in enabled drivers build config 00:01:37.279 net/enic: not in enabled drivers build config 00:01:37.279 net/failsafe: not in enabled drivers build config 00:01:37.279 net/fm10k: not in enabled drivers build config 00:01:37.279 net/gve: not in enabled drivers build config 00:01:37.279 net/hinic: not in enabled drivers build config 00:01:37.279 
net/hns3: not in enabled drivers build config 00:01:37.279 net/iavf: not in enabled drivers build config 00:01:37.279 net/ice: not in enabled drivers build config 00:01:37.279 net/idpf: not in enabled drivers build config 00:01:37.279 net/igc: not in enabled drivers build config 00:01:37.279 net/ionic: not in enabled drivers build config 00:01:37.279 net/ipn3ke: not in enabled drivers build config 00:01:37.279 net/ixgbe: not in enabled drivers build config 00:01:37.279 net/mana: not in enabled drivers build config 00:01:37.279 net/memif: not in enabled drivers build config 00:01:37.279 net/mlx4: not in enabled drivers build config 00:01:37.279 net/mlx5: not in enabled drivers build config 00:01:37.279 net/mvneta: not in enabled drivers build config 00:01:37.279 net/mvpp2: not in enabled drivers build config 00:01:37.279 net/netvsc: not in enabled drivers build config 00:01:37.279 net/nfb: not in enabled drivers build config 00:01:37.279 net/nfp: not in enabled drivers build config 00:01:37.279 net/ngbe: not in enabled drivers build config 00:01:37.279 net/ntnic: not in enabled drivers build config 00:01:37.279 net/null: not in enabled drivers build config 00:01:37.279 net/octeontx: not in enabled drivers build config 00:01:37.279 net/octeon_ep: not in enabled drivers build config 00:01:37.279 net/pcap: not in enabled drivers build config 00:01:37.279 net/pfe: not in enabled drivers build config 00:01:37.279 net/qede: not in enabled drivers build config 00:01:37.279 net/ring: not in enabled drivers build config 00:01:37.279 net/sfc: not in enabled drivers build config 00:01:37.279 net/softnic: not in enabled drivers build config 00:01:37.279 net/tap: not in enabled drivers build config 00:01:37.279 net/thunderx: not in enabled drivers build config 00:01:37.279 net/txgbe: not in enabled drivers build config 00:01:37.279 net/vdev_netvsc: not in enabled drivers build config 00:01:37.279 net/vhost: not in enabled drivers build config 00:01:37.279 net/virtio: not in 
enabled drivers build config 00:01:37.279 net/vmxnet3: not in enabled drivers build config 00:01:37.279 raw/cnxk_bphy: not in enabled drivers build config 00:01:37.279 raw/cnxk_gpio: not in enabled drivers build config 00:01:37.279 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:37.279 raw/ifpga: not in enabled drivers build config 00:01:37.279 raw/ntb: not in enabled drivers build config 00:01:37.279 raw/skeleton: not in enabled drivers build config 00:01:37.279 crypto/armv8: not in enabled drivers build config 00:01:37.279 crypto/bcmfs: not in enabled drivers build config 00:01:37.279 crypto/caam_jr: not in enabled drivers build config 00:01:37.279 crypto/ccp: not in enabled drivers build config 00:01:37.279 crypto/cnxk: not in enabled drivers build config 00:01:37.279 crypto/dpaa_sec: not in enabled drivers build config 00:01:37.279 crypto/dpaa2_sec: not in enabled drivers build config 00:01:37.279 crypto/ionic: not in enabled drivers build config 00:01:37.279 crypto/ipsec_mb: not in enabled drivers build config 00:01:37.279 crypto/mlx5: not in enabled drivers build config 00:01:37.279 crypto/mvsam: not in enabled drivers build config 00:01:37.279 crypto/nitrox: not in enabled drivers build config 00:01:37.279 crypto/null: not in enabled drivers build config 00:01:37.279 crypto/octeontx: not in enabled drivers build config 00:01:37.279 crypto/openssl: not in enabled drivers build config 00:01:37.279 crypto/scheduler: not in enabled drivers build config 00:01:37.279 crypto/uadk: not in enabled drivers build config 00:01:37.279 crypto/virtio: not in enabled drivers build config 00:01:37.279 compress/isal: not in enabled drivers build config 00:01:37.279 compress/mlx5: not in enabled drivers build config 00:01:37.279 compress/nitrox: not in enabled drivers build config 00:01:37.279 compress/octeontx: not in enabled drivers build config 00:01:37.279 compress/uadk: not in enabled drivers build config 00:01:37.279 compress/zlib: not in enabled drivers build 
config 00:01:37.279 regex/mlx5: not in enabled drivers build config 00:01:37.279 regex/cn9k: not in enabled drivers build config 00:01:37.279 ml/cnxk: not in enabled drivers build config 00:01:37.279 vdpa/ifc: not in enabled drivers build config 00:01:37.279 vdpa/mlx5: not in enabled drivers build config 00:01:37.279 vdpa/nfp: not in enabled drivers build config 00:01:37.279 vdpa/sfc: not in enabled drivers build config 00:01:37.279 event/cnxk: not in enabled drivers build config 00:01:37.279 event/dlb2: not in enabled drivers build config 00:01:37.279 event/dpaa: not in enabled drivers build config 00:01:37.279 event/dpaa2: not in enabled drivers build config 00:01:37.279 event/dsw: not in enabled drivers build config 00:01:37.279 event/opdl: not in enabled drivers build config 00:01:37.279 event/skeleton: not in enabled drivers build config 00:01:37.279 event/sw: not in enabled drivers build config 00:01:37.279 event/octeontx: not in enabled drivers build config 00:01:37.279 baseband/acc: not in enabled drivers build config 00:01:37.279 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:37.279 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:37.279 baseband/la12xx: not in enabled drivers build config 00:01:37.279 baseband/null: not in enabled drivers build config 00:01:37.279 baseband/turbo_sw: not in enabled drivers build config 00:01:37.279 gpu/cuda: not in enabled drivers build config 00:01:37.279 00:01:37.279 00:01:37.279 Build targets in project: 219 00:01:37.279 00:01:37.279 DPDK 24.11.0-rc0 00:01:37.279 00:01:37.279 User defined options 00:01:37.279 libdir : lib 00:01:37.279 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:37.279 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:37.279 c_link_args : 00:01:37.279 enable_docs : false 00:01:37.279 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:37.279 enable_kmods : false 00:01:37.279 machine : native 
00:01:37.279 tests : false 00:01:37.279 00:01:37.279 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:37.279 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:01:37.279 15:19:16 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j144 00:01:37.279 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:37.576 [1/718] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:37.576 [2/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:37.576 [3/718] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:37.576 [4/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:37.576 [5/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:37.576 [6/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:37.576 [7/718] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:37.576 [8/718] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:37.864 [9/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:37.864 [10/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:37.864 [11/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:37.864 [12/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:37.864 [13/718] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:37.864 [14/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:37.864 [15/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:38.136 [16/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:38.136 [17/718] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:38.136 [18/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:38.136 [19/718] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:38.136 [20/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:38.136 [21/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:38.136 [22/718] Linking static target lib/librte_kvargs.a 00:01:38.136 [23/718] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:38.403 [24/718] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:38.403 [25/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:38.403 [26/718] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:38.403 [27/718] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:38.403 [28/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:38.403 [29/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:38.403 [30/718] Compiling C object lib/librte_eal.a.p/eal_x86_rte_mmu.c.o 00:01:38.403 [31/718] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:38.403 [32/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:38.403 [33/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:38.403 [34/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:38.403 [35/718] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:38.403 [36/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:38.403 [37/718] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:38.403 [38/718] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:38.403 [39/718] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:38.403 [40/718] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:38.403 [41/718] Linking static target lib/librte_pci.a 00:01:38.403 [42/718] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:38.403 [43/718] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:38.403 [44/718] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:38.403 [45/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:38.403 [46/718] Linking static target lib/librte_log.a 00:01:38.403 [47/718] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:38.403 [48/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:38.403 [49/718] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:38.403 [50/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:38.666 [51/718] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:38.666 [52/718] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:38.666 [53/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:38.666 [54/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:38.666 [55/718] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:38.666 [56/718] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:38.666 [57/718] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:38.666 [58/718] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:38.666 [59/718] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:38.666 [60/718] Compiling C object lib/librte_argparse.a.p/argparse_rte_argparse.c.o 00:01:38.666 [61/718] Linking static target lib/librte_ring.a 00:01:38.666 [62/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:38.666 [63/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:38.666 
[64/718] Linking static target lib/librte_argparse.a 00:01:38.666 [65/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:38.666 [66/718] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:38.666 [67/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:38.666 [68/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:38.666 [69/718] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:38.666 [70/718] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:38.928 [71/718] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:38.928 [72/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:38.928 [73/718] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:38.928 [74/718] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:38.928 [75/718] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:38.928 [76/718] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:38.928 [77/718] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:38.928 [78/718] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:38.928 [79/718] Linking static target lib/librte_bitratestats.a 00:01:38.928 [80/718] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:38.928 [81/718] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:38.928 [82/718] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:38.928 [83/718] Linking static target lib/librte_cfgfile.a 00:01:38.928 [84/718] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:38.928 [85/718] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:38.928 [86/718] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:38.928 [87/718] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:38.928 [88/718] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.928 [89/718] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:38.928 [90/718] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:38.928 [91/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:38.928 [92/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:38.928 [93/718] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:38.928 [94/718] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:38.928 [95/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:38.928 [96/718] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:38.928 [97/718] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.928 [98/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:38.928 [99/718] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:38.928 [100/718] Linking static target lib/librte_net.a 00:01:39.192 [101/718] Linking static target lib/librte_compressdev.a 00:01:39.192 [102/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:39.192 [103/718] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:39.192 [104/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:39.192 [105/718] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:39.192 [106/718] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:39.192 [107/718] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:39.192 [108/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:39.192 [109/718] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:39.192 [110/718] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:39.192 [111/718] Generating lib/argparse.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.192 [112/718] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:39.192 [113/718] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:39.192 [114/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:39.192 [115/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:39.192 [116/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:39.192 [117/718] Linking static target lib/librte_meter.a 00:01:39.192 [118/718] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:39.192 [119/718] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:39.192 [120/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:39.192 [121/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:39.192 [122/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:39.192 [123/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:39.192 [124/718] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:39.192 [125/718] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:39.192 [126/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:39.192 [127/718] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:39.192 [128/718] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.192 [129/718] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:39.192 [130/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:39.192 [131/718] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:39.192 [132/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:39.192 [133/718] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:39.192 [134/718] Linking static target lib/librte_cmdline.a 00:01:39.192 [135/718] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:39.192 [136/718] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.192 [137/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:39.192 [138/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:39.192 [139/718] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:39.454 [140/718] Linking static target lib/librte_metrics.a 00:01:39.454 [141/718] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:39.454 [142/718] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:39.454 [143/718] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:39.454 [144/718] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:39.454 [145/718] Linking static target lib/librte_dmadev.a 00:01:39.454 [146/718] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:39.454 [147/718] Compiling C object lib/librte_port.a.p/port_port_log.c.o 00:01:39.454 [148/718] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:39.454 [149/718] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:39.454 [150/718] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:39.454 [151/718] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:39.454 [152/718] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:39.454 [153/718] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 
00:01:39.454 [154/718] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:39.454 [155/718] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:39.454 [156/718] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:39.454 [157/718] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:39.454 [158/718] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:39.454 [159/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:39.454 [160/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:39.454 [161/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:39.454 [162/718] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:39.454 [163/718] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:01:39.455 [164/718] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:39.455 [165/718] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.455 [166/718] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:01:39.455 [167/718] Linking static target lib/librte_jobstats.a 00:01:39.455 [168/718] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:01:39.455 [169/718] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.455 [170/718] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.455 [171/718] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:39.455 [172/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:39.455 [173/718] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:39.455 [174/718] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:39.455 [175/718] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:39.455 [176/718] Compiling C 
object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:39.455 [177/718] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:39.455 [178/718] Linking static target lib/librte_timer.a 00:01:39.455 [179/718] Linking target lib/librte_log.so.25.0 00:01:39.455 [180/718] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:39.455 [181/718] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:39.455 [182/718] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:39.455 [183/718] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:39.455 [184/718] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:01:39.455 [185/718] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.455 [186/718] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:39.455 [187/718] Linking static target lib/librte_latencystats.a 00:01:39.455 [188/718] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:39.723 [189/718] Linking static target lib/librte_distributor.a 00:01:39.723 [190/718] Linking static target lib/librte_dispatcher.a 00:01:39.723 [191/718] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:39.723 [192/718] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:39.723 [193/718] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:39.723 [194/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:39.723 [195/718] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:39.723 [196/718] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:39.723 [197/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:39.723 [198/718] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:39.723 [199/718] Compiling C object 
lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:39.723 [200/718] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o 00:01:39.723 [201/718] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:39.723 [202/718] Linking static target lib/librte_bbdev.a 00:01:39.723 [203/718] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:39.723 [204/718] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:39.723 [205/718] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:39.723 [206/718] Linking static target lib/librte_gpudev.a 00:01:39.723 [207/718] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:39.723 [208/718] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:39.723 [209/718] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:39.723 [210/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:39.723 [211/718] Linking static target lib/librte_stack.a 00:01:39.723 [212/718] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:39.723 [213/718] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:39.723 [214/718] Linking static target lib/librte_gro.a 00:01:39.723 [215/718] Linking static target lib/librte_rcu.a 00:01:39.723 [216/718] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:39.723 [217/718] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:01:39.723 [218/718] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:39.723 [219/718] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:39.723 [220/718] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:39.723 [221/718] Linking static target lib/librte_regexdev.a 00:01:39.723 [222/718] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:39.723 [223/718] Linking static target 
lib/librte_mempool.a 00:01:39.723 [224/718] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:39.724 [225/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:39.724 [226/718] Compiling C object lib/librte_table.a.p/table_table_log.c.o 00:01:39.724 [227/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:39.724 [228/718] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:39.724 [229/718] Generating symbol file lib/librte_log.so.25.0.p/librte_log.so.25.0.symbols 00:01:39.724 [230/718] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:39.724 [231/718] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:39.724 [232/718] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:01:39.983 [233/718] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:39.983 [234/718] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:01:39.983 [235/718] Linking target lib/librte_argparse.so.25.0 00:01:39.983 [236/718] Linking target lib/librte_kvargs.so.25.0 00:01:39.983 [237/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:39.983 [238/718] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:39.983 [239/718] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:39.983 [240/718] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:39.983 [241/718] Linking static target lib/librte_rawdev.a 00:01:39.983 [242/718] Linking static target lib/librte_gso.a 00:01:39.983 [243/718] Linking static target lib/librte_eal.a 00:01:39.983 [244/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:39.983 [245/718] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:39.983 [246/718] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:39.983 [247/718] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:39.983 [248/718] Linking static target lib/librte_bpf.a 00:01:39.983 [249/718] 
Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:39.983 [250/718] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:39.983 [251/718] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:39.983 [252/718] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:39.983 [253/718] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:39.983 [254/718] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:39.983 [255/718] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:39.983 [256/718] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:39.983 [257/718] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:39.983 [258/718] Linking static target lib/librte_pcapng.a 00:01:39.983 [259/718] Linking static target lib/librte_ip_frag.a 00:01:39.983 [260/718] Linking static target lib/librte_reorder.a 00:01:39.983 [261/718] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:39.983 [262/718] Linking static target lib/librte_telemetry.a 00:01:39.983 [263/718] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.983 [264/718] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:39.983 [265/718] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:01:39.983 [266/718] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:39.983 [267/718] Generating symbol file lib/librte_kvargs.so.25.0.p/librte_kvargs.so.25.0.symbols 00:01:39.983 [268/718] Linking static target lib/librte_power.a 00:01:39.983 [269/718] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.983 [270/718] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:01:39.983 [271/718] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:39.983 [272/718] Generating 
lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.983 [273/718] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.984 [274/718] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:01:39.984 [275/718] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:40.248 [276/718] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:01:40.248 [277/718] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:40.248 [278/718] Linking static target lib/librte_security.a 00:01:40.248 [279/718] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.248 [280/718] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.248 [281/718] Linking static target lib/librte_mldev.a 00:01:40.248 [282/718] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:40.248 [283/718] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.248 [284/718] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:40.248 [285/718] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:01:40.248 [286/718] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.248 [287/718] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:01:40.248 [288/718] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:40.248 [289/718] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:40.248 [290/718] Linking static target lib/librte_rib.a 00:01:40.248 [291/718] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:01:40.248 [292/718] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:40.248 [293/718] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 
00:01:40.248 [294/718] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:40.248 [295/718] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:40.248 [296/718] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:40.248 [297/718] Linking static target lib/librte_lpm.a 00:01:40.248 [298/718] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:40.248 [299/718] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.248 [300/718] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:40.248 [301/718] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:40.248 [302/718] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:40.248 [303/718] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.248 [304/718] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:40.248 [305/718] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:40.248 [306/718] Linking static target lib/librte_mbuf.a 00:01:40.248 [307/718] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:40.248 [308/718] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.513 [309/718] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:01:40.513 [310/718] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:40.513 [311/718] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:40.513 [312/718] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:40.513 [313/718] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:40.513 [314/718] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.513 [315/718] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:40.513 [316/718] Compiling C object 
lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:40.513 [317/718] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:40.513 [318/718] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:40.513 [319/718] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.513 [320/718] Generating app/graph/commands_hdr with a custom command (wrapped by meson to capture output) 00:01:40.513 [321/718] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:40.513 [322/718] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:40.513 [323/718] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:01:40.513 [324/718] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.513 [325/718] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:40.513 [326/718] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:40.513 [327/718] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:40.513 [328/718] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:40.513 [329/718] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:40.514 [330/718] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:40.514 [331/718] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:01:40.514 [332/718] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.514 [333/718] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:40.514 [334/718] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:40.514 [335/718] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:40.514 [336/718] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.514 [337/718] Compiling C 
object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:40.514 [338/718] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:40.514 [339/718] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:40.514 [340/718] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:40.514 [341/718] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:01:40.514 [342/718] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:40.772 [343/718] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:40.772 [344/718] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:01:40.772 [345/718] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:40.772 [346/718] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:40.772 [347/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:40.772 [348/718] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:40.772 [349/718] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:40.772 [350/718] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:40.772 [351/718] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:01:40.772 [352/718] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.772 [353/718] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:40.772 [354/718] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:40.772 [355/718] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:40.772 [356/718] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:40.772 [357/718] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:40.772 [358/718] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:40.772 [359/718] Linking static target lib/librte_efd.a 00:01:40.772 [360/718] Compiling C 
object lib/librte_port.a.p/port_rte_swx_port_ring.c.o
00:01:40.772 [361/718] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o
00:01:40.772 [362/718] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o
00:01:40.772 [363/718] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:40.772 [364/718] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o
00:01:40.772 [365/718] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o
00:01:40.772 [366/718] Linking static target lib/librte_graph.a
00:01:40.772 [367/718] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o
00:01:40.772 [368/718] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:01:40.772 [369/718] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output)
00:01:40.772 [370/718] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:01:40.772 [371/718] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o
00:01:40.772 [372/718] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:01:40.772 [373/718] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o
00:01:41.034 [374/718] Linking static target lib/librte_fib.a
00:01:41.034 [375/718] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o
00:01:41.034 [376/718] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output)
00:01:41.034 [377/718] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:01:41.034 [378/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o
00:01:41.034 [379/718] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o
00:01:41.034 [380/718] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o
00:01:41.034 [381/718] Linking static target drivers/libtmp_rte_bus_vdev.a
00:01:41.034 [382/718] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o
00:01:41.034 [383/718] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:01:41.034 [384/718] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:01:41.034 [385/718] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o
00:01:41.034 [386/718] Linking static target lib/librte_pdump.a
00:01:41.034 [387/718] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o
00:01:41.034 [388/718] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:01:41.034 [389/718] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o
00:01:41.034 [390/718] Linking target lib/librte_telemetry.so.25.0
00:01:41.034 [391/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o
00:01:41.034 [392/718] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o
00:01:41.034 [393/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o
00:01:41.034 [394/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o
00:01:41.034 [395/718] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o
00:01:41.034 [396/718] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:01:41.034 [397/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o
00:01:41.034 [398/718] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o
00:01:41.034 [399/718] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o
00:01:41.034 [400/718] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:01:41.034 [401/718] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o
00:01:41.034 [402/718] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:01:41.034 [403/718] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o
00:01:41.299 [404/718] Compiling C object app/dpdk-graph.p/graph_cli.c.o
00:01:41.299 [405/718] Linking static target drivers/libtmp_rte_bus_pci.a
00:01:41.299 [406/718] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o
00:01:41.299 [407/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o
00:01:41.299 [408/718] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o
00:01:41.299 [409/718] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o
00:01:41.299 [410/718] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:01:41.299 [411/718] Generating symbol file lib/librte_telemetry.so.25.0.p/librte_telemetry.so.25.0.symbols
00:01:41.299 [412/718] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:41.299 [413/718] Compiling C object app/dpdk-graph.p/graph_conn.c.o
00:01:41.299 [414/718] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o
00:01:41.299 [415/718] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o
00:01:41.299 [416/718] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o
00:01:41.299 [417/718] Linking static target lib/librte_cryptodev.a
00:01:41.299 [418/718] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output)
00:01:41.299 [419/718] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o
00:01:41.299 [420/718] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:01:41.299 [421/718] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:41.299 [422/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o
00:01:41.299 [423/718] Compiling C object app/dpdk-graph.p/graph_l2fwd.c.o
00:01:41.299 [424/718] Compiling C object drivers/librte_bus_vdev.so.25.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:01:41.299 [425/718] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:01:41.299 [426/718] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o
00:01:41.299 [427/718] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o
00:01:41.299 [428/718] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o
00:01:41.299 [429/718] Linking static target drivers/librte_bus_vdev.a
00:01:41.299 [430/718] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o
00:01:41.299 [431/718] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o
00:01:41.299 [432/718] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:01:41.299 [433/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o
00:01:41.299 [434/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o
00:01:41.299 [435/718] Linking static target lib/librte_table.a
00:01:41.299 [436/718] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o
00:01:41.299 [437/718] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:01:41.299 [438/718] Compiling C object app/dpdk-graph.p/graph_graph.c.o
00:01:41.299 [439/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o
00:01:41.299 [440/718] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:01:41.299 [441/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o
00:01:41.299 [442/718] Linking static target drivers/libtmp_rte_mempool_ring.a
00:01:41.557 [443/718] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o
00:01:41.557 [444/718] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o
00:01:41.557 [445/718] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o
00:01:41.557 [446/718] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o
00:01:41.557 [447/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o
00:01:41.557 [448/718] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o
00:01:41.557 [449/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o
00:01:41.557 [450/718] Linking static target lib/librte_sched.a
00:01:41.557 [451/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o
00:01:41.557 [452/718] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output)
00:01:41.557 [453/718] Compiling C object app/dpdk-graph.p/graph_mempool.c.o
00:01:41.557 [454/718] Compiling C object app/dpdk-graph.p/graph_main.c.o
00:01:41.557 [455/718] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output)
00:01:41.557 [456/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o
00:01:41.557 [457/718] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o
00:01:41.557 [458/718] Compiling C object app/dpdk-graph.p/graph_utils.c.o
00:01:41.557 [459/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o
00:01:41.557 [460/718] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o
00:01:41.557 [461/718] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:01:41.557 [462/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o
00:01:41.557 [463/718] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:01:41.557 [464/718] Compiling C object drivers/librte_bus_pci.so.25.0.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:01:41.557 [465/718] Linking static target drivers/librte_bus_pci.a
00:01:41.557 [466/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o
00:01:41.557 [467/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o
00:01:41.557 [468/718] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o
00:01:41.557 [469/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o
00:01:41.557 [470/718] Compiling C object app/dpdk-graph.p/graph_neigh.c.o
00:01:41.557 [471/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o
00:01:41.557 [472/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o
00:01:41.557 [473/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o
00:01:41.557 [474/718] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o
00:01:41.557 [475/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o
00:01:41.557 [476/718] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:41.557 [477/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o
00:01:41.557 [478/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o
00:01:41.557 [479/718] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o
00:01:41.557 [480/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o
00:01:41.557 [481/718] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o
00:01:41.557 [482/718] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:01:41.557 [483/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o
00:01:41.557 [484/718] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o
00:01:41.557 [485/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o
00:01:41.557 [486/718] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:01:41.557 [487/718] Compiling C object drivers/librte_mempool_ring.so.25.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:01:41.557 [488/718] Linking static target drivers/librte_mempool_ring.a
00:01:41.557 [489/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o
00:01:41.557 [490/718] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o
00:01:41.557 [491/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o
00:01:41.817 [492/718] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output)
00:01:41.817 [493/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o
00:01:41.817 [494/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o
00:01:41.817 [495/718] Linking static target lib/librte_member.a
00:01:41.817 [496/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o
00:01:41.817 [497/718] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o
00:01:41.817 [498/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o
00:01:41.817 [499/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o
00:01:41.817 [500/718] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o
00:01:41.817 [501/718] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o
00:01:41.817 [502/718] Linking static target lib/librte_pdcp.a
00:01:41.817 [503/718] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o
00:01:41.817 [504/718] Linking static target lib/acl/libavx2_tmp.a
00:01:41.817 [505/718] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:01:41.817 [506/718] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o
00:01:41.817 [507/718] Linking static target lib/librte_hash.a
00:01:41.817 [508/718] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o
00:01:41.817 [509/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o
00:01:41.817 [510/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o
00:01:41.817 [511/718] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o
00:01:41.817 [512/718] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o
00:01:41.817 [513/718] Linking static target lib/librte_ipsec.a
00:01:41.817 [514/718] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o
00:01:41.817 [515/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o
00:01:41.817 [516/718] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o
00:01:41.817 [517/718] Linking static target lib/librte_node.a
00:01:41.817 [518/718] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o
00:01:41.817 [519/718] Compiling C object app/dpdk-pdump.p/pdump_main.c.o
00:01:41.817 [520/718] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o
00:01:41.817 [521/718] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o
00:01:41.817 [522/718] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o
00:01:41.817 [523/718] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o
00:01:41.817 [524/718] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o
00:01:41.817 [525/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o
00:01:41.817 [526/718] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o
00:01:41.817 [527/718] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output)
00:01:41.817 [528/718] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o
00:01:41.817 [529/718] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o
00:01:41.817 [530/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o
00:01:41.817 [531/718] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o
00:01:41.817 [532/718] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o
00:01:42.077 [533/718] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o
00:01:42.077 [534/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o
00:01:42.077 [535/718] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o
00:01:42.077 [536/718] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o
00:01:42.077 [537/718] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o
00:01:42.077 [538/718] Linking static target lib/librte_port.a
00:01:42.077 [539/718] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o
00:01:42.077 [540/718] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o
00:01:42.077 [541/718] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o
00:01:42.077 [542/718] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o
00:01:42.077 [543/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o
00:01:42.077 [544/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o
00:01:42.077 [545/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o
00:01:42.077 [546/718] Compiling C object app/dpdk-test-security-perf.p/test_test_security_proto.c.o
00:01:42.077 [547/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o
00:01:42.077 [548/718] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o
00:01:42.077 [549/718] Linking static target lib/librte_eventdev.a
00:01:42.077 [550/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o
00:01:42.077 [551/718] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o
00:01:42.077 [552/718] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o
00:01:42.077 [553/718] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output)
00:01:42.077 [554/718] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o
00:01:42.077 [555/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o
00:01:42.077 [556/718] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o
00:01:42.077 [557/718] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o
00:01:42.077 [558/718] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o
00:01:42.077 [559/718] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:42.077 [560/718] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o
00:01:42.338 [561/718] Linking static target lib/librte_acl.a
00:01:42.338 [562/718] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output)
00:01:42.338 [563/718] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o
00:01:42.338 [564/718] Linking static target drivers/net/i40e/libi40e_avx512_lib.a
00:01:42.338 [565/718] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output)
00:01:42.338 [566/718] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output)
00:01:42.338 [567/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o
00:01:42.338 [568/718] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o
00:01:42.338 [569/718] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output)
00:01:42.338 [570/718] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o
00:01:42.338 [571/718] Linking static target drivers/net/i40e/base/libi40e_base.a
00:01:42.338 [572/718] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o
00:01:42.338 [573/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o
00:01:42.338 [574/718] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:42.338 [575/718] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o
00:01:42.599 [576/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o
00:01:42.599 [577/718] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o
00:01:42.599 [578/718] Linking static target drivers/net/i40e/libi40e_avx2_lib.a
00:01:42.599 [579/718] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o
00:01:42.599 [580/718] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output)
00:01:42.862 [581/718] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:01:42.862 [582/718] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o
00:01:42.862 [583/718] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output)
00:01:42.862 [584/718] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o
00:01:43.123 [585/718] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:43.123 [586/718] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o
00:01:43.123 [587/718] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:01:43.384 [588/718] Linking static target lib/librte_ethdev.a
00:01:43.384 [589/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o
00:01:43.646 [590/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o
00:01:43.646 [591/718] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o
00:01:43.908 [592/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o
00:01:43.908 [593/718] Linking static target drivers/libtmp_rte_net_i40e.a
00:01:44.169 [594/718] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o
00:01:44.169 [595/718] Generating drivers/rte_net_i40e.pmd.c with a custom command
00:01:44.430 [596/718] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:01:44.430 [597/718] Compiling C object drivers/librte_net_i40e.so.25.0.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:01:44.430 [598/718] Linking static target drivers/librte_net_i40e.a
00:01:44.430 [599/718] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:01:45.372 [600/718] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o
00:01:45.372 [601/718] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output)
00:01:45.372 [602/718] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o
00:01:45.633 [603/718] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:49.842 [604/718] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o
00:01:49.842 [605/718] Linking static target lib/librte_pipeline.a
00:01:51.754 [606/718] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:01:51.754 [607/718] Linking static target lib/librte_vhost.a
00:01:51.754 [608/718] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:01:51.754 [609/718] Linking target lib/librte_eal.so.25.0
00:01:51.754 [610/718] Generating symbol file lib/librte_eal.so.25.0.p/librte_eal.so.25.0.symbols
00:01:51.754 [611/718] Linking target lib/librte_meter.so.25.0
00:01:51.754 [612/718] Linking target lib/librte_ring.so.25.0
00:01:51.754 [613/718] Linking target lib/librte_pci.so.25.0
00:01:51.754 [614/718] Linking target lib/librte_jobstats.so.25.0
00:01:51.754 [615/718] Linking target lib/librte_timer.so.25.0
00:01:51.754 [616/718] Linking target lib/librte_cfgfile.so.25.0
00:01:51.754 [617/718] Linking target lib/librte_stack.so.25.0
00:01:51.754 [618/718] Linking target lib/librte_dmadev.so.25.0
00:01:51.754 [619/718] Linking target lib/librte_rawdev.so.25.0
00:01:51.754 [620/718] Linking target drivers/librte_bus_vdev.so.25.0
00:01:51.754 [621/718] Linking target lib/librte_acl.so.25.0
00:01:51.754 [622/718] Linking target app/dpdk-test-fib
00:01:51.754 [623/718] Linking target app/dpdk-test-dma-perf
00:01:51.754 [624/718] Linking target app/dpdk-test-compress-perf
00:01:51.754 [625/718] Linking target app/dpdk-dumpcap
00:01:51.754 [626/718] Linking target app/dpdk-test-regex
00:01:51.754 [627/718] Linking target app/dpdk-test-sad
00:01:51.754 [628/718] Linking target app/dpdk-test-flow-perf
00:01:51.754 [629/718] Linking target app/dpdk-test-security-perf
00:01:51.754 [630/718] Linking target app/dpdk-testpmd
00:01:51.754 [631/718] Generating symbol file lib/librte_dmadev.so.25.0.p/librte_dmadev.so.25.0.symbols
00:01:51.754 [632/718] Linking target app/dpdk-pdump
00:01:51.754 [633/718] Generating symbol file lib/librte_ring.so.25.0.p/librte_ring.so.25.0.symbols
00:01:51.754 [634/718] Generating symbol file lib/librte_acl.so.25.0.p/librte_acl.so.25.0.symbols
00:01:51.754 [635/718] Generating symbol file lib/librte_timer.so.25.0.p/librte_timer.so.25.0.symbols
00:01:51.754 [636/718] Generating symbol file lib/librte_pci.so.25.0.p/librte_pci.so.25.0.symbols
00:01:51.754 [637/718] Linking target app/dpdk-test-cmdline
00:01:51.754 [638/718] Linking target app/dpdk-test-acl
00:01:51.754 [639/718] Linking target app/dpdk-test-gpudev
00:01:52.013 [640/718] Linking target app/dpdk-test-bbdev
00:01:52.013 [641/718] Linking target app/dpdk-test-pipeline
00:01:52.013 [642/718] Generating symbol file drivers/librte_bus_vdev.so.25.0.p/librte_bus_vdev.so.25.0.symbols
00:01:52.013 [643/718] Linking target app/dpdk-proc-info
00:01:52.013 [644/718] Linking target app/dpdk-test-mldev
00:01:52.013 [645/718] Linking target app/dpdk-graph
00:01:52.013 [646/718] Linking target app/dpdk-test-crypto-perf
00:01:52.013 [647/718] Linking target app/dpdk-test-eventdev
00:01:52.013 [648/718] Generating symbol file lib/librte_meter.so.25.0.p/librte_meter.so.25.0.symbols
00:01:52.013 [649/718] Linking target lib/librte_rcu.so.25.0
00:01:52.013 [650/718] Linking target lib/librte_mempool.so.25.0
00:01:52.013 [651/718] Linking target drivers/librte_bus_pci.so.25.0
00:01:52.013 [652/718] Generating symbol file lib/librte_rcu.so.25.0.p/librte_rcu.so.25.0.symbols
00:01:52.013 [653/718] Generating symbol file lib/librte_mempool.so.25.0.p/librte_mempool.so.25.0.symbols
00:01:52.013 [654/718] Generating symbol file drivers/librte_bus_pci.so.25.0.p/librte_bus_pci.so.25.0.symbols
00:01:52.013 [655/718] Linking target drivers/librte_mempool_ring.so.25.0
00:01:52.013 [656/718] Linking target lib/librte_rib.so.25.0
00:01:52.013 [657/718] Linking target lib/librte_mbuf.so.25.0
00:01:52.274 [658/718] Generating symbol file lib/librte_rib.so.25.0.p/librte_rib.so.25.0.symbols
00:01:52.274 [659/718] Generating symbol file lib/librte_mbuf.so.25.0.p/librte_mbuf.so.25.0.symbols
00:01:52.274 [660/718] Linking target lib/librte_net.so.25.0
00:01:52.274 [661/718] Linking target lib/librte_fib.so.25.0
00:01:52.274 [662/718] Linking target lib/librte_mldev.so.25.0
00:01:52.274 [663/718] Linking target lib/librte_bbdev.so.25.0
00:01:52.274 [664/718] Linking target lib/librte_regexdev.so.25.0
00:01:52.274 [665/718] Linking target lib/librte_distributor.so.25.0
00:01:52.274 [666/718] Linking target lib/librte_compressdev.so.25.0
00:01:52.274 [667/718] Linking target lib/librte_gpudev.so.25.0
00:01:52.274 [668/718] Linking target lib/librte_cryptodev.so.25.0
00:01:52.274 [669/718] Linking target lib/librte_reorder.so.25.0
00:01:52.274 [670/718] Linking target lib/librte_sched.so.25.0
00:01:52.535 [671/718] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:52.535 [672/718] Generating symbol file lib/librte_reorder.so.25.0.p/librte_reorder.so.25.0.symbols
00:01:52.535 [673/718] Generating symbol file lib/librte_net.so.25.0.p/librte_net.so.25.0.symbols
00:01:52.535 [674/718] Generating symbol file lib/librte_cryptodev.so.25.0.p/librte_cryptodev.so.25.0.symbols
00:01:52.535 [675/718] Generating symbol file lib/librte_sched.so.25.0.p/librte_sched.so.25.0.symbols
00:01:52.535 [676/718] Linking target lib/librte_security.so.25.0
00:01:52.535 [677/718] Linking target lib/librte_cmdline.so.25.0
00:01:52.535 [678/718] Linking target lib/librte_hash.so.25.0
00:01:52.535 [679/718] Linking target lib/librte_ethdev.so.25.0
00:01:52.535 [680/718] Generating symbol file lib/librte_security.so.25.0.p/librte_security.so.25.0.symbols
00:01:52.795 [681/718] Generating symbol file lib/librte_hash.so.25.0.p/librte_hash.so.25.0.symbols
00:01:52.795 [682/718] Generating symbol file lib/librte_ethdev.so.25.0.p/librte_ethdev.so.25.0.symbols
00:01:52.795 [683/718] Linking target lib/librte_pdcp.so.25.0
00:01:52.795 [684/718] Linking target lib/librte_efd.so.25.0
00:01:52.795 [685/718] Linking target lib/librte_lpm.so.25.0
00:01:52.795 [686/718] Linking target lib/librte_member.so.25.0
00:01:52.795 [687/718] Linking target lib/librte_ipsec.so.25.0
00:01:52.795 [688/718] Linking target lib/librte_metrics.so.25.0
00:01:52.795 [689/718] Linking target lib/librte_ip_frag.so.25.0
00:01:52.795 [690/718] Linking target lib/librte_gro.so.25.0
00:01:52.795 [691/718] Linking target lib/librte_pcapng.so.25.0
00:01:52.795 [692/718] Linking target lib/librte_gso.so.25.0
00:01:52.795 [693/718] Linking target lib/librte_bpf.so.25.0
00:01:52.795 [694/718] Linking target lib/librte_eventdev.so.25.0
00:01:52.795 [695/718] Linking target lib/librte_power.so.25.0
00:01:52.795 [696/718] Linking target drivers/librte_net_i40e.so.25.0
00:01:52.795 [697/718] Generating symbol file lib/librte_lpm.so.25.0.p/librte_lpm.so.25.0.symbols
00:01:52.795 [698/718] Generating symbol file lib/librte_ipsec.so.25.0.p/librte_ipsec.so.25.0.symbols
00:01:52.795 [699/718] Generating symbol file lib/librte_eventdev.so.25.0.p/librte_eventdev.so.25.0.symbols
00:01:52.795 [700/718] Generating symbol file lib/librte_ip_frag.so.25.0.p/librte_ip_frag.so.25.0.symbols
00:01:52.795 [701/718] Generating symbol file lib/librte_bpf.so.25.0.p/librte_bpf.so.25.0.symbols
00:01:52.795 [702/718] Generating symbol file lib/librte_pcapng.so.25.0.p/librte_pcapng.so.25.0.symbols
00:01:52.795 [703/718] Generating symbol file lib/librte_metrics.so.25.0.p/librte_metrics.so.25.0.symbols
00:01:53.057 [704/718] Linking target lib/librte_dispatcher.so.25.0
00:01:53.057 [705/718] Linking target lib/librte_port.so.25.0
00:01:53.057 [706/718] Linking target lib/librte_pdump.so.25.0
00:01:53.057 [707/718] Linking target lib/librte_graph.so.25.0
00:01:53.057 [708/718] Linking target lib/librte_bitratestats.so.25.0
00:01:53.057 [709/718] Linking target lib/librte_latencystats.so.25.0
00:01:53.057 [710/718] Generating symbol file lib/librte_graph.so.25.0.p/librte_graph.so.25.0.symbols
00:01:53.057 [711/718] Generating symbol file lib/librte_port.so.25.0.p/librte_port.so.25.0.symbols
00:01:53.057 [712/718] Linking target lib/librte_node.so.25.0
00:01:53.057 [713/718] Linking target lib/librte_table.so.25.0
00:01:53.319 [714/718] Generating symbol file lib/librte_table.so.25.0.p/librte_table.so.25.0.symbols
00:01:53.580 [715/718] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:01:53.580 [716/718] Linking target lib/librte_vhost.so.25.0
00:01:54.968 [717/718] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output)
00:01:55.229 [718/718] Linking target lib/librte_pipeline.so.25.0
00:01:55.229 15:19:34 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s
00:01:55.229 15:19:34 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]]
00:01:55.229 15:19:34 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j144 install
00:01:55.229 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp'
00:01:55.229 [0/1] Installing files.
00:01:55.497 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints
00:01:55.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints/memory.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints
00:01:55.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints/cpu.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints
00:01:55.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints/counters.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints
00:01:55.497 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples
00:01:55.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq
00:01:55.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq
00:01:55.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering
00:01:55.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering
00:01:55.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton
00:01:55.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton
00:01:55.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld
00:01:55.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld
00:01:55.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:01:55.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:01:55.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:01:55.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:01:55.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:01:55.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:01:55.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:01:55.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:01:55.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:01:55.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:01:55.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipv6_addr_swap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:55.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:55.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:55.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:55.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:55.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:55.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:55.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:55.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:55.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:55.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:55.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:55.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:55.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipv6_addr_swap.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:55.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:55.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:55.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:55.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:55.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:55.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:55.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:55.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:55.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:55.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:55.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:55.497
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:55.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:55.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:55.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:55.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:55.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:55.498 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:01:55.498 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:55.498 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:55.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:55.499 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:55.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:55.500 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:55.500 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 
00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:55.500 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:55.500 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:55.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:55.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:55.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:55.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:55.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:55.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:55.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:55.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:55.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:55.501 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:55.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:55.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:55.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:55.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:55.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:55.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:55.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:55.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:55.501 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:55.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:55.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:55.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:55.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:55.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:55.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:55.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:55.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:55.501 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:55.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:55.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:55.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:55.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:55.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:55.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:55.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:55.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:55.501 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:55.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:55.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:55.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:55.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:01:55.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:01:55.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:55.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:55.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:55.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 
00:01:55.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:55.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:55.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:55.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:55.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:55.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:55.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:55.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:55.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:55.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:55.501 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:55.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:55.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:55.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:55.502 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 
00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:01:55.502 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:01:55.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:01:55.502 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.502 Installing lib/librte_log.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.502 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.502 Installing lib/librte_kvargs.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.502 Installing lib/librte_argparse.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.502 Installing lib/librte_argparse.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.502 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.502 Installing lib/librte_telemetry.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_eal.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_ring.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_rcu.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_mempool.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_mempool.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_mbuf.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_net.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_meter.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_ethdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_pci.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_cmdline.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_metrics.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_hash.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_timer.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_timer.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_acl.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_bbdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_bitratestats.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_bpf.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_cfgfile.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_compressdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_cryptodev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_distributor.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing 
lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_dmadev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_efd.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_eventdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_dispatcher.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_gpudev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_gro.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_gso.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_ip_frag.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_jobstats.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing 
lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_latencystats.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.503 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.769 Installing lib/librte_lpm.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.769 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.769 Installing lib/librte_member.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.769 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.769 Installing lib/librte_pcapng.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.769 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.769 Installing lib/librte_power.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.769 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.769 Installing lib/librte_rawdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.769 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.769 Installing lib/librte_regexdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.769 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.769 Installing lib/librte_mldev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.769 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.769 Installing lib/librte_rib.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.769 Installing 
lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.769 Installing lib/librte_reorder.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.769 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.769 Installing lib/librte_sched.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.769 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.769 Installing lib/librte_security.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.769 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.769 Installing lib/librte_stack.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.769 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.769 Installing lib/librte_vhost.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.769 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.769 Installing lib/librte_ipsec.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.769 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.769 Installing lib/librte_pdcp.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.769 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.769 Installing lib/librte_fib.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.769 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.769 Installing lib/librte_port.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.769 Installing lib/librte_pdump.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.769 Installing lib/librte_pdump.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.769 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.769 Installing lib/librte_table.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.769 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.769 Installing lib/librte_pipeline.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.769 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.769 Installing lib/librte_graph.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.769 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.769 Installing lib/librte_node.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.769 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.769 Installing drivers/librte_bus_pci.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0 00:01:55.769 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.769 Installing drivers/librte_bus_vdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0 00:01:55.769 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.769 Installing drivers/librte_mempool_ring.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0 00:01:55.769 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.769 Installing drivers/librte_net_i40e.so.25.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0 00:01:55.769 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:55.769 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:55.769 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:55.769 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:55.769 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:55.769 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:55.769 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:55.769 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:55.769 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:55.769 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:55.769 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:55.769 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:55.769 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:55.769 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:55.769 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:55.769 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:55.769 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:55.769 Installing app/dpdk-test-regex to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:55.769 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:55.769 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:55.769 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.769 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.769 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.769 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/argparse/rte_argparse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.769 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.769 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:55.769 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:55.769 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:55.769 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:55.769 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:55.769 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:55.769 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:55.769 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:55.769 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:55.769 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:55.769 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:55.769 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:55.769 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.769 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.769 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.769 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.769 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.769 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.769 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.769 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.769 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.769 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ptr_compress/rte_ptr_compress.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.770 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.771 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.772 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.773 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.773 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.773 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.773 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.773 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.773 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.773 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.773 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.773 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.773 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.773 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.773 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.773 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.773 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.773 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.773 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.773 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.773 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.773 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.773 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.773 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.773 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.773 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.773 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.773 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.773 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.773 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.773 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.773 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.773 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.773 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.773 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.773 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.773 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.773 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.773 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.773 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.773 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.773 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:55.773 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:55.773 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:55.773 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:55.773 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:55.773 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:55.773 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry-exporter.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:55.773 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.773 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:01:55.773 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:01:55.773 Installing symlink pointing to librte_log.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.25 00:01:55.773 Installing symlink pointing to librte_log.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:01:55.773 Installing symlink pointing 
to librte_kvargs.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.25 00:01:55.773 Installing symlink pointing to librte_kvargs.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:01:55.773 Installing symlink pointing to librte_argparse.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_argparse.so.25 00:01:55.773 Installing symlink pointing to librte_argparse.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_argparse.so 00:01:55.773 Installing symlink pointing to librte_telemetry.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.25 00:01:55.773 Installing symlink pointing to librte_telemetry.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:01:55.773 Installing symlink pointing to librte_eal.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.25 00:01:55.773 Installing symlink pointing to librte_eal.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:01:55.773 Installing symlink pointing to librte_ring.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.25 00:01:55.773 Installing symlink pointing to librte_ring.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:01:55.773 Installing symlink pointing to librte_rcu.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.25 00:01:55.773 Installing symlink pointing to librte_rcu.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:01:55.773 Installing symlink pointing to librte_mempool.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.25 00:01:55.773 Installing symlink pointing to librte_mempool.so.25 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:01:55.773 Installing symlink pointing to librte_mbuf.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.25 00:01:55.773 Installing symlink pointing to librte_mbuf.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:01:55.773 Installing symlink pointing to librte_net.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.25 00:01:55.773 Installing symlink pointing to librte_net.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:01:55.773 Installing symlink pointing to librte_meter.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.25 00:01:55.773 Installing symlink pointing to librte_meter.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:01:55.773 Installing symlink pointing to librte_ethdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.25 00:01:55.773 Installing symlink pointing to librte_ethdev.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:01:55.773 Installing symlink pointing to librte_pci.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.25 00:01:55.774 Installing symlink pointing to librte_pci.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:01:55.774 Installing symlink pointing to librte_cmdline.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.25 00:01:55.774 Installing symlink pointing to librte_cmdline.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:01:55.774 Installing symlink pointing to librte_metrics.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.25 00:01:55.774 Installing symlink pointing to 
librte_metrics.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:01:55.774 Installing symlink pointing to librte_hash.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.25 00:01:55.774 Installing symlink pointing to librte_hash.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:01:55.774 Installing symlink pointing to librte_timer.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.25 00:01:55.774 Installing symlink pointing to librte_timer.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:01:55.774 Installing symlink pointing to librte_acl.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.25 00:01:55.774 Installing symlink pointing to librte_acl.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:01:55.774 Installing symlink pointing to librte_bbdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.25 00:01:55.774 Installing symlink pointing to librte_bbdev.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:01:55.774 Installing symlink pointing to librte_bitratestats.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.25 00:01:55.774 Installing symlink pointing to librte_bitratestats.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:01:55.774 Installing symlink pointing to librte_bpf.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.25 00:01:55.774 Installing symlink pointing to librte_bpf.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:01:55.774 Installing symlink pointing to librte_cfgfile.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.25 
00:01:55.774 Installing symlink pointing to librte_cfgfile.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:01:55.774 Installing symlink pointing to librte_compressdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.25 00:01:55.774 Installing symlink pointing to librte_compressdev.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:01:55.774 Installing symlink pointing to librte_cryptodev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.25 00:01:55.774 Installing symlink pointing to librte_cryptodev.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:01:55.774 Installing symlink pointing to librte_distributor.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.25 00:01:55.774 Installing symlink pointing to librte_distributor.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:01:55.774 Installing symlink pointing to librte_dmadev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.25 00:01:55.774 Installing symlink pointing to librte_dmadev.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:01:55.774 Installing symlink pointing to librte_efd.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.25 00:01:55.774 Installing symlink pointing to librte_efd.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:01:55.774 Installing symlink pointing to librte_eventdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.25 00:01:55.774 Installing symlink pointing to librte_eventdev.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:01:55.774 Installing symlink pointing to 
librte_dispatcher.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.25 00:01:55.774 Installing symlink pointing to librte_dispatcher.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:01:55.774 Installing symlink pointing to librte_gpudev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.25 00:01:55.774 Installing symlink pointing to librte_gpudev.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:01:55.774 Installing symlink pointing to librte_gro.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.25 00:01:55.774 Installing symlink pointing to librte_gro.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:01:55.774 Installing symlink pointing to librte_gso.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.25 00:01:55.774 Installing symlink pointing to librte_gso.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:01:55.774 Installing symlink pointing to librte_ip_frag.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.25 00:01:55.774 Installing symlink pointing to librte_ip_frag.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:01:55.774 Installing symlink pointing to librte_jobstats.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.25 00:01:55.774 Installing symlink pointing to librte_jobstats.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:01:55.774 Installing symlink pointing to librte_latencystats.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.25 00:01:55.774 Installing symlink pointing to librte_latencystats.so.25 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:01:55.774 Installing symlink pointing to librte_lpm.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.25 00:01:55.774 Installing symlink pointing to librte_lpm.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:01:55.774 Installing symlink pointing to librte_member.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.25 00:01:55.774 Installing symlink pointing to librte_member.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:01:55.774 Installing symlink pointing to librte_pcapng.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.25 00:01:55.774 Installing symlink pointing to librte_pcapng.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:01:55.774 Installing symlink pointing to librte_power.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.25 00:01:55.774 Installing symlink pointing to librte_power.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:01:55.774 Installing symlink pointing to librte_rawdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.25 00:01:55.774 Installing symlink pointing to librte_rawdev.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:01:55.774 Installing symlink pointing to librte_regexdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.25 00:01:55.774 Installing symlink pointing to librte_regexdev.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:01:55.774 Installing symlink pointing to librte_mldev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.25 00:01:55.774 
Installing symlink pointing to librte_mldev.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:01:55.774 Installing symlink pointing to librte_rib.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.25 00:01:55.774 Installing symlink pointing to librte_rib.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:01:55.774 Installing symlink pointing to librte_reorder.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.25 00:01:55.774 Installing symlink pointing to librte_reorder.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:01:55.774 Installing symlink pointing to librte_sched.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.25 00:01:55.774 Installing symlink pointing to librte_sched.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:01:55.774 Installing symlink pointing to librte_security.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.25 00:01:55.774 Installing symlink pointing to librte_security.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:01:55.774 Installing symlink pointing to librte_stack.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.25 00:01:55.774 Installing symlink pointing to librte_stack.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:01:55.774 Installing symlink pointing to librte_vhost.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.25 00:01:55.774 Installing symlink pointing to librte_vhost.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:01:55.774 Installing symlink pointing to librte_ipsec.so.25.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.25 00:01:55.774 Installing symlink pointing to librte_ipsec.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:01:55.774 Installing symlink pointing to librte_pdcp.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.25 00:01:55.774 Installing symlink pointing to librte_pdcp.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:01:55.774 Installing symlink pointing to librte_fib.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.25 00:01:55.774 Installing symlink pointing to librte_fib.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:01:55.774 Installing symlink pointing to librte_port.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.25 00:01:55.774 Installing symlink pointing to librte_port.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:01:55.774 Installing symlink pointing to librte_pdump.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.25 00:01:55.774 Installing symlink pointing to librte_pdump.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:01:55.774 Installing symlink pointing to librte_table.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.25 00:01:55.774 Installing symlink pointing to librte_table.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:01:55.774 Installing symlink pointing to librte_pipeline.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.25 00:01:55.774 Installing symlink pointing to librte_pipeline.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:01:55.774 Installing symlink pointing to 
librte_graph.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.25 00:01:55.774 Installing symlink pointing to librte_graph.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:01:55.774 Installing symlink pointing to librte_node.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.25 00:01:55.774 Installing symlink pointing to librte_node.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:01:55.774 Installing symlink pointing to librte_bus_pci.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so.25 00:01:55.774 Installing symlink pointing to librte_bus_pci.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so 00:01:55.774 Installing symlink pointing to librte_bus_vdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so.25 00:01:55.774 Installing symlink pointing to librte_bus_vdev.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so 00:01:55.774 Installing symlink pointing to librte_mempool_ring.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so.25 00:01:56.036 './librte_bus_pci.so' -> 'dpdk/pmds-25.0/librte_bus_pci.so' 00:01:56.036 './librte_bus_pci.so.25' -> 'dpdk/pmds-25.0/librte_bus_pci.so.25' 00:01:56.036 './librte_bus_pci.so.25.0' -> 'dpdk/pmds-25.0/librte_bus_pci.so.25.0' 00:01:56.036 './librte_bus_vdev.so' -> 'dpdk/pmds-25.0/librte_bus_vdev.so' 00:01:56.036 './librte_bus_vdev.so.25' -> 'dpdk/pmds-25.0/librte_bus_vdev.so.25' 00:01:56.036 './librte_bus_vdev.so.25.0' -> 'dpdk/pmds-25.0/librte_bus_vdev.so.25.0' 00:01:56.036 './librte_mempool_ring.so' -> 'dpdk/pmds-25.0/librte_mempool_ring.so' 00:01:56.036 './librte_mempool_ring.so.25' -> 
'dpdk/pmds-25.0/librte_mempool_ring.so.25' 00:01:56.036 './librte_mempool_ring.so.25.0' -> 'dpdk/pmds-25.0/librte_mempool_ring.so.25.0' 00:01:56.036 './librte_net_i40e.so' -> 'dpdk/pmds-25.0/librte_net_i40e.so' 00:01:56.036 './librte_net_i40e.so.25' -> 'dpdk/pmds-25.0/librte_net_i40e.so.25' 00:01:56.036 './librte_net_i40e.so.25.0' -> 'dpdk/pmds-25.0/librte_net_i40e.so.25.0' 00:01:56.036 Installing symlink pointing to librte_mempool_ring.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so 00:01:56.036 Installing symlink pointing to librte_net_i40e.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so.25 00:01:56.036 Installing symlink pointing to librte_net_i40e.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so 00:01:56.036 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-25.0' 00:01:56.036 15:19:35 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:01:56.036 15:19:35 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:56.036 00:01:56.036 real 0m25.975s 00:01:56.036 user 7m45.425s 00:01:56.036 sys 4m36.400s 00:01:56.036 15:19:35 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:56.036 15:19:35 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:01:56.036 ************************************ 00:01:56.036 END TEST build_native_dpdk 00:01:56.036 ************************************ 00:01:56.036 15:19:35 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:56.036 15:19:35 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:56.036 15:19:35 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:56.036 15:19:35 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:56.036 15:19:35 -- spdk/autobuild.sh@57 
-- $ [[ 0 -eq 1 ]] 00:01:56.036 15:19:35 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:56.036 15:19:35 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:56.036 15:19:35 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:01:56.036 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:01:56.297 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:56.297 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:56.297 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:56.870 Using 'verbs' RDMA provider 00:02:12.729 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:24.979 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:25.500 Creating mk/config.mk...done. 00:02:25.500 Creating mk/cc.flags.mk...done. 00:02:25.500 Type 'make' to build. 00:02:25.500 15:20:04 -- spdk/autobuild.sh@70 -- $ run_test make make -j144 00:02:25.500 15:20:04 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:25.500 15:20:04 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:25.500 15:20:04 -- common/autotest_common.sh@10 -- $ set +x 00:02:25.761 ************************************ 00:02:25.761 START TEST make 00:02:25.761 ************************************ 00:02:25.761 15:20:04 make -- common/autotest_common.sh@1125 -- $ make -j144 00:02:26.023 make[1]: Nothing to be done for 'all'. 
00:02:27.940 The Meson build system 00:02:27.940 Version: 1.5.0 00:02:27.940 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:27.940 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:27.940 Build type: native build 00:02:27.940 Project name: libvfio-user 00:02:27.940 Project version: 0.0.1 00:02:27.940 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:27.940 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:27.940 Host machine cpu family: x86_64 00:02:27.940 Host machine cpu: x86_64 00:02:27.940 Run-time dependency threads found: YES 00:02:27.940 Library dl found: YES 00:02:27.940 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:27.940 Run-time dependency json-c found: YES 0.17 00:02:27.940 Run-time dependency cmocka found: YES 1.1.7 00:02:27.940 Program pytest-3 found: NO 00:02:27.940 Program flake8 found: NO 00:02:27.940 Program misspell-fixer found: NO 00:02:27.940 Program restructuredtext-lint found: NO 00:02:27.940 Program valgrind found: YES (/usr/bin/valgrind) 00:02:27.940 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:27.940 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:27.940 Compiler for C supports arguments -Wwrite-strings: YES 00:02:27.940 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:27.940 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:27.940 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:27.940 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:27.940 Build targets in project: 8 00:02:27.940 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:27.940 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:27.940 00:02:27.940 libvfio-user 0.0.1 00:02:27.940 00:02:27.940 User defined options 00:02:27.940 buildtype : debug 00:02:27.940 default_library: shared 00:02:27.940 libdir : /usr/local/lib 00:02:27.940 00:02:27.940 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:27.940 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:28.199 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:28.199 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:28.199 [3/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:28.199 [4/37] Compiling C object samples/null.p/null.c.o 00:02:28.199 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:28.199 [6/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:28.199 [7/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:28.199 [8/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:28.199 [9/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:28.199 [10/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:28.199 [11/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:28.199 [12/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:28.199 [13/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:28.199 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:28.199 [15/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:28.199 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:28.199 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:28.199 [18/37] Compiling C 
object test/unit_tests.p/.._lib_dma.c.o 00:02:28.199 [19/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:28.199 [20/37] Compiling C object samples/server.p/server.c.o 00:02:28.199 [21/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:28.199 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:28.199 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:28.199 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:28.199 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:28.199 [26/37] Compiling C object samples/client.p/client.c.o 00:02:28.199 [27/37] Linking target samples/client 00:02:28.199 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:28.199 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:28.459 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:02:28.459 [31/37] Linking target test/unit_tests 00:02:28.459 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:28.459 [33/37] Linking target samples/lspci 00:02:28.459 [34/37] Linking target samples/server 00:02:28.459 [35/37] Linking target samples/null 00:02:28.459 [36/37] Linking target samples/gpio-pci-idio-16 00:02:28.459 [37/37] Linking target samples/shadow_ioeventfd_server 00:02:28.459 INFO: autodetecting backend as ninja 00:02:28.459 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:28.720 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:28.980 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:28.980 ninja: no work to do. 
00:02:55.559 CC lib/log/log.o 00:02:55.559 CC lib/ut/ut.o 00:02:55.559 CC lib/ut_mock/mock.o 00:02:55.559 CC lib/log/log_flags.o 00:02:55.559 CC lib/log/log_deprecated.o 00:02:55.559 LIB libspdk_ut.a 00:02:55.559 LIB libspdk_ut_mock.a 00:02:55.559 LIB libspdk_log.a 00:02:55.559 SO libspdk_ut.so.2.0 00:02:55.559 SO libspdk_ut_mock.so.6.0 00:02:55.559 SO libspdk_log.so.7.0 00:02:55.559 SYMLINK libspdk_ut.so 00:02:55.559 SYMLINK libspdk_ut_mock.so 00:02:55.559 SYMLINK libspdk_log.so 00:02:55.559 CC lib/dma/dma.o 00:02:55.559 CXX lib/trace_parser/trace.o 00:02:55.559 CC lib/ioat/ioat.o 00:02:55.559 CC lib/util/base64.o 00:02:55.559 CC lib/util/bit_array.o 00:02:55.559 CC lib/util/cpuset.o 00:02:55.559 CC lib/util/crc16.o 00:02:55.559 CC lib/util/crc32.o 00:02:55.559 CC lib/util/crc32c.o 00:02:55.559 CC lib/util/crc32_ieee.o 00:02:55.559 CC lib/util/crc64.o 00:02:55.559 CC lib/util/dif.o 00:02:55.559 CC lib/util/fd.o 00:02:55.559 CC lib/util/fd_group.o 00:02:55.559 CC lib/util/file.o 00:02:55.559 CC lib/util/hexlify.o 00:02:55.559 CC lib/util/iov.o 00:02:55.559 CC lib/util/math.o 00:02:55.559 CC lib/util/net.o 00:02:55.559 CC lib/util/pipe.o 00:02:55.559 CC lib/util/strerror_tls.o 00:02:55.559 CC lib/util/string.o 00:02:55.559 CC lib/util/uuid.o 00:02:55.559 CC lib/util/xor.o 00:02:55.559 CC lib/util/zipf.o 00:02:55.559 CC lib/util/md5.o 00:02:55.559 CC lib/vfio_user/host/vfio_user_pci.o 00:02:55.559 CC lib/vfio_user/host/vfio_user.o 00:02:55.559 LIB libspdk_dma.a 00:02:55.559 SO libspdk_dma.so.5.0 00:02:55.559 LIB libspdk_ioat.a 00:02:55.559 SYMLINK libspdk_dma.so 00:02:55.559 SO libspdk_ioat.so.7.0 00:02:55.559 SYMLINK libspdk_ioat.so 00:02:55.559 LIB libspdk_vfio_user.a 00:02:55.559 SO libspdk_vfio_user.so.5.0 00:02:55.559 LIB libspdk_util.a 00:02:55.559 SYMLINK libspdk_vfio_user.so 00:02:55.559 SO libspdk_util.so.10.0 00:02:55.559 SYMLINK libspdk_util.so 00:02:55.559 LIB libspdk_trace_parser.a 00:02:55.559 SO libspdk_trace_parser.so.6.0 00:02:55.559 SYMLINK 
libspdk_trace_parser.so 00:02:55.559 CC lib/json/json_parse.o 00:02:55.559 CC lib/rdma_utils/rdma_utils.o 00:02:55.559 CC lib/vmd/vmd.o 00:02:55.559 CC lib/json/json_util.o 00:02:55.559 CC lib/json/json_write.o 00:02:55.559 CC lib/vmd/led.o 00:02:55.559 CC lib/conf/conf.o 00:02:55.559 CC lib/idxd/idxd.o 00:02:55.559 CC lib/idxd/idxd_user.o 00:02:55.559 CC lib/idxd/idxd_kernel.o 00:02:55.559 CC lib/rdma_provider/common.o 00:02:55.559 CC lib/env_dpdk/env.o 00:02:55.559 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:55.559 CC lib/env_dpdk/memory.o 00:02:55.559 CC lib/env_dpdk/pci.o 00:02:55.559 CC lib/env_dpdk/init.o 00:02:55.559 CC lib/env_dpdk/threads.o 00:02:55.559 CC lib/env_dpdk/pci_ioat.o 00:02:55.559 CC lib/env_dpdk/pci_virtio.o 00:02:55.559 CC lib/env_dpdk/pci_vmd.o 00:02:55.559 CC lib/env_dpdk/pci_idxd.o 00:02:55.559 CC lib/env_dpdk/pci_event.o 00:02:55.559 CC lib/env_dpdk/sigbus_handler.o 00:02:55.559 CC lib/env_dpdk/pci_dpdk.o 00:02:55.559 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:55.559 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:55.559 LIB libspdk_rdma_provider.a 00:02:55.559 LIB libspdk_conf.a 00:02:55.559 SO libspdk_conf.so.6.0 00:02:55.559 SO libspdk_rdma_provider.so.6.0 00:02:55.559 LIB libspdk_rdma_utils.a 00:02:55.559 LIB libspdk_json.a 00:02:55.559 SO libspdk_rdma_utils.so.1.0 00:02:55.559 SO libspdk_json.so.6.0 00:02:55.559 SYMLINK libspdk_conf.so 00:02:55.559 SYMLINK libspdk_rdma_provider.so 00:02:55.559 SYMLINK libspdk_rdma_utils.so 00:02:55.559 SYMLINK libspdk_json.so 00:02:55.559 LIB libspdk_idxd.a 00:02:55.559 LIB libspdk_vmd.a 00:02:55.559 SO libspdk_idxd.so.12.1 00:02:55.559 SO libspdk_vmd.so.6.0 00:02:55.559 SYMLINK libspdk_idxd.so 00:02:55.559 SYMLINK libspdk_vmd.so 00:02:55.559 CC lib/jsonrpc/jsonrpc_server.o 00:02:55.559 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:55.559 CC lib/jsonrpc/jsonrpc_client.o 00:02:55.559 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:55.559 LIB libspdk_env_dpdk.a 00:02:55.559 SO libspdk_env_dpdk.so.15.0 00:02:55.559 LIB 
libspdk_jsonrpc.a 00:02:55.559 SO libspdk_jsonrpc.so.6.0 00:02:55.559 SYMLINK libspdk_env_dpdk.so 00:02:55.559 SYMLINK libspdk_jsonrpc.so 00:02:55.559 CC lib/rpc/rpc.o 00:02:55.559 LIB libspdk_rpc.a 00:02:55.559 SO libspdk_rpc.so.6.0 00:02:55.559 SYMLINK libspdk_rpc.so 00:02:55.821 CC lib/keyring/keyring.o 00:02:55.821 CC lib/keyring/keyring_rpc.o 00:02:55.821 CC lib/trace/trace.o 00:02:55.821 CC lib/trace/trace_flags.o 00:02:55.821 CC lib/notify/notify.o 00:02:55.821 CC lib/trace/trace_rpc.o 00:02:55.821 CC lib/notify/notify_rpc.o 00:02:55.821 LIB libspdk_notify.a 00:02:56.082 SO libspdk_notify.so.6.0 00:02:56.082 LIB libspdk_keyring.a 00:02:56.082 LIB libspdk_trace.a 00:02:56.082 SO libspdk_keyring.so.2.0 00:02:56.082 SO libspdk_trace.so.11.0 00:02:56.082 SYMLINK libspdk_notify.so 00:02:56.082 SYMLINK libspdk_keyring.so 00:02:56.082 SYMLINK libspdk_trace.so 00:02:56.343 CC lib/sock/sock.o 00:02:56.343 CC lib/thread/thread.o 00:02:56.343 CC lib/sock/sock_rpc.o 00:02:56.343 CC lib/thread/iobuf.o 00:02:56.915 LIB libspdk_sock.a 00:02:56.915 SO libspdk_sock.so.10.0 00:02:56.915 SYMLINK libspdk_sock.so 00:02:57.487 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:57.487 CC lib/nvme/nvme_ctrlr.o 00:02:57.487 CC lib/nvme/nvme_fabric.o 00:02:57.487 CC lib/nvme/nvme_ns_cmd.o 00:02:57.487 CC lib/nvme/nvme_ns.o 00:02:57.487 CC lib/nvme/nvme_pcie_common.o 00:02:57.487 CC lib/nvme/nvme_pcie.o 00:02:57.487 CC lib/nvme/nvme_qpair.o 00:02:57.487 CC lib/nvme/nvme.o 00:02:57.487 CC lib/nvme/nvme_quirks.o 00:02:57.488 CC lib/nvme/nvme_transport.o 00:02:57.488 CC lib/nvme/nvme_discovery.o 00:02:57.488 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:57.488 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:57.488 CC lib/nvme/nvme_tcp.o 00:02:57.488 CC lib/nvme/nvme_opal.o 00:02:57.488 CC lib/nvme/nvme_io_msg.o 00:02:57.488 CC lib/nvme/nvme_poll_group.o 00:02:57.488 CC lib/nvme/nvme_zns.o 00:02:57.488 CC lib/nvme/nvme_stubs.o 00:02:57.488 CC lib/nvme/nvme_auth.o 00:02:57.488 CC lib/nvme/nvme_cuse.o 00:02:57.488 CC 
lib/nvme/nvme_vfio_user.o 00:02:57.488 CC lib/nvme/nvme_rdma.o 00:02:57.749 LIB libspdk_thread.a 00:02:57.749 SO libspdk_thread.so.10.1 00:02:58.010 SYMLINK libspdk_thread.so 00:02:58.271 CC lib/blob/blobstore.o 00:02:58.271 CC lib/blob/request.o 00:02:58.271 CC lib/init/json_config.o 00:02:58.271 CC lib/init/subsystem.o 00:02:58.271 CC lib/blob/zeroes.o 00:02:58.271 CC lib/blob/blob_bs_dev.o 00:02:58.271 CC lib/fsdev/fsdev_io.o 00:02:58.271 CC lib/init/subsystem_rpc.o 00:02:58.271 CC lib/fsdev/fsdev.o 00:02:58.271 CC lib/init/rpc.o 00:02:58.271 CC lib/fsdev/fsdev_rpc.o 00:02:58.271 CC lib/accel/accel.o 00:02:58.271 CC lib/accel/accel_rpc.o 00:02:58.271 CC lib/virtio/virtio.o 00:02:58.271 CC lib/vfu_tgt/tgt_endpoint.o 00:02:58.271 CC lib/accel/accel_sw.o 00:02:58.271 CC lib/virtio/virtio_vhost_user.o 00:02:58.271 CC lib/vfu_tgt/tgt_rpc.o 00:02:58.271 CC lib/virtio/virtio_vfio_user.o 00:02:58.271 CC lib/virtio/virtio_pci.o 00:02:58.532 LIB libspdk_init.a 00:02:58.532 SO libspdk_init.so.6.0 00:02:58.792 LIB libspdk_virtio.a 00:02:58.792 LIB libspdk_vfu_tgt.a 00:02:58.792 SYMLINK libspdk_init.so 00:02:58.792 SO libspdk_virtio.so.7.0 00:02:58.792 SO libspdk_vfu_tgt.so.3.0 00:02:58.792 SYMLINK libspdk_virtio.so 00:02:58.792 SYMLINK libspdk_vfu_tgt.so 00:02:59.054 LIB libspdk_fsdev.a 00:02:59.054 SO libspdk_fsdev.so.1.0 00:02:59.054 CC lib/event/app.o 00:02:59.054 CC lib/event/reactor.o 00:02:59.054 CC lib/event/log_rpc.o 00:02:59.054 CC lib/event/app_rpc.o 00:02:59.054 CC lib/event/scheduler_static.o 00:02:59.054 SYMLINK libspdk_fsdev.so 00:02:59.313 LIB libspdk_accel.a 00:02:59.313 SO libspdk_accel.so.16.0 00:02:59.313 LIB libspdk_nvme.a 00:02:59.313 SYMLINK libspdk_accel.so 00:02:59.313 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:59.313 LIB libspdk_event.a 00:02:59.622 SO libspdk_nvme.so.14.0 00:02:59.622 SO libspdk_event.so.14.0 00:02:59.622 SYMLINK libspdk_event.so 00:02:59.622 SYMLINK libspdk_nvme.so 00:02:59.883 CC lib/bdev/bdev.o 00:02:59.883 CC 
lib/bdev/bdev_rpc.o 00:02:59.883 CC lib/bdev/bdev_zone.o 00:02:59.883 CC lib/bdev/part.o 00:02:59.883 CC lib/bdev/scsi_nvme.o 00:03:00.144 LIB libspdk_fuse_dispatcher.a 00:03:00.144 SO libspdk_fuse_dispatcher.so.1.0 00:03:00.144 SYMLINK libspdk_fuse_dispatcher.so 00:03:01.089 LIB libspdk_blob.a 00:03:01.089 SO libspdk_blob.so.11.0 00:03:01.089 SYMLINK libspdk_blob.so 00:03:01.351 CC lib/lvol/lvol.o 00:03:01.351 CC lib/blobfs/blobfs.o 00:03:01.351 CC lib/blobfs/tree.o 00:03:02.295 LIB libspdk_bdev.a 00:03:02.295 LIB libspdk_blobfs.a 00:03:02.295 SO libspdk_bdev.so.16.0 00:03:02.295 SO libspdk_blobfs.so.10.0 00:03:02.295 LIB libspdk_lvol.a 00:03:02.295 SYMLINK libspdk_blobfs.so 00:03:02.295 SO libspdk_lvol.so.10.0 00:03:02.295 SYMLINK libspdk_bdev.so 00:03:02.295 SYMLINK libspdk_lvol.so 00:03:02.557 CC lib/nvmf/ctrlr.o 00:03:02.557 CC lib/nbd/nbd.o 00:03:02.557 CC lib/nvmf/ctrlr_discovery.o 00:03:02.557 CC lib/nbd/nbd_rpc.o 00:03:02.557 CC lib/nvmf/ctrlr_bdev.o 00:03:02.557 CC lib/nvmf/subsystem.o 00:03:02.557 CC lib/scsi/dev.o 00:03:02.557 CC lib/ublk/ublk.o 00:03:02.557 CC lib/nvmf/nvmf.o 00:03:02.557 CC lib/scsi/port.o 00:03:02.557 CC lib/scsi/lun.o 00:03:02.557 CC lib/ublk/ublk_rpc.o 00:03:02.557 CC lib/nvmf/nvmf_rpc.o 00:03:02.557 CC lib/ftl/ftl_core.o 00:03:02.557 CC lib/nvmf/transport.o 00:03:02.557 CC lib/scsi/scsi.o 00:03:02.557 CC lib/nvmf/tcp.o 00:03:02.557 CC lib/ftl/ftl_init.o 00:03:02.557 CC lib/ftl/ftl_layout.o 00:03:02.557 CC lib/scsi/scsi_bdev.o 00:03:02.557 CC lib/nvmf/stubs.o 00:03:02.557 CC lib/scsi/scsi_pr.o 00:03:02.557 CC lib/nvmf/mdns_server.o 00:03:02.557 CC lib/nvmf/vfio_user.o 00:03:02.557 CC lib/ftl/ftl_debug.o 00:03:02.557 CC lib/scsi/scsi_rpc.o 00:03:02.557 CC lib/nvmf/rdma.o 00:03:02.831 CC lib/ftl/ftl_io.o 00:03:02.831 CC lib/nvmf/auth.o 00:03:02.831 CC lib/ftl/ftl_l2p.o 00:03:02.831 CC lib/scsi/task.o 00:03:02.831 CC lib/ftl/ftl_sb.o 00:03:02.831 CC lib/ftl/ftl_l2p_flat.o 00:03:02.831 CC lib/ftl/ftl_nv_cache.o 00:03:02.831 CC 
lib/ftl/ftl_band.o 00:03:02.831 CC lib/ftl/ftl_writer.o 00:03:02.831 CC lib/ftl/ftl_band_ops.o 00:03:02.831 CC lib/ftl/ftl_rq.o 00:03:02.831 CC lib/ftl/ftl_reloc.o 00:03:02.831 CC lib/ftl/ftl_l2p_cache.o 00:03:02.831 CC lib/ftl/ftl_p2l.o 00:03:02.831 CC lib/ftl/ftl_p2l_log.o 00:03:02.831 CC lib/ftl/mngt/ftl_mngt.o 00:03:02.831 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:02.831 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:02.831 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:02.831 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:02.831 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:02.831 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:02.832 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:02.832 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:02.832 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:02.832 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:02.832 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:02.832 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:02.832 CC lib/ftl/utils/ftl_conf.o 00:03:02.832 CC lib/ftl/utils/ftl_md.o 00:03:02.832 CC lib/ftl/utils/ftl_mempool.o 00:03:02.832 CC lib/ftl/utils/ftl_property.o 00:03:02.832 CC lib/ftl/utils/ftl_bitmap.o 00:03:02.832 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:02.832 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:02.832 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:02.832 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:02.832 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:02.832 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:02.832 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:02.832 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:02.832 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:02.832 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:02.832 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:02.832 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:02.832 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:02.832 CC lib/ftl/base/ftl_base_dev.o 00:03:02.832 CC lib/ftl/ftl_trace.o 00:03:02.832 CC lib/ftl/base/ftl_base_bdev.o 00:03:03.407 LIB libspdk_nbd.a 00:03:03.407 SO libspdk_nbd.so.7.0 00:03:03.407 SYMLINK libspdk_nbd.so 00:03:03.668 LIB libspdk_ublk.a 00:03:03.668 SO 
libspdk_ublk.so.3.0 00:03:03.668 LIB libspdk_scsi.a 00:03:03.668 SYMLINK libspdk_ublk.so 00:03:03.668 SO libspdk_scsi.so.9.0 00:03:03.929 SYMLINK libspdk_scsi.so 00:03:04.191 LIB libspdk_ftl.a 00:03:04.191 CC lib/iscsi/conn.o 00:03:04.191 CC lib/vhost/vhost.o 00:03:04.191 CC lib/iscsi/init_grp.o 00:03:04.191 CC lib/vhost/vhost_rpc.o 00:03:04.191 CC lib/iscsi/iscsi.o 00:03:04.191 CC lib/iscsi/param.o 00:03:04.191 CC lib/vhost/vhost_scsi.o 00:03:04.191 CC lib/iscsi/portal_grp.o 00:03:04.191 CC lib/vhost/vhost_blk.o 00:03:04.191 CC lib/iscsi/tgt_node.o 00:03:04.191 CC lib/vhost/rte_vhost_user.o 00:03:04.191 CC lib/iscsi/iscsi_subsystem.o 00:03:04.191 CC lib/iscsi/iscsi_rpc.o 00:03:04.191 CC lib/iscsi/task.o 00:03:04.191 SO libspdk_ftl.so.9.0 00:03:04.453 SYMLINK libspdk_ftl.so 00:03:05.026 LIB libspdk_nvmf.a 00:03:05.026 SO libspdk_nvmf.so.19.0 00:03:05.288 SYMLINK libspdk_nvmf.so 00:03:05.288 LIB libspdk_vhost.a 00:03:05.288 SO libspdk_vhost.so.8.0 00:03:05.288 SYMLINK libspdk_vhost.so 00:03:05.550 LIB libspdk_iscsi.a 00:03:05.550 SO libspdk_iscsi.so.8.0 00:03:05.550 SYMLINK libspdk_iscsi.so 00:03:06.123 CC module/vfu_device/vfu_virtio.o 00:03:06.123 CC module/env_dpdk/env_dpdk_rpc.o 00:03:06.123 CC module/vfu_device/vfu_virtio_blk.o 00:03:06.123 CC module/vfu_device/vfu_virtio_scsi.o 00:03:06.123 CC module/vfu_device/vfu_virtio_rpc.o 00:03:06.123 CC module/vfu_device/vfu_virtio_fs.o 00:03:06.386 CC module/accel/error/accel_error.o 00:03:06.386 CC module/accel/error/accel_error_rpc.o 00:03:06.386 LIB libspdk_env_dpdk_rpc.a 00:03:06.386 CC module/keyring/linux/keyring.o 00:03:06.386 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:06.386 CC module/keyring/linux/keyring_rpc.o 00:03:06.386 CC module/blob/bdev/blob_bdev.o 00:03:06.386 CC module/sock/posix/posix.o 00:03:06.386 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:06.386 CC module/scheduler/gscheduler/gscheduler.o 00:03:06.386 CC module/accel/iaa/accel_iaa.o 00:03:06.386 CC 
module/accel/ioat/accel_ioat.o 00:03:06.386 CC module/accel/iaa/accel_iaa_rpc.o 00:03:06.386 CC module/accel/dsa/accel_dsa.o 00:03:06.386 CC module/accel/ioat/accel_ioat_rpc.o 00:03:06.386 CC module/accel/dsa/accel_dsa_rpc.o 00:03:06.386 CC module/fsdev/aio/fsdev_aio.o 00:03:06.386 CC module/keyring/file/keyring_rpc.o 00:03:06.386 CC module/keyring/file/keyring.o 00:03:06.386 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:06.386 CC module/fsdev/aio/linux_aio_mgr.o 00:03:06.386 SO libspdk_env_dpdk_rpc.so.6.0 00:03:06.647 SYMLINK libspdk_env_dpdk_rpc.so 00:03:06.647 LIB libspdk_scheduler_dpdk_governor.a 00:03:06.647 LIB libspdk_keyring_linux.a 00:03:06.647 LIB libspdk_keyring_file.a 00:03:06.647 LIB libspdk_scheduler_gscheduler.a 00:03:06.647 LIB libspdk_scheduler_dynamic.a 00:03:06.647 LIB libspdk_accel_error.a 00:03:06.647 LIB libspdk_accel_iaa.a 00:03:06.647 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:06.647 SO libspdk_keyring_file.so.2.0 00:03:06.647 SO libspdk_keyring_linux.so.1.0 00:03:06.647 LIB libspdk_accel_ioat.a 00:03:06.647 SO libspdk_scheduler_dynamic.so.4.0 00:03:06.647 SO libspdk_scheduler_gscheduler.so.4.0 00:03:06.647 SO libspdk_accel_error.so.2.0 00:03:06.647 SO libspdk_accel_iaa.so.3.0 00:03:06.647 SO libspdk_accel_ioat.so.6.0 00:03:06.647 LIB libspdk_blob_bdev.a 00:03:06.647 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:06.647 SYMLINK libspdk_keyring_file.so 00:03:06.647 SYMLINK libspdk_scheduler_gscheduler.so 00:03:06.647 SYMLINK libspdk_keyring_linux.so 00:03:06.647 SYMLINK libspdk_accel_error.so 00:03:06.647 LIB libspdk_accel_dsa.a 00:03:06.647 SYMLINK libspdk_scheduler_dynamic.so 00:03:06.908 SO libspdk_blob_bdev.so.11.0 00:03:06.908 SYMLINK libspdk_accel_iaa.so 00:03:06.908 SYMLINK libspdk_accel_ioat.so 00:03:06.908 SO libspdk_accel_dsa.so.5.0 00:03:06.908 LIB libspdk_vfu_device.a 00:03:06.908 SYMLINK libspdk_blob_bdev.so 00:03:06.908 SO libspdk_vfu_device.so.3.0 00:03:06.908 SYMLINK libspdk_accel_dsa.so 00:03:06.908 SYMLINK 
libspdk_vfu_device.so 00:03:07.169 LIB libspdk_fsdev_aio.a 00:03:07.169 SO libspdk_fsdev_aio.so.1.0 00:03:07.169 LIB libspdk_sock_posix.a 00:03:07.169 SO libspdk_sock_posix.so.6.0 00:03:07.169 SYMLINK libspdk_fsdev_aio.so 00:03:07.433 SYMLINK libspdk_sock_posix.so 00:03:07.433 CC module/bdev/raid/bdev_raid.o 00:03:07.433 CC module/bdev/raid/bdev_raid_rpc.o 00:03:07.433 CC module/bdev/raid/bdev_raid_sb.o 00:03:07.433 CC module/bdev/raid/raid0.o 00:03:07.433 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:07.433 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:07.433 CC module/bdev/raid/raid1.o 00:03:07.433 CC module/bdev/raid/concat.o 00:03:07.433 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:07.433 CC module/bdev/gpt/gpt.o 00:03:07.433 CC module/bdev/gpt/vbdev_gpt.o 00:03:07.433 CC module/blobfs/bdev/blobfs_bdev.o 00:03:07.433 CC module/bdev/malloc/bdev_malloc.o 00:03:07.433 CC module/bdev/lvol/vbdev_lvol.o 00:03:07.433 CC module/bdev/null/bdev_null.o 00:03:07.433 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:07.433 CC module/bdev/delay/vbdev_delay.o 00:03:07.433 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:07.433 CC module/bdev/null/bdev_null_rpc.o 00:03:07.433 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:07.433 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:07.433 CC module/bdev/error/vbdev_error.o 00:03:07.433 CC module/bdev/error/vbdev_error_rpc.o 00:03:07.433 CC module/bdev/passthru/vbdev_passthru.o 00:03:07.433 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:07.433 CC module/bdev/nvme/bdev_nvme.o 00:03:07.433 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:07.433 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:07.433 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:07.433 CC module/bdev/aio/bdev_aio.o 00:03:07.433 CC module/bdev/nvme/nvme_rpc.o 00:03:07.433 CC module/bdev/split/vbdev_split.o 00:03:07.433 CC module/bdev/nvme/bdev_mdns_client.o 00:03:07.433 CC module/bdev/split/vbdev_split_rpc.o 00:03:07.433 CC module/bdev/aio/bdev_aio_rpc.o 00:03:07.433 CC 
module/bdev/nvme/vbdev_opal.o 00:03:07.433 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:07.433 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:07.433 CC module/bdev/ftl/bdev_ftl.o 00:03:07.433 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:07.433 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:07.433 CC module/bdev/iscsi/bdev_iscsi.o 00:03:07.693 LIB libspdk_bdev_error.a 00:03:07.693 LIB libspdk_blobfs_bdev.a 00:03:07.693 LIB libspdk_bdev_split.a 00:03:07.693 LIB libspdk_bdev_gpt.a 00:03:07.693 SO libspdk_bdev_error.so.6.0 00:03:07.954 SO libspdk_blobfs_bdev.so.6.0 00:03:07.954 SO libspdk_bdev_split.so.6.0 00:03:07.954 SO libspdk_bdev_gpt.so.6.0 00:03:07.954 LIB libspdk_bdev_zone_block.a 00:03:07.954 SYMLINK libspdk_bdev_error.so 00:03:07.954 LIB libspdk_bdev_null.a 00:03:07.954 LIB libspdk_bdev_aio.a 00:03:07.954 LIB libspdk_bdev_passthru.a 00:03:07.954 SYMLINK libspdk_bdev_split.so 00:03:07.954 SYMLINK libspdk_blobfs_bdev.so 00:03:07.954 SO libspdk_bdev_zone_block.so.6.0 00:03:07.954 SYMLINK libspdk_bdev_gpt.so 00:03:07.954 LIB libspdk_bdev_delay.a 00:03:07.954 LIB libspdk_bdev_ftl.a 00:03:07.954 LIB libspdk_bdev_malloc.a 00:03:07.954 SO libspdk_bdev_null.so.6.0 00:03:07.954 SO libspdk_bdev_aio.so.6.0 00:03:07.954 SO libspdk_bdev_passthru.so.6.0 00:03:07.954 SO libspdk_bdev_delay.so.6.0 00:03:07.954 SO libspdk_bdev_ftl.so.6.0 00:03:07.954 SO libspdk_bdev_malloc.so.6.0 00:03:07.954 LIB libspdk_bdev_iscsi.a 00:03:07.954 SYMLINK libspdk_bdev_aio.so 00:03:07.954 SYMLINK libspdk_bdev_zone_block.so 00:03:07.954 SYMLINK libspdk_bdev_null.so 00:03:07.954 SO libspdk_bdev_iscsi.so.6.0 00:03:07.954 SYMLINK libspdk_bdev_passthru.so 00:03:07.954 SYMLINK libspdk_bdev_delay.so 00:03:07.954 SYMLINK libspdk_bdev_ftl.so 00:03:07.954 SYMLINK libspdk_bdev_malloc.so 00:03:07.954 LIB libspdk_bdev_virtio.a 00:03:08.215 SYMLINK libspdk_bdev_iscsi.so 00:03:08.215 LIB libspdk_bdev_lvol.a 00:03:08.215 SO libspdk_bdev_virtio.so.6.0 00:03:08.215 SO libspdk_bdev_lvol.so.6.0 00:03:08.215 SYMLINK 
libspdk_bdev_virtio.so 00:03:08.215 SYMLINK libspdk_bdev_lvol.so 00:03:08.476 LIB libspdk_bdev_raid.a 00:03:08.476 SO libspdk_bdev_raid.so.6.0 00:03:08.476 SYMLINK libspdk_bdev_raid.so 00:03:09.861 LIB libspdk_bdev_nvme.a 00:03:09.861 SO libspdk_bdev_nvme.so.7.0 00:03:09.861 SYMLINK libspdk_bdev_nvme.so 00:03:10.432 CC module/event/subsystems/vmd/vmd.o 00:03:10.432 CC module/event/subsystems/iobuf/iobuf.o 00:03:10.432 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:10.432 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:10.432 CC module/event/subsystems/scheduler/scheduler.o 00:03:10.432 CC module/event/subsystems/sock/sock.o 00:03:10.432 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:10.432 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:10.432 CC module/event/subsystems/fsdev/fsdev.o 00:03:10.432 CC module/event/subsystems/keyring/keyring.o 00:03:10.693 LIB libspdk_event_vhost_blk.a 00:03:10.693 LIB libspdk_event_vfu_tgt.a 00:03:10.693 LIB libspdk_event_keyring.a 00:03:10.693 LIB libspdk_event_vmd.a 00:03:10.693 LIB libspdk_event_scheduler.a 00:03:10.693 LIB libspdk_event_fsdev.a 00:03:10.693 LIB libspdk_event_sock.a 00:03:10.693 LIB libspdk_event_iobuf.a 00:03:10.693 SO libspdk_event_vhost_blk.so.3.0 00:03:10.693 SO libspdk_event_keyring.so.1.0 00:03:10.693 SO libspdk_event_vfu_tgt.so.3.0 00:03:10.693 SO libspdk_event_vmd.so.6.0 00:03:10.693 SO libspdk_event_fsdev.so.1.0 00:03:10.693 SO libspdk_event_sock.so.5.0 00:03:10.693 SO libspdk_event_scheduler.so.4.0 00:03:10.693 SO libspdk_event_iobuf.so.3.0 00:03:10.693 SYMLINK libspdk_event_vhost_blk.so 00:03:10.954 SYMLINK libspdk_event_keyring.so 00:03:10.954 SYMLINK libspdk_event_vfu_tgt.so 00:03:10.954 SYMLINK libspdk_event_fsdev.so 00:03:10.954 SYMLINK libspdk_event_scheduler.so 00:03:10.954 SYMLINK libspdk_event_vmd.so 00:03:10.954 SYMLINK libspdk_event_sock.so 00:03:10.954 SYMLINK libspdk_event_iobuf.so 00:03:11.215 CC module/event/subsystems/accel/accel.o 00:03:11.475 LIB libspdk_event_accel.a 
00:03:11.475 SO libspdk_event_accel.so.6.0 00:03:11.475 SYMLINK libspdk_event_accel.so 00:03:11.736 CC module/event/subsystems/bdev/bdev.o 00:03:11.997 LIB libspdk_event_bdev.a 00:03:11.997 SO libspdk_event_bdev.so.6.0 00:03:12.257 SYMLINK libspdk_event_bdev.so 00:03:12.516 CC module/event/subsystems/scsi/scsi.o 00:03:12.516 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:12.516 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:12.516 CC module/event/subsystems/nbd/nbd.o 00:03:12.516 CC module/event/subsystems/ublk/ublk.o 00:03:12.778 LIB libspdk_event_nbd.a 00:03:12.778 LIB libspdk_event_ublk.a 00:03:12.778 LIB libspdk_event_scsi.a 00:03:12.778 SO libspdk_event_nbd.so.6.0 00:03:12.778 SO libspdk_event_ublk.so.3.0 00:03:12.778 SO libspdk_event_scsi.so.6.0 00:03:12.778 LIB libspdk_event_nvmf.a 00:03:12.778 SYMLINK libspdk_event_nbd.so 00:03:12.778 SYMLINK libspdk_event_ublk.so 00:03:12.778 SO libspdk_event_nvmf.so.6.0 00:03:12.778 SYMLINK libspdk_event_scsi.so 00:03:12.778 SYMLINK libspdk_event_nvmf.so 00:03:13.038 CC module/event/subsystems/iscsi/iscsi.o 00:03:13.038 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:13.299 LIB libspdk_event_vhost_scsi.a 00:03:13.299 LIB libspdk_event_iscsi.a 00:03:13.299 SO libspdk_event_vhost_scsi.so.3.0 00:03:13.299 SO libspdk_event_iscsi.so.6.0 00:03:13.561 SYMLINK libspdk_event_vhost_scsi.so 00:03:13.561 SYMLINK libspdk_event_iscsi.so 00:03:13.561 SO libspdk.so.6.0 00:03:13.561 SYMLINK libspdk.so 00:03:14.136 CXX app/trace/trace.o 00:03:14.136 CC app/trace_record/trace_record.o 00:03:14.136 CC app/spdk_top/spdk_top.o 00:03:14.136 CC app/spdk_nvme_discover/discovery_aer.o 00:03:14.136 CC app/spdk_nvme_identify/identify.o 00:03:14.136 TEST_HEADER include/spdk/accel.h 00:03:14.136 CC app/spdk_lspci/spdk_lspci.o 00:03:14.136 TEST_HEADER include/spdk/accel_module.h 00:03:14.136 TEST_HEADER include/spdk/barrier.h 00:03:14.136 CC app/spdk_nvme_perf/perf.o 00:03:14.136 CC test/rpc_client/rpc_client_test.o 00:03:14.136 
TEST_HEADER include/spdk/assert.h 00:03:14.136 TEST_HEADER include/spdk/base64.h 00:03:14.136 TEST_HEADER include/spdk/bdev.h 00:03:14.136 TEST_HEADER include/spdk/bdev_module.h 00:03:14.136 TEST_HEADER include/spdk/bdev_zone.h 00:03:14.136 TEST_HEADER include/spdk/bit_array.h 00:03:14.136 TEST_HEADER include/spdk/bit_pool.h 00:03:14.136 TEST_HEADER include/spdk/blob_bdev.h 00:03:14.136 TEST_HEADER include/spdk/blob.h 00:03:14.136 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:14.136 TEST_HEADER include/spdk/blobfs.h 00:03:14.136 TEST_HEADER include/spdk/config.h 00:03:14.136 TEST_HEADER include/spdk/conf.h 00:03:14.136 TEST_HEADER include/spdk/cpuset.h 00:03:14.136 CC app/spdk_dd/spdk_dd.o 00:03:14.136 TEST_HEADER include/spdk/crc16.h 00:03:14.136 TEST_HEADER include/spdk/crc32.h 00:03:14.136 TEST_HEADER include/spdk/crc64.h 00:03:14.136 TEST_HEADER include/spdk/dif.h 00:03:14.136 CC app/nvmf_tgt/nvmf_main.o 00:03:14.136 TEST_HEADER include/spdk/dma.h 00:03:14.136 TEST_HEADER include/spdk/endian.h 00:03:14.136 TEST_HEADER include/spdk/env_dpdk.h 00:03:14.136 TEST_HEADER include/spdk/env.h 00:03:14.136 TEST_HEADER include/spdk/fd_group.h 00:03:14.136 TEST_HEADER include/spdk/event.h 00:03:14.136 TEST_HEADER include/spdk/file.h 00:03:14.136 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:14.136 TEST_HEADER include/spdk/fd.h 00:03:14.136 TEST_HEADER include/spdk/fsdev.h 00:03:14.136 TEST_HEADER include/spdk/fsdev_module.h 00:03:14.136 TEST_HEADER include/spdk/ftl.h 00:03:14.136 TEST_HEADER include/spdk/gpt_spec.h 00:03:14.136 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:14.136 TEST_HEADER include/spdk/histogram_data.h 00:03:14.136 TEST_HEADER include/spdk/hexlify.h 00:03:14.136 CC app/iscsi_tgt/iscsi_tgt.o 00:03:14.136 TEST_HEADER include/spdk/idxd.h 00:03:14.136 TEST_HEADER include/spdk/idxd_spec.h 00:03:14.136 TEST_HEADER include/spdk/ioat.h 00:03:14.136 TEST_HEADER include/spdk/init.h 00:03:14.136 TEST_HEADER include/spdk/ioat_spec.h 00:03:14.136 TEST_HEADER 
include/spdk/json.h 00:03:14.136 TEST_HEADER include/spdk/iscsi_spec.h 00:03:14.136 TEST_HEADER include/spdk/jsonrpc.h 00:03:14.136 TEST_HEADER include/spdk/keyring.h 00:03:14.136 TEST_HEADER include/spdk/likely.h 00:03:14.136 TEST_HEADER include/spdk/keyring_module.h 00:03:14.137 TEST_HEADER include/spdk/log.h 00:03:14.137 TEST_HEADER include/spdk/lvol.h 00:03:14.137 TEST_HEADER include/spdk/md5.h 00:03:14.137 TEST_HEADER include/spdk/memory.h 00:03:14.137 TEST_HEADER include/spdk/mmio.h 00:03:14.137 CC app/spdk_tgt/spdk_tgt.o 00:03:14.137 TEST_HEADER include/spdk/nbd.h 00:03:14.137 TEST_HEADER include/spdk/net.h 00:03:14.137 TEST_HEADER include/spdk/notify.h 00:03:14.137 TEST_HEADER include/spdk/nvme.h 00:03:14.137 TEST_HEADER include/spdk/nvme_intel.h 00:03:14.137 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:14.137 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:14.137 TEST_HEADER include/spdk/nvme_zns.h 00:03:14.137 TEST_HEADER include/spdk/nvme_spec.h 00:03:14.137 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:14.137 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:14.137 TEST_HEADER include/spdk/nvmf.h 00:03:14.137 TEST_HEADER include/spdk/nvmf_spec.h 00:03:14.137 TEST_HEADER include/spdk/nvmf_transport.h 00:03:14.137 TEST_HEADER include/spdk/opal_spec.h 00:03:14.137 TEST_HEADER include/spdk/opal.h 00:03:14.137 TEST_HEADER include/spdk/pci_ids.h 00:03:14.137 TEST_HEADER include/spdk/pipe.h 00:03:14.137 TEST_HEADER include/spdk/queue.h 00:03:14.137 TEST_HEADER include/spdk/scheduler.h 00:03:14.137 TEST_HEADER include/spdk/reduce.h 00:03:14.137 TEST_HEADER include/spdk/rpc.h 00:03:14.137 TEST_HEADER include/spdk/scsi.h 00:03:14.137 TEST_HEADER include/spdk/scsi_spec.h 00:03:14.137 TEST_HEADER include/spdk/sock.h 00:03:14.137 TEST_HEADER include/spdk/stdinc.h 00:03:14.137 TEST_HEADER include/spdk/thread.h 00:03:14.137 TEST_HEADER include/spdk/string.h 00:03:14.137 TEST_HEADER include/spdk/trace.h 00:03:14.137 TEST_HEADER include/spdk/trace_parser.h 00:03:14.137 
TEST_HEADER include/spdk/tree.h 00:03:14.137 TEST_HEADER include/spdk/ublk.h 00:03:14.137 TEST_HEADER include/spdk/util.h 00:03:14.137 TEST_HEADER include/spdk/uuid.h 00:03:14.137 TEST_HEADER include/spdk/version.h 00:03:14.137 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:14.137 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:14.137 TEST_HEADER include/spdk/vhost.h 00:03:14.137 TEST_HEADER include/spdk/vmd.h 00:03:14.137 TEST_HEADER include/spdk/xor.h 00:03:14.137 TEST_HEADER include/spdk/zipf.h 00:03:14.137 CXX test/cpp_headers/accel.o 00:03:14.137 CXX test/cpp_headers/accel_module.o 00:03:14.137 CXX test/cpp_headers/assert.o 00:03:14.137 CXX test/cpp_headers/base64.o 00:03:14.137 CXX test/cpp_headers/bdev.o 00:03:14.137 CXX test/cpp_headers/barrier.o 00:03:14.137 CXX test/cpp_headers/bdev_zone.o 00:03:14.137 CXX test/cpp_headers/bit_array.o 00:03:14.137 CXX test/cpp_headers/bit_pool.o 00:03:14.137 CXX test/cpp_headers/bdev_module.o 00:03:14.137 CXX test/cpp_headers/blob_bdev.o 00:03:14.137 CXX test/cpp_headers/blobfs_bdev.o 00:03:14.137 CXX test/cpp_headers/blob.o 00:03:14.137 CXX test/cpp_headers/conf.o 00:03:14.137 CXX test/cpp_headers/blobfs.o 00:03:14.137 CXX test/cpp_headers/cpuset.o 00:03:14.137 CXX test/cpp_headers/crc16.o 00:03:14.137 CXX test/cpp_headers/config.o 00:03:14.137 CXX test/cpp_headers/crc32.o 00:03:14.137 CXX test/cpp_headers/crc64.o 00:03:14.137 CXX test/cpp_headers/dif.o 00:03:14.137 CXX test/cpp_headers/env.o 00:03:14.137 CXX test/cpp_headers/dma.o 00:03:14.137 CXX test/cpp_headers/event.o 00:03:14.137 CXX test/cpp_headers/fd_group.o 00:03:14.137 CXX test/cpp_headers/endian.o 00:03:14.413 CXX test/cpp_headers/env_dpdk.o 00:03:14.413 CXX test/cpp_headers/fd.o 00:03:14.413 CXX test/cpp_headers/file.o 00:03:14.413 CXX test/cpp_headers/fuse_dispatcher.o 00:03:14.413 CXX test/cpp_headers/fsdev.o 00:03:14.413 CXX test/cpp_headers/fsdev_module.o 00:03:14.413 CXX test/cpp_headers/ftl.o 00:03:14.413 CXX test/cpp_headers/idxd.o 00:03:14.413 CXX 
test/cpp_headers/hexlify.o 00:03:14.413 CXX test/cpp_headers/histogram_data.o 00:03:14.413 CXX test/cpp_headers/gpt_spec.o 00:03:14.413 CXX test/cpp_headers/idxd_spec.o 00:03:14.413 CXX test/cpp_headers/init.o 00:03:14.413 CXX test/cpp_headers/ioat.o 00:03:14.413 CXX test/cpp_headers/json.o 00:03:14.413 CXX test/cpp_headers/ioat_spec.o 00:03:14.413 CXX test/cpp_headers/iscsi_spec.o 00:03:14.413 CXX test/cpp_headers/jsonrpc.o 00:03:14.414 CXX test/cpp_headers/keyring_module.o 00:03:14.414 CXX test/cpp_headers/likely.o 00:03:14.414 CXX test/cpp_headers/log.o 00:03:14.414 CXX test/cpp_headers/keyring.o 00:03:14.414 CC test/thread/poller_perf/poller_perf.o 00:03:14.414 CXX test/cpp_headers/lvol.o 00:03:14.414 CC test/app/histogram_perf/histogram_perf.o 00:03:14.414 CXX test/cpp_headers/mmio.o 00:03:14.414 CXX test/cpp_headers/net.o 00:03:14.414 CXX test/cpp_headers/md5.o 00:03:14.414 LINK spdk_lspci 00:03:14.414 CXX test/cpp_headers/nbd.o 00:03:14.414 CXX test/cpp_headers/memory.o 00:03:14.414 CXX test/cpp_headers/nvme.o 00:03:14.414 CXX test/cpp_headers/notify.o 00:03:14.414 CXX test/cpp_headers/nvme_ocssd.o 00:03:14.414 CC test/env/vtophys/vtophys.o 00:03:14.414 CXX test/cpp_headers/nvme_intel.o 00:03:14.414 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:14.414 CXX test/cpp_headers/nvme_zns.o 00:03:14.414 CXX test/cpp_headers/nvme_spec.o 00:03:14.414 CC examples/ioat/verify/verify.o 00:03:14.414 CXX test/cpp_headers/nvmf_cmd.o 00:03:14.414 CC test/env/pci/pci_ut.o 00:03:14.414 CXX test/cpp_headers/nvmf_spec.o 00:03:14.414 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:14.414 CC examples/util/zipf/zipf.o 00:03:14.414 CXX test/cpp_headers/nvmf_transport.o 00:03:14.414 CXX test/cpp_headers/nvmf.o 00:03:14.414 CXX test/cpp_headers/opal.o 00:03:14.414 CXX test/cpp_headers/pipe.o 00:03:14.414 CXX test/cpp_headers/pci_ids.o 00:03:14.414 CXX test/cpp_headers/reduce.o 00:03:14.414 CXX test/cpp_headers/opal_spec.o 00:03:14.414 CXX test/cpp_headers/queue.o 00:03:14.414 CXX 
test/cpp_headers/rpc.o 00:03:14.414 CXX test/cpp_headers/scheduler.o 00:03:14.414 CXX test/cpp_headers/scsi_spec.o 00:03:14.414 CC app/fio/nvme/fio_plugin.o 00:03:14.414 CXX test/cpp_headers/scsi.o 00:03:14.414 CXX test/cpp_headers/sock.o 00:03:14.414 CC test/app/bdev_svc/bdev_svc.o 00:03:14.414 CC test/app/jsoncat/jsoncat.o 00:03:14.414 CXX test/cpp_headers/stdinc.o 00:03:14.414 CC examples/ioat/perf/perf.o 00:03:14.414 CXX test/cpp_headers/string.o 00:03:14.414 CXX test/cpp_headers/thread.o 00:03:14.414 CXX test/cpp_headers/tree.o 00:03:14.414 CXX test/cpp_headers/ublk.o 00:03:14.414 CXX test/cpp_headers/trace_parser.o 00:03:14.414 CC test/app/stub/stub.o 00:03:14.414 CXX test/cpp_headers/trace.o 00:03:14.414 LINK spdk_trace_record 00:03:14.414 CXX test/cpp_headers/util.o 00:03:14.414 CXX test/cpp_headers/version.o 00:03:14.414 CXX test/cpp_headers/uuid.o 00:03:14.414 CXX test/cpp_headers/vhost.o 00:03:14.414 CXX test/cpp_headers/vfio_user_pci.o 00:03:14.414 CC test/env/memory/memory_ut.o 00:03:14.414 CXX test/cpp_headers/vfio_user_spec.o 00:03:14.414 CXX test/cpp_headers/xor.o 00:03:14.414 CXX test/cpp_headers/vmd.o 00:03:14.414 LINK spdk_nvme_discover 00:03:14.414 LINK rpc_client_test 00:03:14.414 CXX test/cpp_headers/zipf.o 00:03:14.414 CC test/dma/test_dma/test_dma.o 00:03:14.414 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:14.414 CC app/fio/bdev/fio_plugin.o 00:03:14.692 LINK nvmf_tgt 00:03:14.968 LINK spdk_trace 00:03:15.252 LINK iscsi_tgt 00:03:15.252 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:15.252 CC test/env/mem_callbacks/mem_callbacks.o 00:03:15.252 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:15.252 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:15.252 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:15.252 LINK spdk_dd 00:03:15.252 LINK interrupt_tgt 00:03:15.252 LINK histogram_perf 00:03:15.252 LINK vtophys 00:03:15.516 LINK spdk_tgt 00:03:15.516 LINK ioat_perf 00:03:15.516 LINK verify 00:03:15.779 CC app/vhost/vhost.o 
00:03:15.779 LINK stub 00:03:15.779 LINK zipf 00:03:15.779 LINK jsoncat 00:03:15.779 LINK spdk_nvme_identify 00:03:15.779 LINK pci_ut 00:03:15.779 LINK poller_perf 00:03:15.779 LINK nvme_fuzz 00:03:16.040 LINK bdev_svc 00:03:16.040 LINK vhost_fuzz 00:03:16.040 LINK env_dpdk_post_init 00:03:16.040 LINK vhost 00:03:16.040 LINK mem_callbacks 00:03:16.040 LINK test_dma 00:03:16.300 CC examples/idxd/perf/perf.o 00:03:16.300 LINK spdk_nvme 00:03:16.300 CC examples/vmd/lsvmd/lsvmd.o 00:03:16.300 CC examples/vmd/led/led.o 00:03:16.300 LINK spdk_top 00:03:16.300 CC examples/sock/hello_world/hello_sock.o 00:03:16.300 LINK spdk_bdev 00:03:16.300 CC examples/thread/thread/thread_ex.o 00:03:16.300 CC test/event/reactor/reactor.o 00:03:16.300 CC test/event/event_perf/event_perf.o 00:03:16.300 CC test/event/reactor_perf/reactor_perf.o 00:03:16.300 CC test/event/app_repeat/app_repeat.o 00:03:16.300 CC test/event/scheduler/scheduler.o 00:03:16.562 LINK spdk_nvme_perf 00:03:16.562 LINK led 00:03:16.562 LINK lsvmd 00:03:16.562 LINK reactor 00:03:16.562 LINK event_perf 00:03:16.562 LINK reactor_perf 00:03:16.562 LINK idxd_perf 00:03:16.562 LINK app_repeat 00:03:16.562 LINK hello_sock 00:03:16.562 LINK thread 00:03:16.562 LINK scheduler 00:03:16.822 LINK memory_ut 00:03:16.822 CC test/nvme/err_injection/err_injection.o 00:03:16.822 CC test/nvme/sgl/sgl.o 00:03:16.822 CC test/nvme/e2edp/nvme_dp.o 00:03:16.822 CC test/nvme/boot_partition/boot_partition.o 00:03:16.822 CC test/nvme/reset/reset.o 00:03:16.822 CC test/nvme/connect_stress/connect_stress.o 00:03:16.822 CC test/nvme/aer/aer.o 00:03:16.822 CC test/nvme/fused_ordering/fused_ordering.o 00:03:16.823 CC test/nvme/startup/startup.o 00:03:16.823 CC test/nvme/simple_copy/simple_copy.o 00:03:16.823 CC test/nvme/compliance/nvme_compliance.o 00:03:16.823 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:16.823 CC test/nvme/reserve/reserve.o 00:03:16.823 CC test/nvme/fdp/fdp.o 00:03:16.823 CC test/nvme/overhead/overhead.o 00:03:16.823 CC 
test/accel/dif/dif.o 00:03:16.823 CC test/nvme/cuse/cuse.o 00:03:16.823 CC test/blobfs/mkfs/mkfs.o 00:03:17.084 CC test/lvol/esnap/esnap.o 00:03:17.084 LINK iscsi_fuzz 00:03:17.084 LINK boot_partition 00:03:17.084 LINK startup 00:03:17.084 LINK err_injection 00:03:17.084 LINK fused_ordering 00:03:17.084 LINK connect_stress 00:03:17.084 LINK doorbell_aers 00:03:17.084 LINK mkfs 00:03:17.084 LINK reset 00:03:17.084 LINK reserve 00:03:17.084 LINK simple_copy 00:03:17.084 LINK sgl 00:03:17.084 LINK nvme_dp 00:03:17.084 LINK aer 00:03:17.084 CC examples/nvme/hello_world/hello_world.o 00:03:17.084 LINK overhead 00:03:17.084 CC examples/nvme/reconnect/reconnect.o 00:03:17.084 CC examples/nvme/hotplug/hotplug.o 00:03:17.084 LINK nvme_compliance 00:03:17.084 CC examples/nvme/abort/abort.o 00:03:17.084 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:17.084 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:17.084 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:17.084 CC examples/nvme/arbitration/arbitration.o 00:03:17.084 LINK fdp 00:03:17.346 CC examples/accel/perf/accel_perf.o 00:03:17.346 CC examples/blob/cli/blobcli.o 00:03:17.346 CC examples/blob/hello_world/hello_blob.o 00:03:17.346 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:17.346 LINK pmr_persistence 00:03:17.346 LINK hello_world 00:03:17.346 LINK cmb_copy 00:03:17.346 LINK hotplug 00:03:17.608 LINK dif 00:03:17.608 LINK arbitration 00:03:17.608 LINK reconnect 00:03:17.608 LINK abort 00:03:17.608 LINK hello_blob 00:03:17.608 LINK hello_fsdev 00:03:17.608 LINK nvme_manage 00:03:17.608 LINK accel_perf 00:03:17.868 LINK blobcli 00:03:18.129 LINK cuse 00:03:18.129 CC test/bdev/bdevio/bdevio.o 00:03:18.390 CC examples/bdev/hello_world/hello_bdev.o 00:03:18.390 CC examples/bdev/bdevperf/bdevperf.o 00:03:18.390 LINK bdevio 00:03:18.651 LINK hello_bdev 00:03:19.222 LINK bdevperf 00:03:19.793 CC examples/nvmf/nvmf/nvmf.o 00:03:20.054 LINK nvmf 00:03:21.442 LINK esnap 00:03:22.013 00:03:22.013 real 0m56.178s 
00:03:22.013 user 6m41.298s 00:03:22.013 sys 4m50.344s 00:03:22.013 15:21:01 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:22.013 15:21:01 make -- common/autotest_common.sh@10 -- $ set +x 00:03:22.013 ************************************ 00:03:22.013 END TEST make 00:03:22.013 ************************************ 00:03:22.013 15:21:01 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:22.013 15:21:01 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:22.013 15:21:01 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:22.013 15:21:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:22.013 15:21:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:22.013 15:21:01 -- pm/common@44 -- $ pid=2773744 00:03:22.013 15:21:01 -- pm/common@50 -- $ kill -TERM 2773744 00:03:22.013 15:21:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:22.013 15:21:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:22.013 15:21:01 -- pm/common@44 -- $ pid=2773745 00:03:22.013 15:21:01 -- pm/common@50 -- $ kill -TERM 2773745 00:03:22.013 15:21:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:22.013 15:21:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:22.013 15:21:01 -- pm/common@44 -- $ pid=2773747 00:03:22.013 15:21:01 -- pm/common@50 -- $ kill -TERM 2773747 00:03:22.013 15:21:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:22.013 15:21:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:22.013 15:21:01 -- pm/common@44 -- $ pid=2773771 00:03:22.013 15:21:01 -- pm/common@50 -- $ sudo -E kill -TERM 2773771 00:03:22.013 15:21:01 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 
00:03:22.013 15:21:01 -- common/autotest_common.sh@1681 -- # lcov --version 00:03:22.013 15:21:01 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:22.013 15:21:01 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:22.013 15:21:01 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:22.013 15:21:01 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:22.013 15:21:01 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:22.013 15:21:01 -- scripts/common.sh@336 -- # IFS=.-: 00:03:22.013 15:21:01 -- scripts/common.sh@336 -- # read -ra ver1 00:03:22.013 15:21:01 -- scripts/common.sh@337 -- # IFS=.-: 00:03:22.013 15:21:01 -- scripts/common.sh@337 -- # read -ra ver2 00:03:22.013 15:21:01 -- scripts/common.sh@338 -- # local 'op=<' 00:03:22.013 15:21:01 -- scripts/common.sh@340 -- # ver1_l=2 00:03:22.013 15:21:01 -- scripts/common.sh@341 -- # ver2_l=1 00:03:22.013 15:21:01 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:22.013 15:21:01 -- scripts/common.sh@344 -- # case "$op" in 00:03:22.013 15:21:01 -- scripts/common.sh@345 -- # : 1 00:03:22.013 15:21:01 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:22.013 15:21:01 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:22.013 15:21:01 -- scripts/common.sh@365 -- # decimal 1 00:03:22.013 15:21:01 -- scripts/common.sh@353 -- # local d=1 00:03:22.013 15:21:01 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:22.013 15:21:01 -- scripts/common.sh@355 -- # echo 1 00:03:22.013 15:21:01 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:22.013 15:21:01 -- scripts/common.sh@366 -- # decimal 2 00:03:22.013 15:21:01 -- scripts/common.sh@353 -- # local d=2 00:03:22.013 15:21:01 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:22.013 15:21:01 -- scripts/common.sh@355 -- # echo 2 00:03:22.013 15:21:01 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:22.013 15:21:01 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:22.013 15:21:01 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:22.013 15:21:01 -- scripts/common.sh@368 -- # return 0 00:03:22.013 15:21:01 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:22.013 15:21:01 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:22.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:22.013 --rc genhtml_branch_coverage=1 00:03:22.013 --rc genhtml_function_coverage=1 00:03:22.013 --rc genhtml_legend=1 00:03:22.013 --rc geninfo_all_blocks=1 00:03:22.013 --rc geninfo_unexecuted_blocks=1 00:03:22.013 00:03:22.013 ' 00:03:22.013 15:21:01 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:22.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:22.013 --rc genhtml_branch_coverage=1 00:03:22.013 --rc genhtml_function_coverage=1 00:03:22.013 --rc genhtml_legend=1 00:03:22.013 --rc geninfo_all_blocks=1 00:03:22.013 --rc geninfo_unexecuted_blocks=1 00:03:22.013 00:03:22.013 ' 00:03:22.013 15:21:01 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:22.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:22.013 --rc genhtml_branch_coverage=1 00:03:22.013 --rc 
genhtml_function_coverage=1 00:03:22.013 --rc genhtml_legend=1 00:03:22.013 --rc geninfo_all_blocks=1 00:03:22.013 --rc geninfo_unexecuted_blocks=1 00:03:22.013 00:03:22.013 ' 00:03:22.013 15:21:01 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:22.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:22.013 --rc genhtml_branch_coverage=1 00:03:22.013 --rc genhtml_function_coverage=1 00:03:22.013 --rc genhtml_legend=1 00:03:22.013 --rc geninfo_all_blocks=1 00:03:22.013 --rc geninfo_unexecuted_blocks=1 00:03:22.013 00:03:22.013 ' 00:03:22.013 15:21:01 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:22.013 15:21:01 -- nvmf/common.sh@7 -- # uname -s 00:03:22.013 15:21:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:22.013 15:21:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:22.013 15:21:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:22.013 15:21:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:22.013 15:21:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:22.013 15:21:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:22.013 15:21:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:22.013 15:21:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:22.013 15:21:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:22.013 15:21:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:22.274 15:21:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:03:22.274 15:21:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:03:22.274 15:21:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:22.274 15:21:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:22.274 15:21:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:22.274 15:21:01 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:22.274 15:21:01 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:22.274 15:21:01 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:22.274 15:21:01 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:22.274 15:21:01 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:22.274 15:21:01 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:22.274 15:21:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:22.274 15:21:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:22.274 15:21:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:22.274 15:21:01 -- paths/export.sh@5 -- # export PATH 00:03:22.274 15:21:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:22.274 15:21:01 -- nvmf/common.sh@51 -- # : 0 00:03:22.274 15:21:01 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:22.274 15:21:01 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:03:22.274 15:21:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:22.274 15:21:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:22.274 15:21:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:22.274 15:21:01 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:22.274 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:22.274 15:21:01 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:22.274 15:21:01 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:22.274 15:21:01 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:22.274 15:21:01 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:22.274 15:21:01 -- spdk/autotest.sh@32 -- # uname -s 00:03:22.274 15:21:01 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:22.274 15:21:01 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:22.274 15:21:01 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:22.274 15:21:01 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:22.274 15:21:01 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:22.274 15:21:01 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:22.274 15:21:01 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:22.274 15:21:01 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:22.274 15:21:01 -- spdk/autotest.sh@48 -- # udevadm_pid=2856617 00:03:22.274 15:21:01 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:22.274 15:21:01 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:22.274 15:21:01 -- pm/common@17 -- # local monitor 00:03:22.274 15:21:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:22.274 15:21:01 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:03:22.274 15:21:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:22.274 15:21:01 -- pm/common@21 -- # date +%s 00:03:22.274 15:21:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:22.274 15:21:01 -- pm/common@21 -- # date +%s 00:03:22.274 15:21:01 -- pm/common@25 -- # sleep 1 00:03:22.274 15:21:01 -- pm/common@21 -- # date +%s 00:03:22.274 15:21:01 -- pm/common@21 -- # date +%s 00:03:22.274 15:21:01 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1727788861 00:03:22.274 15:21:01 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1727788861 00:03:22.274 15:21:01 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1727788861 00:03:22.274 15:21:01 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1727788861 00:03:22.274 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1727788861_collect-cpu-load.pm.log 00:03:22.274 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1727788861_collect-vmstat.pm.log 00:03:22.274 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1727788861_collect-cpu-temp.pm.log 00:03:22.274 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1727788861_collect-bmc-pm.bmc.pm.log 00:03:23.216 
15:21:02 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:23.216 15:21:02 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:23.216 15:21:02 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:23.216 15:21:02 -- common/autotest_common.sh@10 -- # set +x 00:03:23.216 15:21:02 -- spdk/autotest.sh@59 -- # create_test_list 00:03:23.216 15:21:02 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:23.216 15:21:02 -- common/autotest_common.sh@10 -- # set +x 00:03:23.216 15:21:02 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:23.216 15:21:02 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:23.216 15:21:02 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:23.216 15:21:02 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:23.216 15:21:02 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:23.216 15:21:02 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:23.216 15:21:02 -- common/autotest_common.sh@1455 -- # uname 00:03:23.216 15:21:02 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:23.216 15:21:02 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:23.216 15:21:02 -- common/autotest_common.sh@1475 -- # uname 00:03:23.216 15:21:02 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:23.216 15:21:02 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:23.216 15:21:02 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:23.476 lcov: LCOV version 1.15 00:03:23.476 15:21:02 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:38.382 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:38.382 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:53.666 15:21:32 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:53.666 15:21:32 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:53.666 15:21:32 -- common/autotest_common.sh@10 -- # set +x 00:03:53.666 15:21:32 -- spdk/autotest.sh@78 -- # rm -f 00:03:53.666 15:21:32 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:57.092 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:57.092 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:57.092 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:57.092 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:57.092 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:57.092 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:57.092 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:57.092 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:57.092 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:57.092 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:57.092 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:57.092 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:57.092 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:57.092 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:57.092 
0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:57.376 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:57.376 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:57.656 15:21:36 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:57.656 15:21:36 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:57.656 15:21:36 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:57.656 15:21:36 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:57.656 15:21:36 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:57.656 15:21:36 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:57.656 15:21:36 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:57.656 15:21:36 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:57.656 15:21:36 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:57.656 15:21:36 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:57.656 15:21:36 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:57.656 15:21:36 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:57.656 15:21:36 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:57.656 15:21:36 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:57.656 15:21:36 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:57.656 No valid GPT data, bailing 00:03:57.656 15:21:36 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:57.656 15:21:36 -- scripts/common.sh@394 -- # pt= 00:03:57.656 15:21:36 -- scripts/common.sh@395 -- # return 1 00:03:57.656 15:21:36 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:57.656 1+0 records in 00:03:57.656 1+0 records out 00:03:57.656 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00443478 s, 236 MB/s 00:03:57.656 15:21:36 -- spdk/autotest.sh@105 -- # sync 00:03:57.656 15:21:36 -- 
spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:57.656 15:21:36 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:57.656 15:21:36 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:06.065 15:21:45 -- spdk/autotest.sh@111 -- # uname -s 00:04:06.065 15:21:45 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:06.065 15:21:45 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:06.065 15:21:45 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:09.496 Hugepages 00:04:09.496 node hugesize free / total 00:04:09.756 node0 1048576kB 0 / 0 00:04:09.756 node0 2048kB 0 / 0 00:04:09.756 node1 1048576kB 0 / 0 00:04:09.756 node1 2048kB 0 / 0 00:04:09.756 00:04:09.756 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:09.756 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:04:09.756 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:04:09.756 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:04:09.756 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:04:09.756 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:04:09.756 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:04:09.756 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:04:09.756 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:04:09.756 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:04:09.756 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:04:09.756 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:04:09.756 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:04:09.756 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:04:09.756 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:04:09.756 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:04:09.756 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:04:09.756 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:04:09.756 15:21:49 -- spdk/autotest.sh@117 -- # uname -s 00:04:09.756 15:21:49 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:09.756 15:21:49 -- spdk/autotest.sh@119 -- # 
nvme_namespace_revert 00:04:09.756 15:21:49 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:13.964 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:13.964 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:13.964 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:13.964 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:13.964 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:13.964 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:13.964 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:13.964 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:13.964 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:13.964 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:13.964 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:13.964 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:13.964 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:13.964 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:13.964 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:13.964 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:15.353 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:15.614 15:21:55 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:16.999 15:21:56 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:16.999 15:21:56 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:16.999 15:21:56 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:16.999 15:21:56 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:16.999 15:21:56 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:16.999 15:21:56 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:16.999 15:21:56 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:16.999 15:21:56 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:16.999 15:21:56 -- common/autotest_common.sh@1497 -- # jq -r 
'.config[].params.traddr' 00:04:16.999 15:21:56 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:16.999 15:21:56 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:04:16.999 15:21:56 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:20.298 Waiting for block devices as requested 00:04:20.298 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:20.559 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:20.559 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:20.559 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:20.819 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:20.819 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:20.819 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:21.079 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:21.079 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:04:21.339 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:21.339 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:21.339 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:21.598 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:21.598 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:21.598 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:21.858 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:21.858 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:22.118 15:22:01 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:22.118 15:22:01 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:04:22.118 15:22:01 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:04:22.118 15:22:01 -- common/autotest_common.sh@1485 -- # grep 0000:65:00.0/nvme/nvme 00:04:22.118 15:22:01 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:22.118 15:22:01 -- common/autotest_common.sh@1486 -- # [[ -z 
/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:04:22.118 15:22:01 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:22.118 15:22:01 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:22.118 15:22:01 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:22.118 15:22:01 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:22.118 15:22:01 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:22.118 15:22:01 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:22.118 15:22:01 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:22.118 15:22:01 -- common/autotest_common.sh@1529 -- # oacs=' 0x5f' 00:04:22.118 15:22:01 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:22.118 15:22:01 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:22.118 15:22:01 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:22.118 15:22:01 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:22.118 15:22:01 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:22.118 15:22:01 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:22.118 15:22:01 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:22.118 15:22:01 -- common/autotest_common.sh@1541 -- # continue 00:04:22.118 15:22:01 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:22.118 15:22:01 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:22.118 15:22:01 -- common/autotest_common.sh@10 -- # set +x 00:04:22.118 15:22:01 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:22.118 15:22:01 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:22.118 15:22:01 -- common/autotest_common.sh@10 -- # set +x 00:04:22.378 15:22:01 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:25.680 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:25.941 0000:80:01.7 (8086 0b00): 
ioatdma -> vfio-pci 00:04:25.941 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:25.941 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:25.941 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:25.941 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:25.941 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:25.941 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:25.941 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:25.941 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:25.941 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:25.941 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:25.941 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:25.941 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:25.941 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:25.941 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:25.941 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:26.515 15:22:05 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:26.515 15:22:05 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:26.515 15:22:05 -- common/autotest_common.sh@10 -- # set +x 00:04:26.515 15:22:05 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:26.515 15:22:05 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:26.515 15:22:05 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:26.515 15:22:05 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:26.515 15:22:05 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:26.515 15:22:05 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:26.515 15:22:05 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:26.515 15:22:05 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:26.515 15:22:05 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:26.515 15:22:05 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:26.515 15:22:05 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:04:26.515 15:22:05 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:26.515 15:22:05 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:26.515 15:22:05 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:26.515 15:22:05 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:04:26.515 15:22:05 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:26.515 15:22:05 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:04:26.515 15:22:05 -- common/autotest_common.sh@1564 -- # device=0xa80a 00:04:26.515 15:22:05 -- common/autotest_common.sh@1565 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:04:26.515 15:22:05 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:04:26.515 15:22:05 -- common/autotest_common.sh@1570 -- # return 0 00:04:26.515 15:22:05 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:26.515 15:22:05 -- common/autotest_common.sh@1578 -- # return 0 00:04:26.515 15:22:05 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:26.515 15:22:05 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:26.515 15:22:05 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:26.515 15:22:05 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:26.515 15:22:05 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:26.515 15:22:05 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:26.515 15:22:05 -- common/autotest_common.sh@10 -- # set +x 00:04:26.515 15:22:05 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:26.515 15:22:05 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:26.515 15:22:05 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:26.515 15:22:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:26.515 15:22:05 -- common/autotest_common.sh@10 -- # set +x 00:04:26.515 ************************************ 
00:04:26.515 START TEST env 00:04:26.515 ************************************ 00:04:26.515 15:22:05 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:26.776 * Looking for test storage... 00:04:26.776 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:26.776 15:22:06 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:26.776 15:22:06 env -- common/autotest_common.sh@1681 -- # lcov --version 00:04:26.776 15:22:06 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:26.776 15:22:06 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:26.776 15:22:06 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:26.776 15:22:06 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:26.776 15:22:06 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:26.776 15:22:06 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:26.776 15:22:06 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:26.776 15:22:06 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:26.776 15:22:06 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:26.776 15:22:06 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:26.776 15:22:06 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:26.776 15:22:06 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:26.776 15:22:06 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:26.776 15:22:06 env -- scripts/common.sh@344 -- # case "$op" in 00:04:26.776 15:22:06 env -- scripts/common.sh@345 -- # : 1 00:04:26.776 15:22:06 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:26.776 15:22:06 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:26.776 15:22:06 env -- scripts/common.sh@365 -- # decimal 1 00:04:26.776 15:22:06 env -- scripts/common.sh@353 -- # local d=1 00:04:26.776 15:22:06 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:26.776 15:22:06 env -- scripts/common.sh@355 -- # echo 1 00:04:26.776 15:22:06 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:26.776 15:22:06 env -- scripts/common.sh@366 -- # decimal 2 00:04:26.776 15:22:06 env -- scripts/common.sh@353 -- # local d=2 00:04:26.776 15:22:06 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:26.776 15:22:06 env -- scripts/common.sh@355 -- # echo 2 00:04:26.776 15:22:06 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:26.776 15:22:06 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:26.776 15:22:06 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:26.776 15:22:06 env -- scripts/common.sh@368 -- # return 0 00:04:26.776 15:22:06 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:26.776 15:22:06 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:26.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.776 --rc genhtml_branch_coverage=1 00:04:26.776 --rc genhtml_function_coverage=1 00:04:26.776 --rc genhtml_legend=1 00:04:26.776 --rc geninfo_all_blocks=1 00:04:26.776 --rc geninfo_unexecuted_blocks=1 00:04:26.776 00:04:26.776 ' 00:04:26.776 15:22:06 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:26.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.776 --rc genhtml_branch_coverage=1 00:04:26.776 --rc genhtml_function_coverage=1 00:04:26.776 --rc genhtml_legend=1 00:04:26.776 --rc geninfo_all_blocks=1 00:04:26.776 --rc geninfo_unexecuted_blocks=1 00:04:26.776 00:04:26.776 ' 00:04:26.776 15:22:06 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:26.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:26.776 --rc genhtml_branch_coverage=1 00:04:26.776 --rc genhtml_function_coverage=1 00:04:26.776 --rc genhtml_legend=1 00:04:26.776 --rc geninfo_all_blocks=1 00:04:26.776 --rc geninfo_unexecuted_blocks=1 00:04:26.776 00:04:26.776 ' 00:04:26.776 15:22:06 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:26.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.776 --rc genhtml_branch_coverage=1 00:04:26.776 --rc genhtml_function_coverage=1 00:04:26.776 --rc genhtml_legend=1 00:04:26.776 --rc geninfo_all_blocks=1 00:04:26.776 --rc geninfo_unexecuted_blocks=1 00:04:26.776 00:04:26.776 ' 00:04:26.776 15:22:06 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:26.776 15:22:06 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:26.776 15:22:06 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:26.776 15:22:06 env -- common/autotest_common.sh@10 -- # set +x 00:04:26.776 ************************************ 00:04:26.776 START TEST env_memory 00:04:26.776 ************************************ 00:04:26.776 15:22:06 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:26.776 00:04:26.776 00:04:26.776 CUnit - A unit testing framework for C - Version 2.1-3 00:04:26.776 http://cunit.sourceforge.net/ 00:04:26.776 00:04:26.776 00:04:26.776 Suite: memory 00:04:27.037 Test: alloc and free memory map ...[2024-10-01 15:22:06.233102] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:27.037 passed 00:04:27.037 Test: mem map translation ...[2024-10-01 15:22:06.258643] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:27.037 [2024-10-01 
15:22:06.258679] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:27.037 [2024-10-01 15:22:06.258728] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:27.037 [2024-10-01 15:22:06.258735] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:27.037 passed 00:04:27.037 Test: mem map registration ...[2024-10-01 15:22:06.313889] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:27.037 [2024-10-01 15:22:06.313915] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:27.037 passed 00:04:27.037 Test: mem map adjacent registrations ...passed 00:04:27.037 00:04:27.037 Run Summary: Type Total Ran Passed Failed Inactive 00:04:27.037 suites 1 1 n/a 0 0 00:04:27.037 tests 4 4 4 0 0 00:04:27.037 asserts 152 152 152 0 n/a 00:04:27.037 00:04:27.037 Elapsed time = 0.191 seconds 00:04:27.037 00:04:27.037 real 0m0.206s 00:04:27.037 user 0m0.193s 00:04:27.037 sys 0m0.013s 00:04:27.037 15:22:06 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:27.037 15:22:06 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:27.037 ************************************ 00:04:27.037 END TEST env_memory 00:04:27.037 ************************************ 00:04:27.037 15:22:06 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:27.037 15:22:06 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 
']' 00:04:27.037 15:22:06 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:27.037 15:22:06 env -- common/autotest_common.sh@10 -- # set +x 00:04:27.037 ************************************ 00:04:27.037 START TEST env_vtophys 00:04:27.037 ************************************ 00:04:27.037 15:22:06 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:27.037 EAL: lib.eal log level changed from notice to debug 00:04:27.037 EAL: Detected lcore 0 as core 0 on socket 0 00:04:27.037 EAL: Detected lcore 1 as core 1 on socket 0 00:04:27.037 EAL: Detected lcore 2 as core 2 on socket 0 00:04:27.037 EAL: Detected lcore 3 as core 3 on socket 0 00:04:27.037 EAL: Detected lcore 4 as core 4 on socket 0 00:04:27.037 EAL: Detected lcore 5 as core 5 on socket 0 00:04:27.037 EAL: Detected lcore 6 as core 6 on socket 0 00:04:27.037 EAL: Detected lcore 7 as core 7 on socket 0 00:04:27.037 EAL: Detected lcore 8 as core 8 on socket 0 00:04:27.037 EAL: Detected lcore 9 as core 9 on socket 0 00:04:27.037 EAL: Detected lcore 10 as core 10 on socket 0 00:04:27.037 EAL: Detected lcore 11 as core 11 on socket 0 00:04:27.037 EAL: Detected lcore 12 as core 12 on socket 0 00:04:27.037 EAL: Detected lcore 13 as core 13 on socket 0 00:04:27.037 EAL: Detected lcore 14 as core 14 on socket 0 00:04:27.037 EAL: Detected lcore 15 as core 15 on socket 0 00:04:27.037 EAL: Detected lcore 16 as core 16 on socket 0 00:04:27.037 EAL: Detected lcore 17 as core 17 on socket 0 00:04:27.037 EAL: Detected lcore 18 as core 18 on socket 0 00:04:27.037 EAL: Detected lcore 19 as core 19 on socket 0 00:04:27.037 EAL: Detected lcore 20 as core 20 on socket 0 00:04:27.037 EAL: Detected lcore 21 as core 21 on socket 0 00:04:27.037 EAL: Detected lcore 22 as core 22 on socket 0 00:04:27.037 EAL: Detected lcore 23 as core 23 on socket 0 00:04:27.037 EAL: Detected lcore 24 as core 24 on socket 0 00:04:27.037 EAL: Detected lcore 25 
as core 25 on socket 0 00:04:27.037 EAL: Detected lcore 26 as core 26 on socket 0 00:04:27.037 EAL: Detected lcore 27 as core 27 on socket 0 00:04:27.037 EAL: Detected lcore 28 as core 28 on socket 0 00:04:27.037 EAL: Detected lcore 29 as core 29 on socket 0 00:04:27.037 EAL: Detected lcore 30 as core 30 on socket 0 00:04:27.037 EAL: Detected lcore 31 as core 31 on socket 0 00:04:27.037 EAL: Detected lcore 32 as core 32 on socket 0 00:04:27.037 EAL: Detected lcore 33 as core 33 on socket 0 00:04:27.037 EAL: Detected lcore 34 as core 34 on socket 0 00:04:27.037 EAL: Detected lcore 35 as core 35 on socket 0 00:04:27.037 EAL: Detected lcore 36 as core 0 on socket 1 00:04:27.037 EAL: Detected lcore 37 as core 1 on socket 1 00:04:27.037 EAL: Detected lcore 38 as core 2 on socket 1 00:04:27.037 EAL: Detected lcore 39 as core 3 on socket 1 00:04:27.037 EAL: Detected lcore 40 as core 4 on socket 1 00:04:27.037 EAL: Detected lcore 41 as core 5 on socket 1 00:04:27.037 EAL: Detected lcore 42 as core 6 on socket 1 00:04:27.037 EAL: Detected lcore 43 as core 7 on socket 1 00:04:27.037 EAL: Detected lcore 44 as core 8 on socket 1 00:04:27.037 EAL: Detected lcore 45 as core 9 on socket 1 00:04:27.037 EAL: Detected lcore 46 as core 10 on socket 1 00:04:27.037 EAL: Detected lcore 47 as core 11 on socket 1 00:04:27.037 EAL: Detected lcore 48 as core 12 on socket 1 00:04:27.037 EAL: Detected lcore 49 as core 13 on socket 1 00:04:27.037 EAL: Detected lcore 50 as core 14 on socket 1 00:04:27.037 EAL: Detected lcore 51 as core 15 on socket 1 00:04:27.037 EAL: Detected lcore 52 as core 16 on socket 1 00:04:27.037 EAL: Detected lcore 53 as core 17 on socket 1 00:04:27.037 EAL: Detected lcore 54 as core 18 on socket 1 00:04:27.037 EAL: Detected lcore 55 as core 19 on socket 1 00:04:27.037 EAL: Detected lcore 56 as core 20 on socket 1 00:04:27.037 EAL: Detected lcore 57 as core 21 on socket 1 00:04:27.037 EAL: Detected lcore 58 as core 22 on socket 1 00:04:27.037 EAL: Detected lcore 59 as 
core 23 on socket 1 00:04:27.037 EAL: Detected lcore 60 as core 24 on socket 1 00:04:27.037 EAL: Detected lcore 61 as core 25 on socket 1 00:04:27.037 EAL: Detected lcore 62 as core 26 on socket 1 00:04:27.037 EAL: Detected lcore 63 as core 27 on socket 1 00:04:27.037 EAL: Detected lcore 64 as core 28 on socket 1 00:04:27.037 EAL: Detected lcore 65 as core 29 on socket 1 00:04:27.037 EAL: Detected lcore 66 as core 30 on socket 1 00:04:27.037 EAL: Detected lcore 67 as core 31 on socket 1 00:04:27.037 EAL: Detected lcore 68 as core 32 on socket 1 00:04:27.037 EAL: Detected lcore 69 as core 33 on socket 1 00:04:27.037 EAL: Detected lcore 70 as core 34 on socket 1 00:04:27.037 EAL: Detected lcore 71 as core 35 on socket 1 00:04:27.037 EAL: Detected lcore 72 as core 0 on socket 0 00:04:27.037 EAL: Detected lcore 73 as core 1 on socket 0 00:04:27.037 EAL: Detected lcore 74 as core 2 on socket 0 00:04:27.037 EAL: Detected lcore 75 as core 3 on socket 0 00:04:27.037 EAL: Detected lcore 76 as core 4 on socket 0 00:04:27.037 EAL: Detected lcore 77 as core 5 on socket 0 00:04:27.037 EAL: Detected lcore 78 as core 6 on socket 0 00:04:27.037 EAL: Detected lcore 79 as core 7 on socket 0 00:04:27.037 EAL: Detected lcore 80 as core 8 on socket 0 00:04:27.037 EAL: Detected lcore 81 as core 9 on socket 0 00:04:27.037 EAL: Detected lcore 82 as core 10 on socket 0 00:04:27.037 EAL: Detected lcore 83 as core 11 on socket 0 00:04:27.037 EAL: Detected lcore 84 as core 12 on socket 0 00:04:27.037 EAL: Detected lcore 85 as core 13 on socket 0 00:04:27.037 EAL: Detected lcore 86 as core 14 on socket 0 00:04:27.037 EAL: Detected lcore 87 as core 15 on socket 0 00:04:27.037 EAL: Detected lcore 88 as core 16 on socket 0 00:04:27.298 EAL: Detected lcore 89 as core 17 on socket 0 00:04:27.298 EAL: Detected lcore 90 as core 18 on socket 0 00:04:27.298 EAL: Detected lcore 91 as core 19 on socket 0 00:04:27.298 EAL: Detected lcore 92 as core 20 on socket 0 00:04:27.298 EAL: Detected lcore 93 as 
core 21 on socket 0 00:04:27.298 EAL: Detected lcore 94 as core 22 on socket 0 00:04:27.298 EAL: Detected lcore 95 as core 23 on socket 0 00:04:27.298 EAL: Detected lcore 96 as core 24 on socket 0 00:04:27.298 EAL: Detected lcore 97 as core 25 on socket 0 00:04:27.298 EAL: Detected lcore 98 as core 26 on socket 0 00:04:27.298 EAL: Detected lcore 99 as core 27 on socket 0 00:04:27.298 EAL: Detected lcore 100 as core 28 on socket 0 00:04:27.298 EAL: Detected lcore 101 as core 29 on socket 0 00:04:27.298 EAL: Detected lcore 102 as core 30 on socket 0 00:04:27.298 EAL: Detected lcore 103 as core 31 on socket 0 00:04:27.298 EAL: Detected lcore 104 as core 32 on socket 0 00:04:27.298 EAL: Detected lcore 105 as core 33 on socket 0 00:04:27.298 EAL: Detected lcore 106 as core 34 on socket 0 00:04:27.298 EAL: Detected lcore 107 as core 35 on socket 0 00:04:27.298 EAL: Detected lcore 108 as core 0 on socket 1 00:04:27.298 EAL: Detected lcore 109 as core 1 on socket 1 00:04:27.298 EAL: Detected lcore 110 as core 2 on socket 1 00:04:27.298 EAL: Detected lcore 111 as core 3 on socket 1 00:04:27.298 EAL: Detected lcore 112 as core 4 on socket 1 00:04:27.298 EAL: Detected lcore 113 as core 5 on socket 1 00:04:27.298 EAL: Detected lcore 114 as core 6 on socket 1 00:04:27.298 EAL: Detected lcore 115 as core 7 on socket 1 00:04:27.298 EAL: Detected lcore 116 as core 8 on socket 1 00:04:27.298 EAL: Detected lcore 117 as core 9 on socket 1 00:04:27.298 EAL: Detected lcore 118 as core 10 on socket 1 00:04:27.298 EAL: Detected lcore 119 as core 11 on socket 1 00:04:27.298 EAL: Detected lcore 120 as core 12 on socket 1 00:04:27.298 EAL: Detected lcore 121 as core 13 on socket 1 00:04:27.298 EAL: Detected lcore 122 as core 14 on socket 1 00:04:27.298 EAL: Detected lcore 123 as core 15 on socket 1 00:04:27.298 EAL: Detected lcore 124 as core 16 on socket 1 00:04:27.298 EAL: Detected lcore 125 as core 17 on socket 1 00:04:27.298 EAL: Detected lcore 126 as core 18 on socket 1 00:04:27.298 
EAL: Detected lcore 127 as core 19 on socket 1 00:04:27.298 EAL: Skipped lcore 128 as core 20 on socket 1 00:04:27.298 EAL: Skipped lcore 129 as core 21 on socket 1 00:04:27.298 EAL: Skipped lcore 130 as core 22 on socket 1 00:04:27.298 EAL: Skipped lcore 131 as core 23 on socket 1 00:04:27.298 EAL: Skipped lcore 132 as core 24 on socket 1 00:04:27.298 EAL: Skipped lcore 133 as core 25 on socket 1 00:04:27.298 EAL: Skipped lcore 134 as core 26 on socket 1 00:04:27.298 EAL: Skipped lcore 135 as core 27 on socket 1 00:04:27.298 EAL: Skipped lcore 136 as core 28 on socket 1 00:04:27.298 EAL: Skipped lcore 137 as core 29 on socket 1 00:04:27.298 EAL: Skipped lcore 138 as core 30 on socket 1 00:04:27.298 EAL: Skipped lcore 139 as core 31 on socket 1 00:04:27.298 EAL: Skipped lcore 140 as core 32 on socket 1 00:04:27.298 EAL: Skipped lcore 141 as core 33 on socket 1 00:04:27.298 EAL: Skipped lcore 142 as core 34 on socket 1 00:04:27.298 EAL: Skipped lcore 143 as core 35 on socket 1 00:04:27.298 EAL: Maximum logical cores by configuration: 128 00:04:27.299 EAL: Detected CPU lcores: 128 00:04:27.299 EAL: Detected NUMA nodes: 2 00:04:27.299 EAL: Checking presence of .so 'librte_eal.so.25.0' 00:04:27.299 EAL: Detected shared linkage of DPDK 00:04:27.299 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so.25.0 00:04:27.299 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so.25.0 00:04:27.299 EAL: Registered [vdev] bus. 
00:04:27.299 EAL: bus.vdev log level changed from disabled to notice 00:04:27.299 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so.25.0 00:04:27.299 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so.25.0 00:04:27.299 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:04:27.299 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:04:27.299 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so 00:04:27.299 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so 00:04:27.299 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so 00:04:27.299 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so 00:04:27.299 EAL: No shared files mode enabled, IPC will be disabled 00:04:27.299 EAL: No shared files mode enabled, IPC is disabled 00:04:27.299 EAL: Bus pci wants IOVA as 'DC' 00:04:27.299 EAL: Bus vdev wants IOVA as 'DC' 00:04:27.299 EAL: Buses did not request a specific IOVA mode. 00:04:27.299 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:27.299 EAL: Selected IOVA mode 'VA' 00:04:27.299 EAL: Probing VFIO support... 00:04:27.299 EAL: IOMMU type 1 (Type 1) is supported 00:04:27.299 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:27.299 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:27.299 EAL: VFIO support initialized 00:04:27.299 EAL: Ask a virtual area of 0x2e000 bytes 00:04:27.299 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:27.299 EAL: Setting up physically contiguous memory... 
00:04:27.299 EAL: Setting maximum number of open files to 524288 00:04:27.299 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:27.299 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:27.299 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:27.299 EAL: Ask a virtual area of 0x61000 bytes 00:04:27.299 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:27.299 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:27.299 EAL: Ask a virtual area of 0x400000000 bytes 00:04:27.299 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:27.299 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:27.299 EAL: Ask a virtual area of 0x61000 bytes 00:04:27.299 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:27.299 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:27.299 EAL: Ask a virtual area of 0x400000000 bytes 00:04:27.299 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:27.299 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:27.299 EAL: Ask a virtual area of 0x61000 bytes 00:04:27.299 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:27.299 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:27.299 EAL: Ask a virtual area of 0x400000000 bytes 00:04:27.299 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:27.299 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:27.299 EAL: Ask a virtual area of 0x61000 bytes 00:04:27.299 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:27.299 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:27.299 EAL: Ask a virtual area of 0x400000000 bytes 00:04:27.299 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:27.299 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:27.299 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:04:27.299 EAL: Ask a virtual area of 0x61000 bytes 00:04:27.299 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:27.299 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:27.299 EAL: Ask a virtual area of 0x400000000 bytes 00:04:27.299 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:27.299 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:27.299 EAL: Ask a virtual area of 0x61000 bytes 00:04:27.299 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:27.299 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:27.299 EAL: Ask a virtual area of 0x400000000 bytes 00:04:27.299 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:27.299 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:27.299 EAL: Ask a virtual area of 0x61000 bytes 00:04:27.299 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:27.299 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:27.299 EAL: Ask a virtual area of 0x400000000 bytes 00:04:27.299 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:27.299 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:27.299 EAL: Ask a virtual area of 0x61000 bytes 00:04:27.299 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:27.299 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:27.299 EAL: Ask a virtual area of 0x400000000 bytes 00:04:27.299 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:27.299 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:27.299 EAL: Hugepages will be freed exactly as allocated. 
00:04:27.299 EAL: No shared files mode enabled, IPC is disabled 00:04:27.299 EAL: No shared files mode enabled, IPC is disabled 00:04:27.299 EAL: TSC frequency is ~2400000 KHz 00:04:27.299 EAL: Main lcore 0 is ready (tid=7f2bcbae6a00;cpuset=[0]) 00:04:27.299 EAL: Trying to obtain current memory policy. 00:04:27.299 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.299 EAL: Restoring previous memory policy: 0 00:04:27.299 EAL: request: mp_malloc_sync 00:04:27.299 EAL: No shared files mode enabled, IPC is disabled 00:04:27.299 EAL: Heap on socket 0 was expanded by 2MB 00:04:27.299 EAL: No shared files mode enabled, IPC is disabled 00:04:27.299 EAL: No shared files mode enabled, IPC is disabled 00:04:27.299 EAL: Mem event callback 'spdk:(nil)' registered 00:04:27.299 00:04:27.299 00:04:27.299 CUnit - A unit testing framework for C - Version 2.1-3 00:04:27.299 http://cunit.sourceforge.net/ 00:04:27.299 00:04:27.299 00:04:27.299 Suite: components_suite 00:04:27.299 Test: vtophys_malloc_test ...passed 00:04:27.299 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:27.299 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.299 EAL: Restoring previous memory policy: 4 00:04:27.299 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.299 EAL: request: mp_malloc_sync 00:04:27.299 EAL: No shared files mode enabled, IPC is disabled 00:04:27.299 EAL: Heap on socket 0 was expanded by 4MB 00:04:27.299 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.299 EAL: request: mp_malloc_sync 00:04:27.299 EAL: No shared files mode enabled, IPC is disabled 00:04:27.299 EAL: Heap on socket 0 was shrunk by 4MB 00:04:27.299 EAL: Trying to obtain current memory policy. 
00:04:27.299 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.299 EAL: Restoring previous memory policy: 4 00:04:27.299 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.299 EAL: request: mp_malloc_sync 00:04:27.299 EAL: No shared files mode enabled, IPC is disabled 00:04:27.299 EAL: Heap on socket 0 was expanded by 6MB 00:04:27.299 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.299 EAL: request: mp_malloc_sync 00:04:27.299 EAL: No shared files mode enabled, IPC is disabled 00:04:27.299 EAL: Heap on socket 0 was shrunk by 6MB 00:04:27.299 EAL: Trying to obtain current memory policy. 00:04:27.299 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.299 EAL: Restoring previous memory policy: 4 00:04:27.299 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.299 EAL: request: mp_malloc_sync 00:04:27.299 EAL: No shared files mode enabled, IPC is disabled 00:04:27.299 EAL: Heap on socket 0 was expanded by 10MB 00:04:27.299 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.299 EAL: request: mp_malloc_sync 00:04:27.299 EAL: No shared files mode enabled, IPC is disabled 00:04:27.299 EAL: Heap on socket 0 was shrunk by 10MB 00:04:27.299 EAL: Trying to obtain current memory policy. 00:04:27.299 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.299 EAL: Restoring previous memory policy: 4 00:04:27.299 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.299 EAL: request: mp_malloc_sync 00:04:27.299 EAL: No shared files mode enabled, IPC is disabled 00:04:27.299 EAL: Heap on socket 0 was expanded by 18MB 00:04:27.299 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.299 EAL: request: mp_malloc_sync 00:04:27.299 EAL: No shared files mode enabled, IPC is disabled 00:04:27.299 EAL: Heap on socket 0 was shrunk by 18MB 00:04:27.299 EAL: Trying to obtain current memory policy. 
00:04:27.299 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.299 EAL: Restoring previous memory policy: 4 00:04:27.299 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.299 EAL: request: mp_malloc_sync 00:04:27.299 EAL: No shared files mode enabled, IPC is disabled 00:04:27.299 EAL: Heap on socket 0 was expanded by 34MB 00:04:27.299 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.299 EAL: request: mp_malloc_sync 00:04:27.299 EAL: No shared files mode enabled, IPC is disabled 00:04:27.299 EAL: Heap on socket 0 was shrunk by 34MB 00:04:27.299 EAL: Trying to obtain current memory policy. 00:04:27.299 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.299 EAL: Restoring previous memory policy: 4 00:04:27.299 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.299 EAL: request: mp_malloc_sync 00:04:27.299 EAL: No shared files mode enabled, IPC is disabled 00:04:27.299 EAL: Heap on socket 0 was expanded by 66MB 00:04:27.299 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.299 EAL: request: mp_malloc_sync 00:04:27.299 EAL: No shared files mode enabled, IPC is disabled 00:04:27.299 EAL: Heap on socket 0 was shrunk by 66MB 00:04:27.299 EAL: Trying to obtain current memory policy. 00:04:27.299 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.299 EAL: Restoring previous memory policy: 4 00:04:27.299 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.299 EAL: request: mp_malloc_sync 00:04:27.299 EAL: No shared files mode enabled, IPC is disabled 00:04:27.300 EAL: Heap on socket 0 was expanded by 130MB 00:04:27.300 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.300 EAL: request: mp_malloc_sync 00:04:27.300 EAL: No shared files mode enabled, IPC is disabled 00:04:27.300 EAL: Heap on socket 0 was shrunk by 130MB 00:04:27.300 EAL: Trying to obtain current memory policy. 
00:04:27.300 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.300 EAL: Restoring previous memory policy: 4 00:04:27.300 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.300 EAL: request: mp_malloc_sync 00:04:27.300 EAL: No shared files mode enabled, IPC is disabled 00:04:27.300 EAL: Heap on socket 0 was expanded by 258MB 00:04:27.300 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.560 EAL: request: mp_malloc_sync 00:04:27.560 EAL: No shared files mode enabled, IPC is disabled 00:04:27.560 EAL: Heap on socket 0 was shrunk by 258MB 00:04:27.560 EAL: Trying to obtain current memory policy. 00:04:27.560 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.560 EAL: Restoring previous memory policy: 4 00:04:27.560 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.560 EAL: request: mp_malloc_sync 00:04:27.560 EAL: No shared files mode enabled, IPC is disabled 00:04:27.560 EAL: Heap on socket 0 was expanded by 514MB 00:04:27.560 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.560 EAL: request: mp_malloc_sync 00:04:27.560 EAL: No shared files mode enabled, IPC is disabled 00:04:27.560 EAL: Heap on socket 0 was shrunk by 514MB 00:04:27.560 EAL: Trying to obtain current memory policy. 
00:04:27.560 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.821 EAL: Restoring previous memory policy: 4 00:04:27.821 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.821 EAL: request: mp_malloc_sync 00:04:27.821 EAL: No shared files mode enabled, IPC is disabled 00:04:27.821 EAL: Heap on socket 0 was expanded by 1026MB 00:04:27.821 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.081 EAL: request: mp_malloc_sync 00:04:28.081 EAL: No shared files mode enabled, IPC is disabled 00:04:28.081 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:28.081 passed 00:04:28.081 00:04:28.081 Run Summary: Type Total Ran Passed Failed Inactive 00:04:28.081 suites 1 1 n/a 0 0 00:04:28.081 tests 2 2 2 0 0 00:04:28.081 asserts 497 497 497 0 n/a 00:04:28.081 00:04:28.081 Elapsed time = 0.686 seconds 00:04:28.081 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.081 EAL: request: mp_malloc_sync 00:04:28.081 EAL: No shared files mode enabled, IPC is disabled 00:04:28.082 EAL: Heap on socket 0 was shrunk by 2MB 00:04:28.082 EAL: No shared files mode enabled, IPC is disabled 00:04:28.082 EAL: No shared files mode enabled, IPC is disabled 00:04:28.082 EAL: No shared files mode enabled, IPC is disabled 00:04:28.082 00:04:28.082 real 0m0.827s 00:04:28.082 user 0m0.426s 00:04:28.082 sys 0m0.371s 00:04:28.082 15:22:07 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:28.082 15:22:07 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:28.082 ************************************ 00:04:28.082 END TEST env_vtophys 00:04:28.082 ************************************ 00:04:28.082 15:22:07 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:28.082 15:22:07 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:28.082 15:22:07 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:28.082 15:22:07 env -- common/autotest_common.sh@10 -- # set +x 00:04:28.082 
************************************ 00:04:28.082 START TEST env_pci 00:04:28.082 ************************************ 00:04:28.082 15:22:07 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:28.082 00:04:28.082 00:04:28.082 CUnit - A unit testing framework for C - Version 2.1-3 00:04:28.082 http://cunit.sourceforge.net/ 00:04:28.082 00:04:28.082 00:04:28.082 Suite: pci 00:04:28.082 Test: pci_hook ...[2024-10-01 15:22:07.390512] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2876792 has claimed it 00:04:28.082 EAL: Cannot find device (10000:00:01.0) 00:04:28.082 EAL: Failed to attach device on primary process 00:04:28.082 passed 00:04:28.082 00:04:28.082 Run Summary: Type Total Ran Passed Failed Inactive 00:04:28.082 suites 1 1 n/a 0 0 00:04:28.082 tests 1 1 1 0 0 00:04:28.082 asserts 25 25 25 0 n/a 00:04:28.082 00:04:28.082 Elapsed time = 0.031 seconds 00:04:28.082 00:04:28.082 real 0m0.050s 00:04:28.082 user 0m0.011s 00:04:28.082 sys 0m0.039s 00:04:28.082 15:22:07 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:28.082 15:22:07 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:28.082 ************************************ 00:04:28.082 END TEST env_pci 00:04:28.082 ************************************ 00:04:28.082 15:22:07 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:28.082 15:22:07 env -- env/env.sh@15 -- # uname 00:04:28.082 15:22:07 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:28.082 15:22:07 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:28.082 15:22:07 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:28.082 15:22:07 env -- 
common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:28.082 15:22:07 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:28.082 15:22:07 env -- common/autotest_common.sh@10 -- # set +x 00:04:28.082 ************************************ 00:04:28.082 START TEST env_dpdk_post_init 00:04:28.082 ************************************ 00:04:28.082 15:22:07 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:28.342 EAL: Detected CPU lcores: 128 00:04:28.342 EAL: Detected NUMA nodes: 2 00:04:28.342 EAL: Detected shared linkage of DPDK 00:04:28.342 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:28.342 EAL: Selected IOVA mode 'VA' 00:04:28.342 EAL: VFIO support initialized 00:04:28.342 EAL: Using IOMMU type 1 (Type 1) 00:04:32.547 Starting DPDK initialization... 00:04:32.547 Starting SPDK post initialization... 00:04:32.547 SPDK NVMe probe 00:04:32.547 Attaching to 0000:65:00.0 00:04:32.547 Attached to 0000:65:00.0 00:04:32.547 Cleaning up... 
00:04:33.933 00:04:33.933 real 0m5.735s 00:04:33.933 user 0m0.188s 00:04:33.933 sys 0m0.101s 00:04:33.933 15:22:13 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:33.933 15:22:13 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:33.933 ************************************ 00:04:33.933 END TEST env_dpdk_post_init 00:04:33.933 ************************************ 00:04:33.933 15:22:13 env -- env/env.sh@26 -- # uname 00:04:33.933 15:22:13 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:33.933 15:22:13 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:33.933 15:22:13 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:33.933 15:22:13 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:33.933 15:22:13 env -- common/autotest_common.sh@10 -- # set +x 00:04:33.933 ************************************ 00:04:33.933 START TEST env_mem_callbacks 00:04:33.933 ************************************ 00:04:33.933 15:22:13 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:33.933 EAL: Detected CPU lcores: 128 00:04:33.933 EAL: Detected NUMA nodes: 2 00:04:33.933 EAL: Detected shared linkage of DPDK 00:04:33.933 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:34.195 EAL: Selected IOVA mode 'VA' 00:04:34.195 EAL: VFIO support initialized 00:04:34.195 00:04:34.195 00:04:34.195 CUnit - A unit testing framework for C - Version 2.1-3 00:04:34.195 http://cunit.sourceforge.net/ 00:04:34.195 00:04:34.195 00:04:34.195 Suite: memory 00:04:34.195 Test: test ... 
00:04:34.195 register 0x200000200000 2097152 00:04:34.195 malloc 3145728 00:04:34.195 register 0x200000400000 4194304 00:04:34.195 buf 0x200000500000 len 3145728 PASSED 00:04:34.195 malloc 64 00:04:34.195 buf 0x2000004fff40 len 64 PASSED 00:04:34.195 malloc 4194304 00:04:34.195 register 0x200000800000 6291456 00:04:34.195 buf 0x200000a00000 len 4194304 PASSED 00:04:34.195 free 0x200000500000 3145728 00:04:34.195 free 0x2000004fff40 64 00:04:34.195 unregister 0x200000400000 4194304 PASSED 00:04:34.195 free 0x200000a00000 4194304 00:04:34.195 unregister 0x200000800000 6291456 PASSED 00:04:34.195 malloc 8388608 00:04:34.195 register 0x200000400000 10485760 00:04:34.195 buf 0x200000600000 len 8388608 PASSED 00:04:34.195 free 0x200000600000 8388608 00:04:34.195 unregister 0x200000400000 10485760 PASSED 00:04:34.195 passed 00:04:34.195 00:04:34.195 Run Summary: Type Total Ran Passed Failed Inactive 00:04:34.195 suites 1 1 n/a 0 0 00:04:34.195 tests 1 1 1 0 0 00:04:34.195 asserts 15 15 15 0 n/a 00:04:34.195 00:04:34.195 Elapsed time = 0.010 seconds 00:04:34.195 00:04:34.195 real 0m0.077s 00:04:34.195 user 0m0.025s 00:04:34.195 sys 0m0.051s 00:04:34.195 15:22:13 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:34.195 15:22:13 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:34.195 ************************************ 00:04:34.195 END TEST env_mem_callbacks 00:04:34.195 ************************************ 00:04:34.195 00:04:34.195 real 0m7.512s 00:04:34.195 user 0m1.106s 00:04:34.195 sys 0m0.964s 00:04:34.195 15:22:13 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:34.195 15:22:13 env -- common/autotest_common.sh@10 -- # set +x 00:04:34.195 ************************************ 00:04:34.195 END TEST env 00:04:34.195 ************************************ 00:04:34.195 15:22:13 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:34.195 15:22:13 
-- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:34.195 15:22:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:34.195 15:22:13 -- common/autotest_common.sh@10 -- # set +x 00:04:34.195 ************************************ 00:04:34.195 START TEST rpc 00:04:34.195 ************************************ 00:04:34.195 15:22:13 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:34.195 * Looking for test storage... 00:04:34.195 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:34.195 15:22:13 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:34.195 15:22:13 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:34.195 15:22:13 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:34.457 15:22:13 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:34.457 15:22:13 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:34.457 15:22:13 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:34.457 15:22:13 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:34.457 15:22:13 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:34.457 15:22:13 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:34.457 15:22:13 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:34.457 15:22:13 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:34.457 15:22:13 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:34.457 15:22:13 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:34.457 15:22:13 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:34.457 15:22:13 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:34.457 15:22:13 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:34.457 15:22:13 rpc -- scripts/common.sh@345 -- # : 1 00:04:34.457 15:22:13 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:34.457 15:22:13 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:34.457 15:22:13 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:34.457 15:22:13 rpc -- scripts/common.sh@353 -- # local d=1 00:04:34.457 15:22:13 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:34.457 15:22:13 rpc -- scripts/common.sh@355 -- # echo 1 00:04:34.457 15:22:13 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:34.457 15:22:13 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:34.457 15:22:13 rpc -- scripts/common.sh@353 -- # local d=2 00:04:34.457 15:22:13 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:34.457 15:22:13 rpc -- scripts/common.sh@355 -- # echo 2 00:04:34.457 15:22:13 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:34.457 15:22:13 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:34.457 15:22:13 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:34.457 15:22:13 rpc -- scripts/common.sh@368 -- # return 0 00:04:34.457 15:22:13 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:34.457 15:22:13 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:34.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.457 --rc genhtml_branch_coverage=1 00:04:34.457 --rc genhtml_function_coverage=1 00:04:34.457 --rc genhtml_legend=1 00:04:34.457 --rc geninfo_all_blocks=1 00:04:34.457 --rc geninfo_unexecuted_blocks=1 00:04:34.457 00:04:34.457 ' 00:04:34.457 15:22:13 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:34.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.457 --rc genhtml_branch_coverage=1 00:04:34.457 --rc genhtml_function_coverage=1 00:04:34.457 --rc genhtml_legend=1 00:04:34.457 --rc geninfo_all_blocks=1 00:04:34.457 --rc geninfo_unexecuted_blocks=1 00:04:34.457 00:04:34.457 ' 00:04:34.457 15:22:13 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:34.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:34.457 --rc genhtml_branch_coverage=1 00:04:34.457 --rc genhtml_function_coverage=1 00:04:34.457 --rc genhtml_legend=1 00:04:34.457 --rc geninfo_all_blocks=1 00:04:34.457 --rc geninfo_unexecuted_blocks=1 00:04:34.457 00:04:34.457 ' 00:04:34.457 15:22:13 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:34.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.457 --rc genhtml_branch_coverage=1 00:04:34.457 --rc genhtml_function_coverage=1 00:04:34.457 --rc genhtml_legend=1 00:04:34.457 --rc geninfo_all_blocks=1 00:04:34.457 --rc geninfo_unexecuted_blocks=1 00:04:34.457 00:04:34.457 ' 00:04:34.457 15:22:13 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2878051 00:04:34.457 15:22:13 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:34.457 15:22:13 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:34.457 15:22:13 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2878051 00:04:34.457 15:22:13 rpc -- common/autotest_common.sh@831 -- # '[' -z 2878051 ']' 00:04:34.457 15:22:13 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:34.457 15:22:13 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:34.457 15:22:13 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:34.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:34.457 15:22:13 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:34.457 15:22:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.457 [2024-10-01 15:22:13.805325] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
00:04:34.457 [2024-10-01 15:22:13.805395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2878051 ] 00:04:34.457 [2024-10-01 15:22:13.839992] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:04:34.457 [2024-10-01 15:22:13.889913] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.718 [2024-10-01 15:22:13.936850] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:34.718 [2024-10-01 15:22:13.936909] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2878051' to capture a snapshot of events at runtime. 00:04:34.718 [2024-10-01 15:22:13.936918] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:34.718 [2024-10-01 15:22:13.936926] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:34.718 [2024-10-01 15:22:13.936932] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2878051 for offline analysis/debug. 
00:04:34.718 [2024-10-01 15:22:13.936956] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.290 15:22:14 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:35.290 15:22:14 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:35.290 15:22:14 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:35.290 15:22:14 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:35.290 15:22:14 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:35.290 15:22:14 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:35.290 15:22:14 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:35.290 15:22:14 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:35.290 15:22:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.290 ************************************ 00:04:35.290 START TEST rpc_integrity 00:04:35.290 ************************************ 00:04:35.290 15:22:14 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:35.290 15:22:14 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:35.290 15:22:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.290 15:22:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.290 15:22:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.290 15:22:14 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:04:35.290 15:22:14 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:35.290 15:22:14 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:35.290 15:22:14 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:35.290 15:22:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.290 15:22:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.290 15:22:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.290 15:22:14 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:35.290 15:22:14 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:35.290 15:22:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.290 15:22:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.290 15:22:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.290 15:22:14 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:35.290 { 00:04:35.290 "name": "Malloc0", 00:04:35.290 "aliases": [ 00:04:35.290 "6e14f37a-54ab-418c-8e2f-4b83ea9102ba" 00:04:35.290 ], 00:04:35.290 "product_name": "Malloc disk", 00:04:35.290 "block_size": 512, 00:04:35.290 "num_blocks": 16384, 00:04:35.290 "uuid": "6e14f37a-54ab-418c-8e2f-4b83ea9102ba", 00:04:35.290 "assigned_rate_limits": { 00:04:35.290 "rw_ios_per_sec": 0, 00:04:35.290 "rw_mbytes_per_sec": 0, 00:04:35.290 "r_mbytes_per_sec": 0, 00:04:35.290 "w_mbytes_per_sec": 0 00:04:35.290 }, 00:04:35.290 "claimed": false, 00:04:35.290 "zoned": false, 00:04:35.290 "supported_io_types": { 00:04:35.290 "read": true, 00:04:35.290 "write": true, 00:04:35.290 "unmap": true, 00:04:35.290 "flush": true, 00:04:35.290 "reset": true, 00:04:35.290 "nvme_admin": false, 00:04:35.290 "nvme_io": false, 00:04:35.290 "nvme_io_md": false, 00:04:35.290 "write_zeroes": true, 00:04:35.290 "zcopy": true, 00:04:35.290 "get_zone_info": false, 00:04:35.290 
"zone_management": false, 00:04:35.290 "zone_append": false, 00:04:35.290 "compare": false, 00:04:35.290 "compare_and_write": false, 00:04:35.290 "abort": true, 00:04:35.290 "seek_hole": false, 00:04:35.290 "seek_data": false, 00:04:35.290 "copy": true, 00:04:35.290 "nvme_iov_md": false 00:04:35.290 }, 00:04:35.290 "memory_domains": [ 00:04:35.290 { 00:04:35.290 "dma_device_id": "system", 00:04:35.290 "dma_device_type": 1 00:04:35.290 }, 00:04:35.290 { 00:04:35.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.290 "dma_device_type": 2 00:04:35.290 } 00:04:35.290 ], 00:04:35.290 "driver_specific": {} 00:04:35.290 } 00:04:35.290 ]' 00:04:35.290 15:22:14 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:35.551 15:22:14 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:35.551 15:22:14 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:35.551 15:22:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.551 15:22:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.551 [2024-10-01 15:22:14.780313] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:35.551 [2024-10-01 15:22:14.780362] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:35.551 [2024-10-01 15:22:14.780378] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d13ec0 00:04:35.551 [2024-10-01 15:22:14.780386] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:35.551 [2024-10-01 15:22:14.781945] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:35.551 [2024-10-01 15:22:14.781983] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:35.551 Passthru0 00:04:35.551 15:22:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.551 15:22:14 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:04:35.551 15:22:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.551 15:22:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.551 15:22:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.551 15:22:14 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:35.551 { 00:04:35.551 "name": "Malloc0", 00:04:35.551 "aliases": [ 00:04:35.551 "6e14f37a-54ab-418c-8e2f-4b83ea9102ba" 00:04:35.551 ], 00:04:35.551 "product_name": "Malloc disk", 00:04:35.551 "block_size": 512, 00:04:35.551 "num_blocks": 16384, 00:04:35.551 "uuid": "6e14f37a-54ab-418c-8e2f-4b83ea9102ba", 00:04:35.551 "assigned_rate_limits": { 00:04:35.551 "rw_ios_per_sec": 0, 00:04:35.551 "rw_mbytes_per_sec": 0, 00:04:35.551 "r_mbytes_per_sec": 0, 00:04:35.551 "w_mbytes_per_sec": 0 00:04:35.551 }, 00:04:35.551 "claimed": true, 00:04:35.551 "claim_type": "exclusive_write", 00:04:35.551 "zoned": false, 00:04:35.551 "supported_io_types": { 00:04:35.551 "read": true, 00:04:35.551 "write": true, 00:04:35.551 "unmap": true, 00:04:35.551 "flush": true, 00:04:35.551 "reset": true, 00:04:35.551 "nvme_admin": false, 00:04:35.551 "nvme_io": false, 00:04:35.551 "nvme_io_md": false, 00:04:35.551 "write_zeroes": true, 00:04:35.551 "zcopy": true, 00:04:35.551 "get_zone_info": false, 00:04:35.551 "zone_management": false, 00:04:35.551 "zone_append": false, 00:04:35.551 "compare": false, 00:04:35.551 "compare_and_write": false, 00:04:35.551 "abort": true, 00:04:35.551 "seek_hole": false, 00:04:35.551 "seek_data": false, 00:04:35.551 "copy": true, 00:04:35.551 "nvme_iov_md": false 00:04:35.551 }, 00:04:35.551 "memory_domains": [ 00:04:35.551 { 00:04:35.551 "dma_device_id": "system", 00:04:35.551 "dma_device_type": 1 00:04:35.551 }, 00:04:35.551 { 00:04:35.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.551 "dma_device_type": 2 00:04:35.551 } 00:04:35.551 ], 00:04:35.551 "driver_specific": {} 00:04:35.551 }, 00:04:35.551 { 
00:04:35.551 "name": "Passthru0", 00:04:35.551 "aliases": [ 00:04:35.551 "20832082-61cc-530b-8998-063fed44cb9f" 00:04:35.551 ], 00:04:35.551 "product_name": "passthru", 00:04:35.551 "block_size": 512, 00:04:35.551 "num_blocks": 16384, 00:04:35.551 "uuid": "20832082-61cc-530b-8998-063fed44cb9f", 00:04:35.551 "assigned_rate_limits": { 00:04:35.551 "rw_ios_per_sec": 0, 00:04:35.551 "rw_mbytes_per_sec": 0, 00:04:35.551 "r_mbytes_per_sec": 0, 00:04:35.551 "w_mbytes_per_sec": 0 00:04:35.551 }, 00:04:35.551 "claimed": false, 00:04:35.551 "zoned": false, 00:04:35.551 "supported_io_types": { 00:04:35.551 "read": true, 00:04:35.551 "write": true, 00:04:35.551 "unmap": true, 00:04:35.551 "flush": true, 00:04:35.551 "reset": true, 00:04:35.551 "nvme_admin": false, 00:04:35.551 "nvme_io": false, 00:04:35.551 "nvme_io_md": false, 00:04:35.551 "write_zeroes": true, 00:04:35.551 "zcopy": true, 00:04:35.551 "get_zone_info": false, 00:04:35.551 "zone_management": false, 00:04:35.551 "zone_append": false, 00:04:35.551 "compare": false, 00:04:35.551 "compare_and_write": false, 00:04:35.551 "abort": true, 00:04:35.551 "seek_hole": false, 00:04:35.551 "seek_data": false, 00:04:35.551 "copy": true, 00:04:35.551 "nvme_iov_md": false 00:04:35.551 }, 00:04:35.551 "memory_domains": [ 00:04:35.551 { 00:04:35.551 "dma_device_id": "system", 00:04:35.551 "dma_device_type": 1 00:04:35.551 }, 00:04:35.551 { 00:04:35.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.551 "dma_device_type": 2 00:04:35.551 } 00:04:35.551 ], 00:04:35.551 "driver_specific": { 00:04:35.551 "passthru": { 00:04:35.551 "name": "Passthru0", 00:04:35.551 "base_bdev_name": "Malloc0" 00:04:35.551 } 00:04:35.551 } 00:04:35.551 } 00:04:35.551 ]' 00:04:35.551 15:22:14 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:35.551 15:22:14 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:35.551 15:22:14 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:35.551 15:22:14 
rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.551 15:22:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.551 15:22:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.551 15:22:14 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:35.551 15:22:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.551 15:22:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.551 15:22:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.551 15:22:14 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:35.552 15:22:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.552 15:22:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.552 15:22:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.552 15:22:14 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:35.552 15:22:14 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:35.552 15:22:14 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:35.552 00:04:35.552 real 0m0.303s 00:04:35.552 user 0m0.189s 00:04:35.552 sys 0m0.043s 00:04:35.552 15:22:14 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:35.552 15:22:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.552 ************************************ 00:04:35.552 END TEST rpc_integrity 00:04:35.552 ************************************ 00:04:35.552 15:22:14 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:35.552 15:22:14 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:35.552 15:22:14 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:35.552 15:22:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.812 ************************************ 00:04:35.812 START TEST rpc_plugins 
00:04:35.812 ************************************ 00:04:35.812 15:22:15 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:35.812 15:22:15 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:35.812 15:22:15 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.812 15:22:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:35.812 15:22:15 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.812 15:22:15 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:35.812 15:22:15 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:35.812 15:22:15 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.812 15:22:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:35.812 15:22:15 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.812 15:22:15 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:35.812 { 00:04:35.812 "name": "Malloc1", 00:04:35.812 "aliases": [ 00:04:35.812 "ed59da94-b9c1-4d1e-96d3-859a3a2bfa04" 00:04:35.812 ], 00:04:35.812 "product_name": "Malloc disk", 00:04:35.812 "block_size": 4096, 00:04:35.812 "num_blocks": 256, 00:04:35.812 "uuid": "ed59da94-b9c1-4d1e-96d3-859a3a2bfa04", 00:04:35.812 "assigned_rate_limits": { 00:04:35.812 "rw_ios_per_sec": 0, 00:04:35.812 "rw_mbytes_per_sec": 0, 00:04:35.812 "r_mbytes_per_sec": 0, 00:04:35.812 "w_mbytes_per_sec": 0 00:04:35.812 }, 00:04:35.812 "claimed": false, 00:04:35.812 "zoned": false, 00:04:35.812 "supported_io_types": { 00:04:35.812 "read": true, 00:04:35.812 "write": true, 00:04:35.812 "unmap": true, 00:04:35.812 "flush": true, 00:04:35.812 "reset": true, 00:04:35.812 "nvme_admin": false, 00:04:35.812 "nvme_io": false, 00:04:35.812 "nvme_io_md": false, 00:04:35.812 "write_zeroes": true, 00:04:35.812 "zcopy": true, 00:04:35.812 "get_zone_info": false, 00:04:35.812 "zone_management": false, 00:04:35.812 
"zone_append": false, 00:04:35.812 "compare": false, 00:04:35.812 "compare_and_write": false, 00:04:35.812 "abort": true, 00:04:35.812 "seek_hole": false, 00:04:35.812 "seek_data": false, 00:04:35.812 "copy": true, 00:04:35.812 "nvme_iov_md": false 00:04:35.812 }, 00:04:35.812 "memory_domains": [ 00:04:35.812 { 00:04:35.812 "dma_device_id": "system", 00:04:35.812 "dma_device_type": 1 00:04:35.812 }, 00:04:35.812 { 00:04:35.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.812 "dma_device_type": 2 00:04:35.812 } 00:04:35.812 ], 00:04:35.812 "driver_specific": {} 00:04:35.812 } 00:04:35.812 ]' 00:04:35.812 15:22:15 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:35.812 15:22:15 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:35.812 15:22:15 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:35.812 15:22:15 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.812 15:22:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:35.812 15:22:15 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.812 15:22:15 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:35.812 15:22:15 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.812 15:22:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:35.812 15:22:15 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.812 15:22:15 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:35.812 15:22:15 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:35.812 15:22:15 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:35.812 00:04:35.812 real 0m0.152s 00:04:35.812 user 0m0.095s 00:04:35.812 sys 0m0.019s 00:04:35.812 15:22:15 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:35.812 15:22:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:35.812 ************************************ 
00:04:35.812 END TEST rpc_plugins 00:04:35.812 ************************************ 00:04:35.812 15:22:15 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:35.812 15:22:15 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:35.812 15:22:15 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:35.812 15:22:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.812 ************************************ 00:04:35.812 START TEST rpc_trace_cmd_test 00:04:35.812 ************************************ 00:04:35.812 15:22:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:35.812 15:22:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:35.812 15:22:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:35.812 15:22:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.812 15:22:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:36.073 15:22:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.073 15:22:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:36.073 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2878051", 00:04:36.073 "tpoint_group_mask": "0x8", 00:04:36.073 "iscsi_conn": { 00:04:36.073 "mask": "0x2", 00:04:36.073 "tpoint_mask": "0x0" 00:04:36.073 }, 00:04:36.073 "scsi": { 00:04:36.073 "mask": "0x4", 00:04:36.073 "tpoint_mask": "0x0" 00:04:36.073 }, 00:04:36.073 "bdev": { 00:04:36.073 "mask": "0x8", 00:04:36.073 "tpoint_mask": "0xffffffffffffffff" 00:04:36.073 }, 00:04:36.073 "nvmf_rdma": { 00:04:36.073 "mask": "0x10", 00:04:36.073 "tpoint_mask": "0x0" 00:04:36.073 }, 00:04:36.073 "nvmf_tcp": { 00:04:36.073 "mask": "0x20", 00:04:36.073 "tpoint_mask": "0x0" 00:04:36.073 }, 00:04:36.073 "ftl": { 00:04:36.073 "mask": "0x40", 00:04:36.073 "tpoint_mask": "0x0" 00:04:36.073 }, 00:04:36.073 "blobfs": { 00:04:36.073 "mask": "0x80", 00:04:36.073 
"tpoint_mask": "0x0" 00:04:36.073 }, 00:04:36.073 "dsa": { 00:04:36.073 "mask": "0x200", 00:04:36.073 "tpoint_mask": "0x0" 00:04:36.073 }, 00:04:36.073 "thread": { 00:04:36.073 "mask": "0x400", 00:04:36.073 "tpoint_mask": "0x0" 00:04:36.073 }, 00:04:36.073 "nvme_pcie": { 00:04:36.073 "mask": "0x800", 00:04:36.073 "tpoint_mask": "0x0" 00:04:36.073 }, 00:04:36.073 "iaa": { 00:04:36.073 "mask": "0x1000", 00:04:36.073 "tpoint_mask": "0x0" 00:04:36.073 }, 00:04:36.073 "nvme_tcp": { 00:04:36.073 "mask": "0x2000", 00:04:36.073 "tpoint_mask": "0x0" 00:04:36.073 }, 00:04:36.073 "bdev_nvme": { 00:04:36.073 "mask": "0x4000", 00:04:36.073 "tpoint_mask": "0x0" 00:04:36.073 }, 00:04:36.073 "sock": { 00:04:36.073 "mask": "0x8000", 00:04:36.073 "tpoint_mask": "0x0" 00:04:36.073 }, 00:04:36.073 "blob": { 00:04:36.073 "mask": "0x10000", 00:04:36.073 "tpoint_mask": "0x0" 00:04:36.073 }, 00:04:36.073 "bdev_raid": { 00:04:36.073 "mask": "0x20000", 00:04:36.073 "tpoint_mask": "0x0" 00:04:36.073 } 00:04:36.073 }' 00:04:36.073 15:22:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:36.073 15:22:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:04:36.073 15:22:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:36.073 15:22:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:36.073 15:22:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:36.073 15:22:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:36.073 15:22:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:36.073 15:22:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:36.073 15:22:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:36.073 15:22:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:36.073 00:04:36.073 real 0m0.236s 00:04:36.073 user 0m0.196s 00:04:36.073 sys 0m0.028s 
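The `trace_get_info` JSON above maps each tracepoint group to a bit in `tpoint_group_mask` (here `0x8`, which selects the `bdev` group, whose `tpoint_mask` is fully set). A minimal Python sketch of how such a result could be decoded — the helper name and structure are my own; only the mask values come from the output above:

```python
# Decode an SPDK trace_get_info-style result: a group is enabled when its
# "mask" bit is set in the global tpoint_group_mask. Helper name is illustrative.
def enabled_groups(info: dict) -> list[str]:
    group_mask = int(info["tpoint_group_mask"], 16)
    return sorted(
        name
        for name, val in info.items()
        if isinstance(val, dict) and int(val["mask"], 16) & group_mask
    )

# Values taken from the trace output above (abridged to a few groups).
info = {
    "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2878051",
    "tpoint_group_mask": "0x8",
    "iscsi_conn": {"mask": "0x2", "tpoint_mask": "0x0"},
    "scsi": {"mask": "0x4", "tpoint_mask": "0x0"},
    "bdev": {"mask": "0x8", "tpoint_mask": "0xffffffffffffffff"},
    "nvmf_tcp": {"mask": "0x20", "tpoint_mask": "0x0"},
}
print(enabled_groups(info))  # only "bdev" matches group mask 0x8
```

This mirrors what the `jq 'has("bdev")'` and `.bdev.tpoint_mask` checks in the trace assert: with `-e bdev` (mask `0x8`), only the bdev group's tracepoints are armed.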
00:04:36.073 15:22:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:36.073 15:22:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:36.073 ************************************ 00:04:36.073 END TEST rpc_trace_cmd_test 00:04:36.073 ************************************ 00:04:36.335 15:22:15 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:36.335 15:22:15 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:36.335 15:22:15 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:36.335 15:22:15 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:36.335 15:22:15 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:36.335 15:22:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.335 ************************************ 00:04:36.335 START TEST rpc_daemon_integrity 00:04:36.335 ************************************ 00:04:36.335 15:22:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:36.335 15:22:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:36.335 15:22:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.335 15:22:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.335 15:22:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.335 15:22:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:36.335 15:22:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:36.335 15:22:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:36.335 15:22:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:36.335 15:22:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.335 15:22:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.335 15:22:15 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.335 15:22:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:36.335 15:22:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:36.335 15:22:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.335 15:22:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.335 15:22:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.335 15:22:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:36.335 { 00:04:36.335 "name": "Malloc2", 00:04:36.335 "aliases": [ 00:04:36.335 "75a0abd1-dae6-417b-9f81-0ba19ceb547f" 00:04:36.335 ], 00:04:36.335 "product_name": "Malloc disk", 00:04:36.335 "block_size": 512, 00:04:36.335 "num_blocks": 16384, 00:04:36.335 "uuid": "75a0abd1-dae6-417b-9f81-0ba19ceb547f", 00:04:36.335 "assigned_rate_limits": { 00:04:36.335 "rw_ios_per_sec": 0, 00:04:36.335 "rw_mbytes_per_sec": 0, 00:04:36.335 "r_mbytes_per_sec": 0, 00:04:36.335 "w_mbytes_per_sec": 0 00:04:36.335 }, 00:04:36.335 "claimed": false, 00:04:36.335 "zoned": false, 00:04:36.335 "supported_io_types": { 00:04:36.335 "read": true, 00:04:36.335 "write": true, 00:04:36.335 "unmap": true, 00:04:36.335 "flush": true, 00:04:36.335 "reset": true, 00:04:36.335 "nvme_admin": false, 00:04:36.335 "nvme_io": false, 00:04:36.335 "nvme_io_md": false, 00:04:36.335 "write_zeroes": true, 00:04:36.335 "zcopy": true, 00:04:36.335 "get_zone_info": false, 00:04:36.335 "zone_management": false, 00:04:36.335 "zone_append": false, 00:04:36.335 "compare": false, 00:04:36.335 "compare_and_write": false, 00:04:36.335 "abort": true, 00:04:36.335 "seek_hole": false, 00:04:36.335 "seek_data": false, 00:04:36.335 "copy": true, 00:04:36.335 "nvme_iov_md": false 00:04:36.335 }, 00:04:36.335 "memory_domains": [ 00:04:36.335 { 00:04:36.335 "dma_device_id": "system", 00:04:36.335 "dma_device_type": 1 00:04:36.335 }, 
00:04:36.335 { 00:04:36.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:36.335 "dma_device_type": 2 00:04:36.335 } 00:04:36.335 ], 00:04:36.335 "driver_specific": {} 00:04:36.335 } 00:04:36.335 ]' 00:04:36.335 15:22:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:36.335 15:22:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:36.335 15:22:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:36.335 15:22:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.335 15:22:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.335 [2024-10-01 15:22:15.718857] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:36.335 [2024-10-01 15:22:15.718908] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:36.335 [2024-10-01 15:22:15.718928] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d152a0 00:04:36.335 [2024-10-01 15:22:15.718937] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:36.335 [2024-10-01 15:22:15.720392] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:36.335 [2024-10-01 15:22:15.720429] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:36.335 Passthru0 00:04:36.335 15:22:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.335 15:22:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:36.335 15:22:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.335 15:22:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.335 15:22:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.335 15:22:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:36.335 { 
00:04:36.335 "name": "Malloc2", 00:04:36.335 "aliases": [ 00:04:36.335 "75a0abd1-dae6-417b-9f81-0ba19ceb547f" 00:04:36.335 ], 00:04:36.335 "product_name": "Malloc disk", 00:04:36.335 "block_size": 512, 00:04:36.335 "num_blocks": 16384, 00:04:36.335 "uuid": "75a0abd1-dae6-417b-9f81-0ba19ceb547f", 00:04:36.335 "assigned_rate_limits": { 00:04:36.335 "rw_ios_per_sec": 0, 00:04:36.335 "rw_mbytes_per_sec": 0, 00:04:36.335 "r_mbytes_per_sec": 0, 00:04:36.335 "w_mbytes_per_sec": 0 00:04:36.335 }, 00:04:36.335 "claimed": true, 00:04:36.335 "claim_type": "exclusive_write", 00:04:36.335 "zoned": false, 00:04:36.335 "supported_io_types": { 00:04:36.335 "read": true, 00:04:36.335 "write": true, 00:04:36.335 "unmap": true, 00:04:36.335 "flush": true, 00:04:36.335 "reset": true, 00:04:36.335 "nvme_admin": false, 00:04:36.335 "nvme_io": false, 00:04:36.335 "nvme_io_md": false, 00:04:36.335 "write_zeroes": true, 00:04:36.335 "zcopy": true, 00:04:36.335 "get_zone_info": false, 00:04:36.335 "zone_management": false, 00:04:36.335 "zone_append": false, 00:04:36.335 "compare": false, 00:04:36.335 "compare_and_write": false, 00:04:36.335 "abort": true, 00:04:36.335 "seek_hole": false, 00:04:36.335 "seek_data": false, 00:04:36.335 "copy": true, 00:04:36.335 "nvme_iov_md": false 00:04:36.335 }, 00:04:36.335 "memory_domains": [ 00:04:36.335 { 00:04:36.335 "dma_device_id": "system", 00:04:36.335 "dma_device_type": 1 00:04:36.335 }, 00:04:36.335 { 00:04:36.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:36.335 "dma_device_type": 2 00:04:36.335 } 00:04:36.335 ], 00:04:36.335 "driver_specific": {} 00:04:36.335 }, 00:04:36.335 { 00:04:36.335 "name": "Passthru0", 00:04:36.335 "aliases": [ 00:04:36.335 "d49e20f5-9938-5644-bdfa-a74a71f87272" 00:04:36.335 ], 00:04:36.335 "product_name": "passthru", 00:04:36.335 "block_size": 512, 00:04:36.335 "num_blocks": 16384, 00:04:36.335 "uuid": "d49e20f5-9938-5644-bdfa-a74a71f87272", 00:04:36.335 "assigned_rate_limits": { 00:04:36.335 "rw_ios_per_sec": 0, 
00:04:36.335 "rw_mbytes_per_sec": 0, 00:04:36.335 "r_mbytes_per_sec": 0, 00:04:36.335 "w_mbytes_per_sec": 0 00:04:36.335 }, 00:04:36.335 "claimed": false, 00:04:36.335 "zoned": false, 00:04:36.335 "supported_io_types": { 00:04:36.335 "read": true, 00:04:36.335 "write": true, 00:04:36.335 "unmap": true, 00:04:36.335 "flush": true, 00:04:36.335 "reset": true, 00:04:36.335 "nvme_admin": false, 00:04:36.335 "nvme_io": false, 00:04:36.335 "nvme_io_md": false, 00:04:36.335 "write_zeroes": true, 00:04:36.336 "zcopy": true, 00:04:36.336 "get_zone_info": false, 00:04:36.336 "zone_management": false, 00:04:36.336 "zone_append": false, 00:04:36.336 "compare": false, 00:04:36.336 "compare_and_write": false, 00:04:36.336 "abort": true, 00:04:36.336 "seek_hole": false, 00:04:36.336 "seek_data": false, 00:04:36.336 "copy": true, 00:04:36.336 "nvme_iov_md": false 00:04:36.336 }, 00:04:36.336 "memory_domains": [ 00:04:36.336 { 00:04:36.336 "dma_device_id": "system", 00:04:36.336 "dma_device_type": 1 00:04:36.336 }, 00:04:36.336 { 00:04:36.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:36.336 "dma_device_type": 2 00:04:36.336 } 00:04:36.336 ], 00:04:36.336 "driver_specific": { 00:04:36.336 "passthru": { 00:04:36.336 "name": "Passthru0", 00:04:36.336 "base_bdev_name": "Malloc2" 00:04:36.336 } 00:04:36.336 } 00:04:36.336 } 00:04:36.336 ]' 00:04:36.336 15:22:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:36.598 15:22:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:36.598 15:22:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:36.598 15:22:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.598 15:22:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.598 15:22:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.598 15:22:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd 
bdev_malloc_delete Malloc2 00:04:36.598 15:22:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.598 15:22:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.598 15:22:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.598 15:22:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:36.598 15:22:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.598 15:22:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.598 15:22:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.598 15:22:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:36.598 15:22:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:36.598 15:22:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:36.598 00:04:36.598 real 0m0.303s 00:04:36.598 user 0m0.182s 00:04:36.598 sys 0m0.052s 00:04:36.598 15:22:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:36.598 15:22:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.598 ************************************ 00:04:36.598 END TEST rpc_daemon_integrity 00:04:36.598 ************************************ 00:04:36.598 15:22:15 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:36.598 15:22:15 rpc -- rpc/rpc.sh@84 -- # killprocess 2878051 00:04:36.598 15:22:15 rpc -- common/autotest_common.sh@950 -- # '[' -z 2878051 ']' 00:04:36.598 15:22:15 rpc -- common/autotest_common.sh@954 -- # kill -0 2878051 00:04:36.598 15:22:15 rpc -- common/autotest_common.sh@955 -- # uname 00:04:36.598 15:22:15 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:36.598 15:22:15 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2878051 00:04:36.598 15:22:15 rpc -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:04:36.598 15:22:15 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:36.598 15:22:15 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2878051' 00:04:36.598 killing process with pid 2878051 00:04:36.598 15:22:15 rpc -- common/autotest_common.sh@969 -- # kill 2878051 00:04:36.598 15:22:15 rpc -- common/autotest_common.sh@974 -- # wait 2878051 00:04:36.858 00:04:36.858 real 0m2.687s 00:04:36.858 user 0m3.414s 00:04:36.858 sys 0m0.813s 00:04:36.858 15:22:16 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:36.858 15:22:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.858 ************************************ 00:04:36.858 END TEST rpc 00:04:36.858 ************************************ 00:04:36.858 15:22:16 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:36.858 15:22:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:36.858 15:22:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:36.858 15:22:16 -- common/autotest_common.sh@10 -- # set +x 00:04:36.858 ************************************ 00:04:36.858 START TEST skip_rpc 00:04:36.858 ************************************ 00:04:36.858 15:22:16 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:37.118 * Looking for test storage... 
00:04:37.118 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:37.118 15:22:16 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:37.118 15:22:16 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:37.118 15:22:16 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:37.118 15:22:16 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:37.118 15:22:16 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:37.118 15:22:16 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:37.118 15:22:16 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:37.118 15:22:16 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:37.118 15:22:16 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:37.118 15:22:16 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:37.118 15:22:16 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:37.118 15:22:16 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:37.118 15:22:16 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:37.118 15:22:16 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:37.118 15:22:16 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:37.118 15:22:16 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:37.118 15:22:16 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:37.118 15:22:16 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:37.118 15:22:16 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:37.118 15:22:16 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:37.118 15:22:16 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:37.118 15:22:16 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:37.118 15:22:16 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:37.118 15:22:16 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:37.118 15:22:16 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:37.118 15:22:16 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:37.118 15:22:16 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:37.118 15:22:16 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:37.118 15:22:16 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:37.118 15:22:16 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:37.118 15:22:16 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:37.118 15:22:16 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:37.118 15:22:16 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:37.118 15:22:16 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:37.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.118 --rc genhtml_branch_coverage=1 00:04:37.118 --rc genhtml_function_coverage=1 00:04:37.118 --rc genhtml_legend=1 00:04:37.118 --rc geninfo_all_blocks=1 00:04:37.118 --rc geninfo_unexecuted_blocks=1 00:04:37.118 00:04:37.118 ' 00:04:37.118 15:22:16 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:37.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.118 --rc genhtml_branch_coverage=1 00:04:37.118 --rc genhtml_function_coverage=1 00:04:37.118 --rc genhtml_legend=1 00:04:37.118 --rc geninfo_all_blocks=1 00:04:37.118 --rc geninfo_unexecuted_blocks=1 00:04:37.118 00:04:37.118 ' 00:04:37.118 15:22:16 skip_rpc -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:04:37.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.118 --rc genhtml_branch_coverage=1 00:04:37.118 --rc genhtml_function_coverage=1 00:04:37.118 --rc genhtml_legend=1 00:04:37.118 --rc geninfo_all_blocks=1 00:04:37.118 --rc geninfo_unexecuted_blocks=1 00:04:37.118 00:04:37.118 ' 00:04:37.118 15:22:16 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:37.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.118 --rc genhtml_branch_coverage=1 00:04:37.118 --rc genhtml_function_coverage=1 00:04:37.118 --rc genhtml_legend=1 00:04:37.118 --rc geninfo_all_blocks=1 00:04:37.118 --rc geninfo_unexecuted_blocks=1 00:04:37.118 00:04:37.118 ' 00:04:37.118 15:22:16 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:37.118 15:22:16 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:37.118 15:22:16 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:37.118 15:22:16 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:37.118 15:22:16 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:37.118 15:22:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.118 ************************************ 00:04:37.118 START TEST skip_rpc 00:04:37.118 ************************************ 00:04:37.118 15:22:16 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:37.118 15:22:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2878898 00:04:37.118 15:22:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:37.118 15:22:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:37.118 15:22:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
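The `cmp_versions` / `lt 1.15 2` trace above (from `scripts/common.sh`) compares dotted version strings field by field to decide which lcov option set to export. A rough Python equivalent, reconstructed from the traced loop rather than the script source — the real helper also splits on `-` and `:`, which this simplified sketch omits:

```python
# Field-wise dotted-version compare, mirroring the ver1/ver2 loop in the
# traced cmp_versions: split on ".", compare numerically, pad missing
# fields with 0 (so "2" compares as "2.0").
def version_lt(a: str, b: str) -> bool:
    pa = [int(x) for x in a.split(".")]
    pb = [int(x) for x in b.split(".")]
    n = max(len(pa), len(pb))
    pa += [0] * (n - len(pa))
    pb += [0] * (n - len(pb))
    return pa < pb

print(version_lt("1.15", "2"))    # True: lcov 1.15 predates 2, as in the trace
print(version_lt("2.39.2", "2"))  # False
```

List comparison in Python is already lexicographic element-by-element, which matches the shell loop's per-field `>` / `<` checks once both lists are padded to equal length.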
00:04:37.378 [2024-10-01 15:22:16.613457] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:04:37.378 [2024-10-01 15:22:16.613520] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2878898 ] 00:04:37.378 [2024-10-01 15:22:16.648221] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:04:37.378 [2024-10-01 15:22:16.697329] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.378 [2024-10-01 15:22:16.743720] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.663 15:22:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:42.663 15:22:21 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:42.663 15:22:21 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:42.663 15:22:21 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:42.663 15:22:21 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:42.663 15:22:21 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:42.663 15:22:21 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:42.663 15:22:21 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:42.663 15:22:21 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.663 15:22:21 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.663 15:22:21 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:42.663 15:22:21 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:42.663 
15:22:21 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:42.663 15:22:21 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:42.663 15:22:21 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:42.663 15:22:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:42.663 15:22:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2878898 00:04:42.663 15:22:21 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 2878898 ']' 00:04:42.663 15:22:21 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 2878898 00:04:42.663 15:22:21 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:42.664 15:22:21 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:42.664 15:22:21 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2878898 00:04:42.664 15:22:21 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:42.664 15:22:21 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:42.664 15:22:21 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2878898' 00:04:42.664 killing process with pid 2878898 00:04:42.664 15:22:21 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 2878898 00:04:42.664 15:22:21 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 2878898 00:04:42.664 00:04:42.664 real 0m5.276s 00:04:42.664 user 0m5.029s 00:04:42.664 sys 0m0.295s 00:04:42.664 15:22:21 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:42.664 15:22:21 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.664 ************************************ 00:04:42.664 END TEST skip_rpc 00:04:42.664 ************************************ 00:04:42.664 15:22:21 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 
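The `killprocess` helper traced above (`kill -0 2878898`) probes whether the target pid is still alive before checking its name and sending the real signal; signal 0 performs the existence and permission checks without delivering anything. The same probe in Python, as a small illustrative sketch (the function name is mine):

```python
import os

def pid_alive(pid: int) -> bool:
    """Return True if a process with this pid exists (mirrors `kill -0 $pid`)."""
    try:
        os.kill(pid, 0)  # signal 0: existence/permission check only
    except ProcessLookupError:
        return False
    except PermissionError:
        return True  # the process exists but belongs to another user
    return True

print(pid_alive(os.getpid()))  # True: our own pid always exists
```

One caveat the shell helper also has to live with: between the probe and the real `kill`, the pid can exit and even be reused, so `kill -0` is a heuristic, not a guarantee.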
00:04:42.664 15:22:21 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:42.664 15:22:21 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:42.664 15:22:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.664 ************************************ 00:04:42.664 START TEST skip_rpc_with_json 00:04:42.664 ************************************ 00:04:42.664 15:22:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:42.664 15:22:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:42.664 15:22:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2879941 00:04:42.664 15:22:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:42.664 15:22:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2879941 00:04:42.664 15:22:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:42.664 15:22:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 2879941 ']' 00:04:42.664 15:22:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.664 15:22:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:42.664 15:22:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:42.664 15:22:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:42.664 15:22:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:42.664 [2024-10-01 15:22:21.961793] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:04:42.664 [2024-10-01 15:22:21.961844] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2879941 ] 00:04:42.664 [2024-10-01 15:22:21.992557] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:04:42.664 [2024-10-01 15:22:22.037101] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.664 [2024-10-01 15:22:22.066081] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.606 15:22:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:43.606 15:22:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:43.606 15:22:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:43.606 15:22:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.606 15:22:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:43.606 [2024-10-01 15:22:22.745510] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:43.606 request: 00:04:43.606 { 00:04:43.606 "trtype": "tcp", 00:04:43.606 "method": "nvmf_get_transports", 00:04:43.606 "req_id": 1 00:04:43.606 } 00:04:43.606 Got JSON-RPC error response 00:04:43.606 response: 00:04:43.606 { 00:04:43.606 "code": -19, 00:04:43.606 "message": "No such device" 00:04:43.606 } 00:04:43.606 15:22:22 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:43.606 15:22:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:43.606 15:22:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.606 15:22:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:43.606 [2024-10-01 15:22:22.757605] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:43.606 15:22:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.606 15:22:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:43.606 15:22:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.606 15:22:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:43.606 15:22:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.606 15:22:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:43.606 { 00:04:43.606 "subsystems": [ 00:04:43.606 { 00:04:43.606 "subsystem": "fsdev", 00:04:43.606 "config": [ 00:04:43.606 { 00:04:43.606 "method": "fsdev_set_opts", 00:04:43.606 "params": { 00:04:43.606 "fsdev_io_pool_size": 65535, 00:04:43.606 "fsdev_io_cache_size": 256 00:04:43.606 } 00:04:43.606 } 00:04:43.606 ] 00:04:43.606 }, 00:04:43.606 { 00:04:43.606 "subsystem": "vfio_user_target", 00:04:43.606 "config": null 00:04:43.606 }, 00:04:43.606 { 00:04:43.606 "subsystem": "keyring", 00:04:43.606 "config": [] 00:04:43.606 }, 00:04:43.606 { 00:04:43.606 "subsystem": "iobuf", 00:04:43.606 "config": [ 00:04:43.606 { 00:04:43.606 "method": "iobuf_set_options", 00:04:43.606 "params": { 00:04:43.606 "small_pool_count": 8192, 00:04:43.606 "large_pool_count": 1024, 00:04:43.606 "small_bufsize": 8192, 00:04:43.606 "large_bufsize": 135168 
00:04:43.606 } 00:04:43.606 } 00:04:43.606 ] 00:04:43.606 }, 00:04:43.606 { 00:04:43.606 "subsystem": "sock", 00:04:43.606 "config": [ 00:04:43.606 { 00:04:43.606 "method": "sock_set_default_impl", 00:04:43.606 "params": { 00:04:43.606 "impl_name": "posix" 00:04:43.606 } 00:04:43.606 }, 00:04:43.606 { 00:04:43.606 "method": "sock_impl_set_options", 00:04:43.606 "params": { 00:04:43.606 "impl_name": "ssl", 00:04:43.606 "recv_buf_size": 4096, 00:04:43.606 "send_buf_size": 4096, 00:04:43.606 "enable_recv_pipe": true, 00:04:43.606 "enable_quickack": false, 00:04:43.606 "enable_placement_id": 0, 00:04:43.606 "enable_zerocopy_send_server": true, 00:04:43.606 "enable_zerocopy_send_client": false, 00:04:43.606 "zerocopy_threshold": 0, 00:04:43.606 "tls_version": 0, 00:04:43.606 "enable_ktls": false 00:04:43.606 } 00:04:43.606 }, 00:04:43.606 { 00:04:43.606 "method": "sock_impl_set_options", 00:04:43.606 "params": { 00:04:43.606 "impl_name": "posix", 00:04:43.606 "recv_buf_size": 2097152, 00:04:43.606 "send_buf_size": 2097152, 00:04:43.606 "enable_recv_pipe": true, 00:04:43.606 "enable_quickack": false, 00:04:43.606 "enable_placement_id": 0, 00:04:43.606 "enable_zerocopy_send_server": true, 00:04:43.606 "enable_zerocopy_send_client": false, 00:04:43.606 "zerocopy_threshold": 0, 00:04:43.606 "tls_version": 0, 00:04:43.606 "enable_ktls": false 00:04:43.606 } 00:04:43.606 } 00:04:43.606 ] 00:04:43.606 }, 00:04:43.606 { 00:04:43.606 "subsystem": "vmd", 00:04:43.606 "config": [] 00:04:43.606 }, 00:04:43.606 { 00:04:43.606 "subsystem": "accel", 00:04:43.606 "config": [ 00:04:43.606 { 00:04:43.606 "method": "accel_set_options", 00:04:43.606 "params": { 00:04:43.606 "small_cache_size": 128, 00:04:43.606 "large_cache_size": 16, 00:04:43.606 "task_count": 2048, 00:04:43.606 "sequence_count": 2048, 00:04:43.606 "buf_count": 2048 00:04:43.606 } 00:04:43.606 } 00:04:43.606 ] 00:04:43.606 }, 00:04:43.606 { 00:04:43.606 "subsystem": "bdev", 00:04:43.606 "config": [ 00:04:43.606 { 
00:04:43.606 "method": "bdev_set_options", 00:04:43.606 "params": { 00:04:43.606 "bdev_io_pool_size": 65535, 00:04:43.606 "bdev_io_cache_size": 256, 00:04:43.606 "bdev_auto_examine": true, 00:04:43.606 "iobuf_small_cache_size": 128, 00:04:43.606 "iobuf_large_cache_size": 16 00:04:43.606 } 00:04:43.606 }, 00:04:43.606 { 00:04:43.606 "method": "bdev_raid_set_options", 00:04:43.606 "params": { 00:04:43.606 "process_window_size_kb": 1024, 00:04:43.606 "process_max_bandwidth_mb_sec": 0 00:04:43.606 } 00:04:43.606 }, 00:04:43.606 { 00:04:43.606 "method": "bdev_iscsi_set_options", 00:04:43.606 "params": { 00:04:43.606 "timeout_sec": 30 00:04:43.606 } 00:04:43.606 }, 00:04:43.606 { 00:04:43.606 "method": "bdev_nvme_set_options", 00:04:43.606 "params": { 00:04:43.606 "action_on_timeout": "none", 00:04:43.606 "timeout_us": 0, 00:04:43.606 "timeout_admin_us": 0, 00:04:43.606 "keep_alive_timeout_ms": 10000, 00:04:43.606 "arbitration_burst": 0, 00:04:43.606 "low_priority_weight": 0, 00:04:43.606 "medium_priority_weight": 0, 00:04:43.606 "high_priority_weight": 0, 00:04:43.606 "nvme_adminq_poll_period_us": 10000, 00:04:43.606 "nvme_ioq_poll_period_us": 0, 00:04:43.606 "io_queue_requests": 0, 00:04:43.606 "delay_cmd_submit": true, 00:04:43.606 "transport_retry_count": 4, 00:04:43.606 "bdev_retry_count": 3, 00:04:43.606 "transport_ack_timeout": 0, 00:04:43.606 "ctrlr_loss_timeout_sec": 0, 00:04:43.606 "reconnect_delay_sec": 0, 00:04:43.606 "fast_io_fail_timeout_sec": 0, 00:04:43.606 "disable_auto_failback": false, 00:04:43.606 "generate_uuids": false, 00:04:43.606 "transport_tos": 0, 00:04:43.606 "nvme_error_stat": false, 00:04:43.606 "rdma_srq_size": 0, 00:04:43.606 "io_path_stat": false, 00:04:43.606 "allow_accel_sequence": false, 00:04:43.606 "rdma_max_cq_size": 0, 00:04:43.606 "rdma_cm_event_timeout_ms": 0, 00:04:43.606 "dhchap_digests": [ 00:04:43.606 "sha256", 00:04:43.606 "sha384", 00:04:43.606 "sha512" 00:04:43.606 ], 00:04:43.606 "dhchap_dhgroups": [ 00:04:43.606 "null", 
00:04:43.606 "ffdhe2048", 00:04:43.606 "ffdhe3072", 00:04:43.606 "ffdhe4096", 00:04:43.606 "ffdhe6144", 00:04:43.606 "ffdhe8192" 00:04:43.606 ] 00:04:43.606 } 00:04:43.606 }, 00:04:43.606 { 00:04:43.606 "method": "bdev_nvme_set_hotplug", 00:04:43.606 "params": { 00:04:43.606 "period_us": 100000, 00:04:43.606 "enable": false 00:04:43.606 } 00:04:43.606 }, 00:04:43.606 { 00:04:43.606 "method": "bdev_wait_for_examine" 00:04:43.606 } 00:04:43.606 ] 00:04:43.606 }, 00:04:43.606 { 00:04:43.606 "subsystem": "scsi", 00:04:43.606 "config": null 00:04:43.606 }, 00:04:43.606 { 00:04:43.606 "subsystem": "scheduler", 00:04:43.606 "config": [ 00:04:43.606 { 00:04:43.606 "method": "framework_set_scheduler", 00:04:43.606 "params": { 00:04:43.606 "name": "static" 00:04:43.606 } 00:04:43.606 } 00:04:43.606 ] 00:04:43.606 }, 00:04:43.606 { 00:04:43.606 "subsystem": "vhost_scsi", 00:04:43.606 "config": [] 00:04:43.606 }, 00:04:43.606 { 00:04:43.606 "subsystem": "vhost_blk", 00:04:43.606 "config": [] 00:04:43.606 }, 00:04:43.606 { 00:04:43.606 "subsystem": "ublk", 00:04:43.606 "config": [] 00:04:43.606 }, 00:04:43.606 { 00:04:43.606 "subsystem": "nbd", 00:04:43.606 "config": [] 00:04:43.606 }, 00:04:43.606 { 00:04:43.606 "subsystem": "nvmf", 00:04:43.606 "config": [ 00:04:43.606 { 00:04:43.606 "method": "nvmf_set_config", 00:04:43.606 "params": { 00:04:43.607 "discovery_filter": "match_any", 00:04:43.607 "admin_cmd_passthru": { 00:04:43.607 "identify_ctrlr": false 00:04:43.607 }, 00:04:43.607 "dhchap_digests": [ 00:04:43.607 "sha256", 00:04:43.607 "sha384", 00:04:43.607 "sha512" 00:04:43.607 ], 00:04:43.607 "dhchap_dhgroups": [ 00:04:43.607 "null", 00:04:43.607 "ffdhe2048", 00:04:43.607 "ffdhe3072", 00:04:43.607 "ffdhe4096", 00:04:43.607 "ffdhe6144", 00:04:43.607 "ffdhe8192" 00:04:43.607 ] 00:04:43.607 } 00:04:43.607 }, 00:04:43.607 { 00:04:43.607 "method": "nvmf_set_max_subsystems", 00:04:43.607 "params": { 00:04:43.607 "max_subsystems": 1024 00:04:43.607 } 00:04:43.607 }, 
00:04:43.607 { 00:04:43.607 "method": "nvmf_set_crdt", 00:04:43.607 "params": { 00:04:43.607 "crdt1": 0, 00:04:43.607 "crdt2": 0, 00:04:43.607 "crdt3": 0 00:04:43.607 } 00:04:43.607 }, 00:04:43.607 { 00:04:43.607 "method": "nvmf_create_transport", 00:04:43.607 "params": { 00:04:43.607 "trtype": "TCP", 00:04:43.607 "max_queue_depth": 128, 00:04:43.607 "max_io_qpairs_per_ctrlr": 127, 00:04:43.607 "in_capsule_data_size": 4096, 00:04:43.607 "max_io_size": 131072, 00:04:43.607 "io_unit_size": 131072, 00:04:43.607 "max_aq_depth": 128, 00:04:43.607 "num_shared_buffers": 511, 00:04:43.607 "buf_cache_size": 4294967295, 00:04:43.607 "dif_insert_or_strip": false, 00:04:43.607 "zcopy": false, 00:04:43.607 "c2h_success": true, 00:04:43.607 "sock_priority": 0, 00:04:43.607 "abort_timeout_sec": 1, 00:04:43.607 "ack_timeout": 0, 00:04:43.607 "data_wr_pool_size": 0 00:04:43.607 } 00:04:43.607 } 00:04:43.607 ] 00:04:43.607 }, 00:04:43.607 { 00:04:43.607 "subsystem": "iscsi", 00:04:43.607 "config": [ 00:04:43.607 { 00:04:43.607 "method": "iscsi_set_options", 00:04:43.607 "params": { 00:04:43.607 "node_base": "iqn.2016-06.io.spdk", 00:04:43.607 "max_sessions": 128, 00:04:43.607 "max_connections_per_session": 2, 00:04:43.607 "max_queue_depth": 64, 00:04:43.607 "default_time2wait": 2, 00:04:43.607 "default_time2retain": 20, 00:04:43.607 "first_burst_length": 8192, 00:04:43.607 "immediate_data": true, 00:04:43.607 "allow_duplicated_isid": false, 00:04:43.607 "error_recovery_level": 0, 00:04:43.607 "nop_timeout": 60, 00:04:43.607 "nop_in_interval": 30, 00:04:43.607 "disable_chap": false, 00:04:43.607 "require_chap": false, 00:04:43.607 "mutual_chap": false, 00:04:43.607 "chap_group": 0, 00:04:43.607 "max_large_datain_per_connection": 64, 00:04:43.607 "max_r2t_per_connection": 4, 00:04:43.607 "pdu_pool_size": 36864, 00:04:43.607 "immediate_data_pool_size": 16384, 00:04:43.607 "data_out_pool_size": 2048 00:04:43.607 } 00:04:43.607 } 00:04:43.607 ] 00:04:43.607 } 00:04:43.607 ] 00:04:43.607 
} 00:04:43.607 15:22:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:43.607 15:22:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2879941 00:04:43.607 15:22:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 2879941 ']' 00:04:43.607 15:22:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 2879941 00:04:43.607 15:22:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:43.607 15:22:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:43.607 15:22:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2879941 00:04:43.607 15:22:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:43.607 15:22:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:43.607 15:22:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2879941' 00:04:43.607 killing process with pid 2879941 00:04:43.607 15:22:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 2879941 00:04:43.607 15:22:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 2879941 00:04:43.867 15:22:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2880284 00:04:43.867 15:22:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:43.867 15:22:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:49.153 15:22:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2880284 00:04:49.153 15:22:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 2880284 ']' 00:04:49.153 15:22:28 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 2880284 00:04:49.153 15:22:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:49.153 15:22:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:49.153 15:22:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2880284 00:04:49.153 15:22:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:49.153 15:22:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:49.153 15:22:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2880284' 00:04:49.153 killing process with pid 2880284 00:04:49.153 15:22:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 2880284 00:04:49.153 15:22:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 2880284 00:04:49.153 15:22:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:49.153 15:22:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:49.153 00:04:49.153 real 0m6.554s 00:04:49.153 user 0m6.447s 00:04:49.153 sys 0m0.563s 00:04:49.153 15:22:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:49.153 15:22:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:49.153 ************************************ 00:04:49.153 END TEST skip_rpc_with_json 00:04:49.153 ************************************ 00:04:49.153 15:22:28 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:49.153 15:22:28 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:49.153 15:22:28 skip_rpc -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:04:49.153 15:22:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.153 ************************************ 00:04:49.153 START TEST skip_rpc_with_delay 00:04:49.153 ************************************ 00:04:49.153 15:22:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:49.153 15:22:28 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:49.153 15:22:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:49.153 15:22:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:49.153 15:22:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:49.153 15:22:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:49.153 15:22:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:49.153 15:22:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:49.153 15:22:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:49.153 15:22:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:49.153 15:22:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:49.153 15:22:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:49.153 15:22:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:49.153 [2024-10-01 15:22:28.601157] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:49.153 [2024-10-01 15:22:28.601231] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:49.414 15:22:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:49.414 15:22:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:49.414 15:22:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:49.414 15:22:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:49.414 00:04:49.414 real 0m0.077s 00:04:49.414 user 0m0.047s 00:04:49.414 sys 0m0.030s 00:04:49.414 15:22:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:49.414 15:22:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:49.414 ************************************ 00:04:49.414 END TEST skip_rpc_with_delay 00:04:49.414 ************************************ 00:04:49.414 15:22:28 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:49.414 15:22:28 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:49.414 15:22:28 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:49.414 15:22:28 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:49.414 15:22:28 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:49.415 15:22:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.415 ************************************ 00:04:49.415 START TEST exit_on_failed_rpc_init 
00:04:49.415 ************************************ 00:04:49.415 15:22:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:49.415 15:22:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2881353 00:04:49.415 15:22:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2881353 00:04:49.415 15:22:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:49.415 15:22:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 2881353 ']' 00:04:49.415 15:22:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.415 15:22:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:49.415 15:22:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.415 15:22:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:49.415 15:22:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:49.415 [2024-10-01 15:22:28.752918] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:04:49.415 [2024-10-01 15:22:28.752972] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2881353 ] 00:04:49.415 [2024-10-01 15:22:28.785110] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:04:49.415 [2024-10-01 15:22:28.832923] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.415 [2024-10-01 15:22:28.864516] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.357 15:22:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:50.357 15:22:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:50.357 15:22:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:50.357 15:22:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:50.357 15:22:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:50.357 15:22:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:50.357 15:22:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:50.357 15:22:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:50.357 15:22:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:50.357 15:22:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:50.357 15:22:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:50.357 15:22:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:50.357 15:22:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:50.357 15:22:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:50.357 15:22:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:50.357 [2024-10-01 15:22:29.596887] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:04:50.357 [2024-10-01 15:22:29.596948] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2881578 ] 00:04:50.357 [2024-10-01 15:22:29.626920] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:04:50.357 [2024-10-01 15:22:29.674420] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.357 [2024-10-01 15:22:29.705616] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:04:50.357 [2024-10-01 15:22:29.705675] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:50.357 [2024-10-01 15:22:29.705685] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:50.357 [2024-10-01 15:22:29.705692] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:50.357 15:22:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:50.357 15:22:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:50.357 15:22:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:50.357 15:22:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:50.357 15:22:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:50.357 15:22:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:50.357 15:22:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:50.357 15:22:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2881353 00:04:50.357 15:22:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 2881353 ']' 00:04:50.357 15:22:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 2881353 00:04:50.357 15:22:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:04:50.357 15:22:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:50.357 15:22:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2881353 00:04:50.618 15:22:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:50.618 15:22:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:50.618 15:22:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2881353' 
00:04:50.618 killing process with pid 2881353 00:04:50.618 15:22:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 2881353 00:04:50.618 15:22:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 2881353 00:04:50.618 00:04:50.618 real 0m1.311s 00:04:50.618 user 0m1.506s 00:04:50.618 sys 0m0.399s 00:04:50.618 15:22:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:50.618 15:22:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:50.618 ************************************ 00:04:50.618 END TEST exit_on_failed_rpc_init 00:04:50.618 ************************************ 00:04:50.618 15:22:30 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:50.618 00:04:50.618 real 0m13.742s 00:04:50.618 user 0m13.268s 00:04:50.618 sys 0m1.604s 00:04:50.618 15:22:30 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:50.618 15:22:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.618 ************************************ 00:04:50.618 END TEST skip_rpc 00:04:50.618 ************************************ 00:04:50.880 15:22:30 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:50.880 15:22:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:50.880 15:22:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:50.880 15:22:30 -- common/autotest_common.sh@10 -- # set +x 00:04:50.880 ************************************ 00:04:50.880 START TEST rpc_client 00:04:50.880 ************************************ 00:04:50.880 15:22:30 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:50.880 * Looking for test storage... 
00:04:50.880 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:50.880 15:22:30 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:50.880 15:22:30 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:04:50.880 15:22:30 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:50.880 15:22:30 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:50.880 15:22:30 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:50.880 15:22:30 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:50.880 15:22:30 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:50.880 15:22:30 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.880 15:22:30 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:50.880 15:22:30 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:50.880 15:22:30 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:50.880 15:22:30 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:50.880 15:22:30 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:50.880 15:22:30 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:50.880 15:22:30 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:50.880 15:22:30 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:50.880 15:22:30 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:50.880 15:22:30 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:50.880 15:22:30 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:50.880 15:22:30 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:50.880 15:22:30 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:50.880 15:22:30 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.880 15:22:30 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:50.880 15:22:30 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:50.880 15:22:30 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:50.880 15:22:30 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:50.880 15:22:30 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.880 15:22:30 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:50.880 15:22:30 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:50.880 15:22:30 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:50.880 15:22:30 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:50.880 15:22:30 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:50.880 15:22:30 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.880 15:22:30 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:50.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.880 --rc genhtml_branch_coverage=1 00:04:50.880 --rc genhtml_function_coverage=1 00:04:50.880 --rc genhtml_legend=1 00:04:50.880 --rc geninfo_all_blocks=1 00:04:50.880 --rc geninfo_unexecuted_blocks=1 00:04:50.880 00:04:50.880 ' 00:04:50.880 15:22:30 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:50.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.880 --rc genhtml_branch_coverage=1 00:04:50.880 --rc genhtml_function_coverage=1 00:04:50.880 --rc genhtml_legend=1 00:04:50.880 --rc geninfo_all_blocks=1 00:04:50.880 --rc geninfo_unexecuted_blocks=1 00:04:50.880 00:04:50.880 ' 00:04:50.880 15:22:30 rpc_client -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:50.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.880 --rc genhtml_branch_coverage=1 00:04:50.880 --rc genhtml_function_coverage=1 00:04:50.880 --rc genhtml_legend=1 00:04:50.880 --rc geninfo_all_blocks=1 00:04:50.880 --rc geninfo_unexecuted_blocks=1 00:04:50.880 00:04:50.880 ' 00:04:50.880 15:22:30 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:50.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.880 --rc genhtml_branch_coverage=1 00:04:50.880 --rc genhtml_function_coverage=1 00:04:50.880 --rc genhtml_legend=1 00:04:50.880 --rc geninfo_all_blocks=1 00:04:50.880 --rc geninfo_unexecuted_blocks=1 00:04:50.880 00:04:50.880 ' 00:04:50.880 15:22:30 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:51.140 OK 00:04:51.140 15:22:30 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:51.140 00:04:51.140 real 0m0.226s 00:04:51.140 user 0m0.123s 00:04:51.140 sys 0m0.117s 00:04:51.140 15:22:30 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:51.140 15:22:30 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:51.141 ************************************ 00:04:51.141 END TEST rpc_client 00:04:51.141 ************************************ 00:04:51.141 15:22:30 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:51.141 15:22:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:51.141 15:22:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:51.141 15:22:30 -- common/autotest_common.sh@10 -- # set +x 00:04:51.141 ************************************ 00:04:51.141 START TEST json_config 00:04:51.141 ************************************ 00:04:51.141 15:22:30 json_config -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:51.141 15:22:30 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:51.141 15:22:30 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:04:51.141 15:22:30 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:51.141 15:22:30 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:51.141 15:22:30 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:51.141 15:22:30 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:51.141 15:22:30 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:51.141 15:22:30 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:51.141 15:22:30 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:51.141 15:22:30 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:51.141 15:22:30 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:51.141 15:22:30 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:51.141 15:22:30 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:51.141 15:22:30 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:51.141 15:22:30 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:51.141 15:22:30 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:51.141 15:22:30 json_config -- scripts/common.sh@345 -- # : 1 00:04:51.141 15:22:30 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:51.141 15:22:30 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:51.141 15:22:30 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:51.401 15:22:30 json_config -- scripts/common.sh@353 -- # local d=1 00:04:51.401 15:22:30 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:51.401 15:22:30 json_config -- scripts/common.sh@355 -- # echo 1 00:04:51.401 15:22:30 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:51.401 15:22:30 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:51.401 15:22:30 json_config -- scripts/common.sh@353 -- # local d=2 00:04:51.402 15:22:30 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:51.402 15:22:30 json_config -- scripts/common.sh@355 -- # echo 2 00:04:51.402 15:22:30 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:51.402 15:22:30 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:51.402 15:22:30 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:51.402 15:22:30 json_config -- scripts/common.sh@368 -- # return 0 00:04:51.402 15:22:30 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:51.402 15:22:30 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:51.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.402 --rc genhtml_branch_coverage=1 00:04:51.402 --rc genhtml_function_coverage=1 00:04:51.402 --rc genhtml_legend=1 00:04:51.402 --rc geninfo_all_blocks=1 00:04:51.402 --rc geninfo_unexecuted_blocks=1 00:04:51.402 00:04:51.402 ' 00:04:51.402 15:22:30 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:51.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.402 --rc genhtml_branch_coverage=1 00:04:51.402 --rc genhtml_function_coverage=1 00:04:51.402 --rc genhtml_legend=1 00:04:51.402 --rc geninfo_all_blocks=1 00:04:51.402 --rc geninfo_unexecuted_blocks=1 00:04:51.402 00:04:51.402 ' 00:04:51.402 15:22:30 json_config -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:51.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.402 --rc genhtml_branch_coverage=1 00:04:51.402 --rc genhtml_function_coverage=1 00:04:51.402 --rc genhtml_legend=1 00:04:51.402 --rc geninfo_all_blocks=1 00:04:51.402 --rc geninfo_unexecuted_blocks=1 00:04:51.402 00:04:51.402 ' 00:04:51.402 15:22:30 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:51.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.402 --rc genhtml_branch_coverage=1 00:04:51.402 --rc genhtml_function_coverage=1 00:04:51.402 --rc genhtml_legend=1 00:04:51.402 --rc geninfo_all_blocks=1 00:04:51.402 --rc geninfo_unexecuted_blocks=1 00:04:51.402 00:04:51.402 ' 00:04:51.402 15:22:30 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:51.402 15:22:30 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:51.402 15:22:30 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:51.402 15:22:30 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:51.402 15:22:30 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:51.402 15:22:30 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:51.402 15:22:30 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:51.402 15:22:30 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:51.402 15:22:30 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:51.402 15:22:30 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:51.402 15:22:30 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:51.402 15:22:30 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:51.402 15:22:30 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:51.402 15:22:30 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:51.402 15:22:30 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:51.402 15:22:30 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:51.402 15:22:30 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:51.402 15:22:30 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:51.402 15:22:30 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:51.402 15:22:30 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:51.402 15:22:30 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:51.402 15:22:30 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:51.402 15:22:30 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:51.402 15:22:30 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.402 15:22:30 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.402 15:22:30 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.402 15:22:30 json_config -- paths/export.sh@5 -- # export PATH 00:04:51.402 15:22:30 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.402 15:22:30 json_config -- nvmf/common.sh@51 -- # : 0 00:04:51.402 15:22:30 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:51.402 15:22:30 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:51.402 15:22:30 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:51.402 15:22:30 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:51.402 15:22:30 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:51.402 15:22:30 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:51.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:51.402 15:22:30 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:51.402 15:22:30 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:51.402 15:22:30 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:51.402 15:22:30 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:51.402 15:22:30 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:51.402 15:22:30 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:51.402 15:22:30 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:51.402 15:22:30 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:51.402 15:22:30 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:51.402 15:22:30 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:51.402 15:22:30 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:51.402 15:22:30 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:51.402 15:22:30 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:51.402 15:22:30 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:51.402 15:22:30 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:51.402 15:22:30 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:51.402 15:22:30 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:51.402 15:22:30 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:51.402 15:22:30 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:51.402 INFO: JSON configuration test init 00:04:51.402 15:22:30 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:51.402 15:22:30 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:51.402 15:22:30 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:51.402 15:22:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:51.402 15:22:30 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:51.402 15:22:30 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:51.402 15:22:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:51.402 15:22:30 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:51.403 15:22:30 json_config -- json_config/common.sh@9 -- # local app=target 00:04:51.403 15:22:30 json_config -- json_config/common.sh@10 -- # shift 00:04:51.403 15:22:30 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:51.403 15:22:30 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:51.403 15:22:30 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:51.403 15:22:30 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:51.403 15:22:30 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:51.403 15:22:30 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2881820 00:04:51.403 15:22:30 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:51.403 Waiting for target to run... 
00:04:51.403 15:22:30 json_config -- json_config/common.sh@25 -- # waitforlisten 2881820 /var/tmp/spdk_tgt.sock 00:04:51.403 15:22:30 json_config -- common/autotest_common.sh@831 -- # '[' -z 2881820 ']' 00:04:51.403 15:22:30 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:51.403 15:22:30 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:51.403 15:22:30 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:51.403 15:22:30 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:51.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:51.403 15:22:30 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:51.403 15:22:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:51.403 [2024-10-01 15:22:30.717988] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:04:51.403 [2024-10-01 15:22:30.718056] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2881820 ] 00:04:51.663 [2024-10-01 15:22:30.981220] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:04:51.663 [2024-10-01 15:22:31.031678] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.663 [2024-10-01 15:22:31.053710] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.234 15:22:31 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:52.234 15:22:31 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:52.234 15:22:31 json_config -- json_config/common.sh@26 -- # echo '' 00:04:52.234 00:04:52.234 15:22:31 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:52.234 15:22:31 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:52.234 15:22:31 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:52.234 15:22:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.234 15:22:31 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:52.234 15:22:31 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:52.234 15:22:31 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:52.234 15:22:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.234 15:22:31 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:52.234 15:22:31 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:52.234 15:22:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:52.806 15:22:32 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:52.806 15:22:32 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:52.806 15:22:32 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:52.806 15:22:32 json_config -- common/autotest_common.sh@10 -- # set +x 
00:04:52.806 15:22:32 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:52.806 15:22:32 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:52.806 15:22:32 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:52.806 15:22:32 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:52.806 15:22:32 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:52.806 15:22:32 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:52.806 15:22:32 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:52.806 15:22:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:53.068 15:22:32 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:53.068 15:22:32 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:53.068 15:22:32 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:53.068 15:22:32 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:53.068 15:22:32 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:53.068 15:22:32 json_config -- json_config/json_config.sh@54 -- # sort 00:04:53.068 15:22:32 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:53.068 15:22:32 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:53.068 15:22:32 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:53.068 15:22:32 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:53.068 15:22:32 json_config -- common/autotest_common.sh@730 -- # 
xtrace_disable 00:04:53.068 15:22:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.068 15:22:32 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:53.068 15:22:32 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:53.068 15:22:32 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:53.068 15:22:32 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:53.068 15:22:32 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:53.068 15:22:32 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:53.068 15:22:32 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:53.068 15:22:32 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:53.068 15:22:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.068 15:22:32 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:53.068 15:22:32 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:53.068 15:22:32 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:53.068 15:22:32 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:53.068 15:22:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:53.068 MallocForNvmf0 00:04:53.068 15:22:32 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:53.068 15:22:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:53.330 MallocForNvmf1 00:04:53.330 15:22:32 json_config -- json_config/json_config.sh@252 -- # tgt_rpc 
nvmf_create_transport -t tcp -u 8192 -c 0 00:04:53.330 15:22:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:53.591 [2024-10-01 15:22:32.822184] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:53.591 15:22:32 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:53.591 15:22:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:53.591 15:22:33 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:53.591 15:22:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:53.853 15:22:33 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:53.853 15:22:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:54.114 15:22:33 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:54.114 15:22:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:54.114 [2024-10-01 15:22:33.544368] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 
port 4420 *** 00:04:54.376 15:22:33 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:54.376 15:22:33 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:54.376 15:22:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:54.376 15:22:33 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:54.376 15:22:33 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:54.376 15:22:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:54.376 15:22:33 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:54.376 15:22:33 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:54.376 15:22:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:54.376 MallocBdevForConfigChangeCheck 00:04:54.376 15:22:33 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:54.376 15:22:33 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:54.376 15:22:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:54.636 15:22:33 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:54.636 15:22:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:54.897 15:22:34 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:54.897 INFO: shutting down applications... 
00:04:54.897 15:22:34 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:54.897 15:22:34 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:54.897 15:22:34 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:54.897 15:22:34 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:55.158 Calling clear_iscsi_subsystem 00:04:55.158 Calling clear_nvmf_subsystem 00:04:55.158 Calling clear_nbd_subsystem 00:04:55.158 Calling clear_ublk_subsystem 00:04:55.158 Calling clear_vhost_blk_subsystem 00:04:55.158 Calling clear_vhost_scsi_subsystem 00:04:55.158 Calling clear_bdev_subsystem 00:04:55.418 15:22:34 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:55.418 15:22:34 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:55.418 15:22:34 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:55.418 15:22:34 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:55.418 15:22:34 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:55.418 15:22:34 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:55.678 15:22:34 json_config -- json_config/json_config.sh@352 -- # break 00:04:55.678 15:22:34 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:55.678 15:22:34 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:55.678 15:22:34 json_config -- 
json_config/common.sh@31 -- # local app=target 00:04:55.678 15:22:34 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:55.678 15:22:34 json_config -- json_config/common.sh@35 -- # [[ -n 2881820 ]] 00:04:55.678 15:22:34 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2881820 00:04:55.678 15:22:34 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:55.678 15:22:34 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:55.678 15:22:34 json_config -- json_config/common.sh@41 -- # kill -0 2881820 00:04:55.678 15:22:34 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:56.250 15:22:35 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:56.250 15:22:35 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:56.250 15:22:35 json_config -- json_config/common.sh@41 -- # kill -0 2881820 00:04:56.250 15:22:35 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:56.250 15:22:35 json_config -- json_config/common.sh@43 -- # break 00:04:56.250 15:22:35 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:56.250 15:22:35 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:56.250 SPDK target shutdown done 00:04:56.250 15:22:35 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:56.250 INFO: relaunching applications... 
00:04:56.250 15:22:35 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:56.250 15:22:35 json_config -- json_config/common.sh@9 -- # local app=target 00:04:56.250 15:22:35 json_config -- json_config/common.sh@10 -- # shift 00:04:56.250 15:22:35 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:56.250 15:22:35 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:56.250 15:22:35 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:56.250 15:22:35 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:56.250 15:22:35 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:56.250 15:22:35 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2882958 00:04:56.250 15:22:35 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:56.250 Waiting for target to run... 00:04:56.250 15:22:35 json_config -- json_config/common.sh@25 -- # waitforlisten 2882958 /var/tmp/spdk_tgt.sock 00:04:56.250 15:22:35 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:56.250 15:22:35 json_config -- common/autotest_common.sh@831 -- # '[' -z 2882958 ']' 00:04:56.250 15:22:35 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:56.250 15:22:35 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:56.250 15:22:35 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:56.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:56.250 15:22:35 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:56.250 15:22:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.250 [2024-10-01 15:22:35.527268] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:04:56.250 [2024-10-01 15:22:35.527323] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2882958 ] 00:04:56.512 [2024-10-01 15:22:35.776879] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:04:56.512 [2024-10-01 15:22:35.825467] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.512 [2024-10-01 15:22:35.844717] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.084 [2024-10-01 15:22:36.316611] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:57.084 [2024-10-01 15:22:36.349091] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:57.084 15:22:36 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:57.084 15:22:36 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:57.084 15:22:36 json_config -- json_config/common.sh@26 -- # echo '' 00:04:57.084 00:04:57.084 15:22:36 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:57.084 15:22:36 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:57.084 INFO: Checking if target configuration is the same... 
00:04:57.084 15:22:36 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:57.084 15:22:36 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:57.084 15:22:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:57.084 + '[' 2 -ne 2 ']' 00:04:57.084 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:57.084 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:57.084 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:57.084 +++ basename /dev/fd/62 00:04:57.084 ++ mktemp /tmp/62.XXX 00:04:57.084 + tmp_file_1=/tmp/62.9pE 00:04:57.084 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:57.084 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:57.084 + tmp_file_2=/tmp/spdk_tgt_config.json.9PI 00:04:57.084 + ret=0 00:04:57.084 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:57.345 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:57.345 + diff -u /tmp/62.9pE /tmp/spdk_tgt_config.json.9PI 00:04:57.345 + echo 'INFO: JSON config files are the same' 00:04:57.345 INFO: JSON config files are the same 00:04:57.345 + rm /tmp/62.9pE /tmp/spdk_tgt_config.json.9PI 00:04:57.345 + exit 0 00:04:57.345 15:22:36 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:57.345 15:22:36 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:57.345 INFO: changing configuration and checking if this can be detected... 
00:04:57.345 15:22:36 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:57.345 15:22:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:57.607 15:22:36 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:57.607 15:22:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:57.607 15:22:36 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:57.607 + '[' 2 -ne 2 ']' 00:04:57.607 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:57.607 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:57.607 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:57.607 +++ basename /dev/fd/62 00:04:57.607 ++ mktemp /tmp/62.XXX 00:04:57.607 + tmp_file_1=/tmp/62.MM0 00:04:57.607 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:57.607 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:57.607 + tmp_file_2=/tmp/spdk_tgt_config.json.Dth 00:04:57.607 + ret=0 00:04:57.607 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:57.868 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:57.868 + diff -u /tmp/62.MM0 /tmp/spdk_tgt_config.json.Dth 00:04:58.128 + ret=1 00:04:58.128 + echo '=== Start of file: /tmp/62.MM0 ===' 00:04:58.128 + cat /tmp/62.MM0 00:04:58.128 + echo '=== End of file: /tmp/62.MM0 ===' 00:04:58.128 + echo '' 00:04:58.128 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Dth ===' 00:04:58.128 + cat /tmp/spdk_tgt_config.json.Dth 00:04:58.128 + echo '=== End of file: /tmp/spdk_tgt_config.json.Dth ===' 00:04:58.128 + echo '' 00:04:58.128 + rm /tmp/62.MM0 /tmp/spdk_tgt_config.json.Dth 00:04:58.128 + exit 1 00:04:58.128 15:22:37 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:58.128 INFO: configuration change detected. 
00:04:58.128 15:22:37 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:58.128 15:22:37 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:58.128 15:22:37 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:58.128 15:22:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.128 15:22:37 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:58.128 15:22:37 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:58.128 15:22:37 json_config -- json_config/json_config.sh@324 -- # [[ -n 2882958 ]] 00:04:58.128 15:22:37 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:58.128 15:22:37 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:58.128 15:22:37 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:58.128 15:22:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.128 15:22:37 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:58.128 15:22:37 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:58.129 15:22:37 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:58.129 15:22:37 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:58.129 15:22:37 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:58.129 15:22:37 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:58.129 15:22:37 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:58.129 15:22:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.129 15:22:37 json_config -- json_config/json_config.sh@330 -- # killprocess 2882958 00:04:58.129 15:22:37 json_config -- common/autotest_common.sh@950 -- # '[' -z 2882958 ']' 00:04:58.129 15:22:37 json_config -- common/autotest_common.sh@954 -- # kill -0 
2882958 00:04:58.129 15:22:37 json_config -- common/autotest_common.sh@955 -- # uname 00:04:58.129 15:22:37 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:58.129 15:22:37 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2882958 00:04:58.129 15:22:37 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:58.129 15:22:37 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:58.129 15:22:37 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2882958' 00:04:58.129 killing process with pid 2882958 00:04:58.129 15:22:37 json_config -- common/autotest_common.sh@969 -- # kill 2882958 00:04:58.129 15:22:37 json_config -- common/autotest_common.sh@974 -- # wait 2882958 00:04:58.389 15:22:37 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:58.389 15:22:37 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:58.389 15:22:37 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:58.389 15:22:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.389 15:22:37 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:58.389 15:22:37 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:58.389 INFO: Success 00:04:58.389 00:04:58.389 real 0m7.330s 00:04:58.389 user 0m9.105s 00:04:58.389 sys 0m1.770s 00:04:58.389 15:22:37 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:58.389 15:22:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.389 ************************************ 00:04:58.389 END TEST json_config 00:04:58.389 ************************************ 00:04:58.389 15:22:37 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:58.389 15:22:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:58.389 15:22:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:58.389 15:22:37 -- common/autotest_common.sh@10 -- # set +x 00:04:58.389 ************************************ 00:04:58.389 START TEST json_config_extra_key 00:04:58.389 ************************************ 00:04:58.389 15:22:37 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:58.652 15:22:37 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:58.652 15:22:37 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:04:58.652 15:22:37 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:58.652 15:22:37 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:58.652 15:22:37 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.652 15:22:37 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.652 15:22:37 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.652 15:22:37 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.652 15:22:37 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.652 15:22:37 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.652 15:22:37 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.652 15:22:37 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.652 15:22:37 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.652 15:22:37 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.652 15:22:37 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:04:58.652 15:22:37 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:58.652 15:22:37 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:58.652 15:22:37 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.652 15:22:37 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:58.652 15:22:37 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:58.652 15:22:37 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:58.652 15:22:37 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.652 15:22:37 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:58.652 15:22:38 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.652 15:22:38 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:58.652 15:22:38 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:58.652 15:22:38 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.652 15:22:38 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:58.652 15:22:38 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.652 15:22:38 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.652 15:22:38 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.652 15:22:38 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:58.652 15:22:38 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.652 15:22:38 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:58.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.652 --rc genhtml_branch_coverage=1 00:04:58.652 --rc genhtml_function_coverage=1 00:04:58.652 --rc genhtml_legend=1 00:04:58.652 --rc geninfo_all_blocks=1 
00:04:58.652 --rc geninfo_unexecuted_blocks=1 00:04:58.652 00:04:58.652 ' 00:04:58.652 15:22:38 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:58.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.652 --rc genhtml_branch_coverage=1 00:04:58.652 --rc genhtml_function_coverage=1 00:04:58.652 --rc genhtml_legend=1 00:04:58.652 --rc geninfo_all_blocks=1 00:04:58.652 --rc geninfo_unexecuted_blocks=1 00:04:58.652 00:04:58.652 ' 00:04:58.652 15:22:38 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:58.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.653 --rc genhtml_branch_coverage=1 00:04:58.653 --rc genhtml_function_coverage=1 00:04:58.653 --rc genhtml_legend=1 00:04:58.653 --rc geninfo_all_blocks=1 00:04:58.653 --rc geninfo_unexecuted_blocks=1 00:04:58.653 00:04:58.653 ' 00:04:58.653 15:22:38 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:58.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.653 --rc genhtml_branch_coverage=1 00:04:58.653 --rc genhtml_function_coverage=1 00:04:58.653 --rc genhtml_legend=1 00:04:58.653 --rc geninfo_all_blocks=1 00:04:58.653 --rc geninfo_unexecuted_blocks=1 00:04:58.653 00:04:58.653 ' 00:04:58.653 15:22:38 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:58.653 15:22:38 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:58.653 15:22:38 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:58.653 15:22:38 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:58.653 15:22:38 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:58.653 15:22:38 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:58.653 15:22:38 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:04:58.653 15:22:38 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:58.653 15:22:38 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:58.653 15:22:38 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:58.653 15:22:38 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:58.653 15:22:38 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:58.653 15:22:38 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:58.653 15:22:38 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:58.653 15:22:38 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:58.653 15:22:38 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:58.653 15:22:38 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:58.653 15:22:38 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:58.653 15:22:38 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:58.653 15:22:38 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:58.653 15:22:38 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:58.653 15:22:38 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:58.653 15:22:38 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:58.653 15:22:38 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.653 15:22:38 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.653 15:22:38 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.653 15:22:38 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:58.653 15:22:38 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.653 15:22:38 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:58.653 15:22:38 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:58.653 15:22:38 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:58.653 15:22:38 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:58.653 15:22:38 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:58.653 15:22:38 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:58.653 15:22:38 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:58.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:58.653 15:22:38 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:58.653 15:22:38 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:58.653 15:22:38 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:58.653 15:22:38 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:58.653 15:22:38 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:58.653 15:22:38 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:58.653 15:22:38 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:58.653 15:22:38 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:58.653 15:22:38 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:58.653 15:22:38 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:58.653 15:22:38 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:58.653 15:22:38 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:58.653 15:22:38 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:58.653 15:22:38 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:58.653 INFO: launching applications... 00:04:58.653 15:22:38 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:58.653 15:22:38 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:58.653 15:22:38 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:58.653 15:22:38 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:58.653 15:22:38 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:58.653 15:22:38 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:58.653 15:22:38 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:58.653 15:22:38 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:58.653 15:22:38 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2883671 00:04:58.653 15:22:38 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:58.653 Waiting for target to run... 
00:04:58.653 15:22:38 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2883671 /var/tmp/spdk_tgt.sock 00:04:58.653 15:22:38 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 2883671 ']' 00:04:58.653 15:22:38 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:58.653 15:22:38 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:58.653 15:22:38 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:58.653 15:22:38 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:58.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:58.653 15:22:38 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:58.653 15:22:38 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:58.915 [2024-10-01 15:22:38.119737] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:04:58.915 [2024-10-01 15:22:38.119814] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2883671 ] 00:04:59.175 [2024-10-01 15:22:38.512013] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:04:59.175 [2024-10-01 15:22:38.561478] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.175 [2024-10-01 15:22:38.579840] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.743 15:22:38 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:59.743 15:22:38 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:04:59.743 15:22:38 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:59.743 00:04:59.743 15:22:38 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:59.743 INFO: shutting down applications... 00:04:59.743 15:22:38 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:59.743 15:22:38 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:59.743 15:22:38 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:59.744 15:22:38 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2883671 ]] 00:04:59.744 15:22:38 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2883671 00:04:59.744 15:22:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:59.744 15:22:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:59.744 15:22:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2883671 00:04:59.744 15:22:38 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:00.003 15:22:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:00.003 15:22:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:00.003 15:22:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2883671 00:05:00.003 15:22:39 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:00.003 15:22:39 json_config_extra_key -- json_config/common.sh@43 -- # break 
00:05:00.003 15:22:39 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:00.003 15:22:39 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:00.003 SPDK target shutdown done 00:05:00.003 15:22:39 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:00.003 Success 00:05:00.003 00:05:00.003 real 0m1.585s 00:05:00.003 user 0m1.059s 00:05:00.003 sys 0m0.564s 00:05:00.003 15:22:39 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:00.003 15:22:39 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:00.003 ************************************ 00:05:00.003 END TEST json_config_extra_key 00:05:00.003 ************************************ 00:05:00.264 15:22:39 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:00.264 15:22:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:00.264 15:22:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:00.264 15:22:39 -- common/autotest_common.sh@10 -- # set +x 00:05:00.264 ************************************ 00:05:00.264 START TEST alias_rpc 00:05:00.264 ************************************ 00:05:00.265 15:22:39 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:00.265 * Looking for test storage... 
00:05:00.265 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:00.265 15:22:39 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:00.265 15:22:39 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:00.265 15:22:39 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:00.265 15:22:39 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:00.265 15:22:39 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:00.265 15:22:39 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:00.265 15:22:39 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:00.265 15:22:39 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.265 15:22:39 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:00.265 15:22:39 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:00.265 15:22:39 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:00.265 15:22:39 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:00.265 15:22:39 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:00.265 15:22:39 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:00.265 15:22:39 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:00.265 15:22:39 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:00.265 15:22:39 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:00.265 15:22:39 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:00.265 15:22:39 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:00.265 15:22:39 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:00.265 15:22:39 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:00.265 15:22:39 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.265 15:22:39 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:00.265 15:22:39 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:00.265 15:22:39 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:00.265 15:22:39 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:00.265 15:22:39 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.265 15:22:39 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:00.265 15:22:39 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:00.265 15:22:39 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:00.265 15:22:39 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:00.265 15:22:39 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:00.265 15:22:39 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.265 15:22:39 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:00.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.265 --rc genhtml_branch_coverage=1 00:05:00.265 --rc genhtml_function_coverage=1 00:05:00.265 --rc genhtml_legend=1 00:05:00.265 --rc geninfo_all_blocks=1 00:05:00.265 --rc geninfo_unexecuted_blocks=1 00:05:00.265 00:05:00.265 ' 00:05:00.265 15:22:39 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:00.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.265 --rc genhtml_branch_coverage=1 00:05:00.265 --rc genhtml_function_coverage=1 00:05:00.265 --rc genhtml_legend=1 00:05:00.265 --rc geninfo_all_blocks=1 00:05:00.265 --rc geninfo_unexecuted_blocks=1 00:05:00.265 00:05:00.265 ' 00:05:00.265 15:22:39 alias_rpc -- common/autotest_common.sh@1695 -- 
# export 'LCOV=lcov 00:05:00.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.265 --rc genhtml_branch_coverage=1 00:05:00.265 --rc genhtml_function_coverage=1 00:05:00.265 --rc genhtml_legend=1 00:05:00.265 --rc geninfo_all_blocks=1 00:05:00.265 --rc geninfo_unexecuted_blocks=1 00:05:00.265 00:05:00.265 ' 00:05:00.265 15:22:39 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:00.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.265 --rc genhtml_branch_coverage=1 00:05:00.265 --rc genhtml_function_coverage=1 00:05:00.265 --rc genhtml_legend=1 00:05:00.265 --rc geninfo_all_blocks=1 00:05:00.265 --rc geninfo_unexecuted_blocks=1 00:05:00.265 00:05:00.265 ' 00:05:00.265 15:22:39 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:00.265 15:22:39 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2884031 00:05:00.265 15:22:39 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2884031 00:05:00.265 15:22:39 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:00.265 15:22:39 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 2884031 ']' 00:05:00.265 15:22:39 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.265 15:22:39 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:00.265 15:22:39 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.265 15:22:39 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:00.265 15:22:39 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.526 [2024-10-01 15:22:39.765235] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
00:05:00.526 [2024-10-01 15:22:39.765306] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2884031 ] 00:05:00.526 [2024-10-01 15:22:39.800209] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:00.526 [2024-10-01 15:22:39.848107] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.526 [2024-10-01 15:22:39.888027] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.467 15:22:40 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:01.467 15:22:40 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:01.467 15:22:40 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:01.467 15:22:40 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2884031 00:05:01.467 15:22:40 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 2884031 ']' 00:05:01.467 15:22:40 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 2884031 00:05:01.467 15:22:40 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:01.467 15:22:40 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:01.467 15:22:40 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2884031 00:05:01.467 15:22:40 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:01.467 15:22:40 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:01.467 15:22:40 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2884031' 00:05:01.467 killing process with pid 2884031 00:05:01.467 15:22:40 alias_rpc -- common/autotest_common.sh@969 -- # kill 2884031 00:05:01.467 15:22:40 alias_rpc -- 
common/autotest_common.sh@974 -- # wait 2884031 00:05:01.727 00:05:01.727 real 0m1.527s 00:05:01.727 user 0m1.686s 00:05:01.727 sys 0m0.434s 00:05:01.727 15:22:41 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:01.727 15:22:41 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.727 ************************************ 00:05:01.727 END TEST alias_rpc 00:05:01.727 ************************************ 00:05:01.727 15:22:41 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:01.727 15:22:41 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:01.727 15:22:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:01.727 15:22:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:01.727 15:22:41 -- common/autotest_common.sh@10 -- # set +x 00:05:01.727 ************************************ 00:05:01.727 START TEST spdkcli_tcp 00:05:01.727 ************************************ 00:05:01.727 15:22:41 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:01.989 * Looking for test storage... 
00:05:01.989 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:01.989 15:22:41 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:01.989 15:22:41 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:05:01.989 15:22:41 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:01.989 15:22:41 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:01.989 15:22:41 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.989 15:22:41 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.989 15:22:41 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.989 15:22:41 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.989 15:22:41 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.989 15:22:41 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.989 15:22:41 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.989 15:22:41 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.989 15:22:41 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.989 15:22:41 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.989 15:22:41 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.989 15:22:41 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:01.989 15:22:41 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:01.989 15:22:41 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.989 15:22:41 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:01.989 15:22:41 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:01.989 15:22:41 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:01.989 15:22:41 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.989 15:22:41 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:01.989 15:22:41 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.989 15:22:41 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:01.989 15:22:41 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:01.989 15:22:41 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.989 15:22:41 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:01.989 15:22:41 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.989 15:22:41 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.989 15:22:41 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.989 15:22:41 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:01.989 15:22:41 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.989 15:22:41 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:01.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.989 --rc genhtml_branch_coverage=1 00:05:01.989 --rc genhtml_function_coverage=1 00:05:01.989 --rc genhtml_legend=1 00:05:01.989 --rc geninfo_all_blocks=1 00:05:01.989 --rc geninfo_unexecuted_blocks=1 00:05:01.989 00:05:01.989 ' 00:05:01.989 15:22:41 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:01.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.989 --rc genhtml_branch_coverage=1 00:05:01.989 --rc genhtml_function_coverage=1 00:05:01.989 --rc genhtml_legend=1 00:05:01.989 --rc geninfo_all_blocks=1 00:05:01.989 --rc geninfo_unexecuted_blocks=1 00:05:01.989 00:05:01.989 ' 00:05:01.989 15:22:41 spdkcli_tcp -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:01.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.989 --rc genhtml_branch_coverage=1 00:05:01.989 --rc genhtml_function_coverage=1 00:05:01.989 --rc genhtml_legend=1 00:05:01.989 --rc geninfo_all_blocks=1 00:05:01.989 --rc geninfo_unexecuted_blocks=1 00:05:01.989 00:05:01.989 ' 00:05:01.989 15:22:41 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:01.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.989 --rc genhtml_branch_coverage=1 00:05:01.989 --rc genhtml_function_coverage=1 00:05:01.989 --rc genhtml_legend=1 00:05:01.989 --rc geninfo_all_blocks=1 00:05:01.989 --rc geninfo_unexecuted_blocks=1 00:05:01.989 00:05:01.989 ' 00:05:01.989 15:22:41 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:01.989 15:22:41 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:01.989 15:22:41 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:01.989 15:22:41 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:01.989 15:22:41 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:01.989 15:22:41 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:01.989 15:22:41 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:01.989 15:22:41 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:01.989 15:22:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:01.989 15:22:41 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2884372 00:05:01.989 15:22:41 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2884372 00:05:01.989 15:22:41 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:01.989 15:22:41 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 2884372 ']' 00:05:01.989 15:22:41 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.989 15:22:41 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:01.989 15:22:41 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.989 15:22:41 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:01.989 15:22:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:01.989 [2024-10-01 15:22:41.379200] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:05:01.989 [2024-10-01 15:22:41.379280] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2884372 ] 00:05:01.989 [2024-10-01 15:22:41.413845] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:02.250 [2024-10-01 15:22:41.458636] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:02.250 [2024-10-01 15:22:41.494683] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.250 [2024-10-01 15:22:41.494684] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.820 15:22:42 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:02.820 15:22:42 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:02.820 15:22:42 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2884555 00:05:02.821 15:22:42 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:02.821 15:22:42 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:03.081 [ 00:05:03.081 "bdev_malloc_delete", 00:05:03.081 "bdev_malloc_create", 00:05:03.081 "bdev_null_resize", 00:05:03.081 "bdev_null_delete", 00:05:03.081 "bdev_null_create", 00:05:03.081 "bdev_nvme_cuse_unregister", 00:05:03.081 "bdev_nvme_cuse_register", 00:05:03.081 "bdev_opal_new_user", 00:05:03.081 "bdev_opal_set_lock_state", 00:05:03.081 "bdev_opal_delete", 00:05:03.081 "bdev_opal_get_info", 00:05:03.081 "bdev_opal_create", 00:05:03.081 "bdev_nvme_opal_revert", 00:05:03.081 "bdev_nvme_opal_init", 00:05:03.081 "bdev_nvme_send_cmd", 00:05:03.081 "bdev_nvme_set_keys", 00:05:03.081 "bdev_nvme_get_path_iostat", 00:05:03.081 "bdev_nvme_get_mdns_discovery_info", 00:05:03.081 "bdev_nvme_stop_mdns_discovery", 00:05:03.081 "bdev_nvme_start_mdns_discovery", 00:05:03.081 "bdev_nvme_set_multipath_policy", 00:05:03.081 "bdev_nvme_set_preferred_path", 00:05:03.081 "bdev_nvme_get_io_paths", 00:05:03.081 "bdev_nvme_remove_error_injection", 00:05:03.081 "bdev_nvme_add_error_injection", 00:05:03.081 "bdev_nvme_get_discovery_info", 00:05:03.081 "bdev_nvme_stop_discovery", 00:05:03.081 "bdev_nvme_start_discovery", 00:05:03.081 
"bdev_nvme_get_controller_health_info", 00:05:03.081 "bdev_nvme_disable_controller", 00:05:03.081 "bdev_nvme_enable_controller", 00:05:03.081 "bdev_nvme_reset_controller", 00:05:03.081 "bdev_nvme_get_transport_statistics", 00:05:03.081 "bdev_nvme_apply_firmware", 00:05:03.081 "bdev_nvme_detach_controller", 00:05:03.081 "bdev_nvme_get_controllers", 00:05:03.081 "bdev_nvme_attach_controller", 00:05:03.081 "bdev_nvme_set_hotplug", 00:05:03.081 "bdev_nvme_set_options", 00:05:03.081 "bdev_passthru_delete", 00:05:03.081 "bdev_passthru_create", 00:05:03.081 "bdev_lvol_set_parent_bdev", 00:05:03.081 "bdev_lvol_set_parent", 00:05:03.081 "bdev_lvol_check_shallow_copy", 00:05:03.081 "bdev_lvol_start_shallow_copy", 00:05:03.081 "bdev_lvol_grow_lvstore", 00:05:03.081 "bdev_lvol_get_lvols", 00:05:03.081 "bdev_lvol_get_lvstores", 00:05:03.081 "bdev_lvol_delete", 00:05:03.081 "bdev_lvol_set_read_only", 00:05:03.081 "bdev_lvol_resize", 00:05:03.081 "bdev_lvol_decouple_parent", 00:05:03.081 "bdev_lvol_inflate", 00:05:03.081 "bdev_lvol_rename", 00:05:03.081 "bdev_lvol_clone_bdev", 00:05:03.081 "bdev_lvol_clone", 00:05:03.081 "bdev_lvol_snapshot", 00:05:03.081 "bdev_lvol_create", 00:05:03.081 "bdev_lvol_delete_lvstore", 00:05:03.081 "bdev_lvol_rename_lvstore", 00:05:03.082 "bdev_lvol_create_lvstore", 00:05:03.082 "bdev_raid_set_options", 00:05:03.082 "bdev_raid_remove_base_bdev", 00:05:03.082 "bdev_raid_add_base_bdev", 00:05:03.082 "bdev_raid_delete", 00:05:03.082 "bdev_raid_create", 00:05:03.082 "bdev_raid_get_bdevs", 00:05:03.082 "bdev_error_inject_error", 00:05:03.082 "bdev_error_delete", 00:05:03.082 "bdev_error_create", 00:05:03.082 "bdev_split_delete", 00:05:03.082 "bdev_split_create", 00:05:03.082 "bdev_delay_delete", 00:05:03.082 "bdev_delay_create", 00:05:03.082 "bdev_delay_update_latency", 00:05:03.082 "bdev_zone_block_delete", 00:05:03.082 "bdev_zone_block_create", 00:05:03.082 "blobfs_create", 00:05:03.082 "blobfs_detect", 00:05:03.082 "blobfs_set_cache_size", 00:05:03.082 
"bdev_aio_delete", 00:05:03.082 "bdev_aio_rescan", 00:05:03.082 "bdev_aio_create", 00:05:03.082 "bdev_ftl_set_property", 00:05:03.082 "bdev_ftl_get_properties", 00:05:03.082 "bdev_ftl_get_stats", 00:05:03.082 "bdev_ftl_unmap", 00:05:03.082 "bdev_ftl_unload", 00:05:03.082 "bdev_ftl_delete", 00:05:03.082 "bdev_ftl_load", 00:05:03.082 "bdev_ftl_create", 00:05:03.082 "bdev_virtio_attach_controller", 00:05:03.082 "bdev_virtio_scsi_get_devices", 00:05:03.082 "bdev_virtio_detach_controller", 00:05:03.082 "bdev_virtio_blk_set_hotplug", 00:05:03.082 "bdev_iscsi_delete", 00:05:03.082 "bdev_iscsi_create", 00:05:03.082 "bdev_iscsi_set_options", 00:05:03.082 "accel_error_inject_error", 00:05:03.082 "ioat_scan_accel_module", 00:05:03.082 "dsa_scan_accel_module", 00:05:03.082 "iaa_scan_accel_module", 00:05:03.082 "vfu_virtio_create_fs_endpoint", 00:05:03.082 "vfu_virtio_create_scsi_endpoint", 00:05:03.082 "vfu_virtio_scsi_remove_target", 00:05:03.082 "vfu_virtio_scsi_add_target", 00:05:03.082 "vfu_virtio_create_blk_endpoint", 00:05:03.082 "vfu_virtio_delete_endpoint", 00:05:03.082 "keyring_file_remove_key", 00:05:03.082 "keyring_file_add_key", 00:05:03.082 "keyring_linux_set_options", 00:05:03.082 "fsdev_aio_delete", 00:05:03.082 "fsdev_aio_create", 00:05:03.082 "iscsi_get_histogram", 00:05:03.082 "iscsi_enable_histogram", 00:05:03.082 "iscsi_set_options", 00:05:03.082 "iscsi_get_auth_groups", 00:05:03.082 "iscsi_auth_group_remove_secret", 00:05:03.082 "iscsi_auth_group_add_secret", 00:05:03.082 "iscsi_delete_auth_group", 00:05:03.082 "iscsi_create_auth_group", 00:05:03.082 "iscsi_set_discovery_auth", 00:05:03.082 "iscsi_get_options", 00:05:03.082 "iscsi_target_node_request_logout", 00:05:03.082 "iscsi_target_node_set_redirect", 00:05:03.082 "iscsi_target_node_set_auth", 00:05:03.082 "iscsi_target_node_add_lun", 00:05:03.082 "iscsi_get_stats", 00:05:03.082 "iscsi_get_connections", 00:05:03.082 "iscsi_portal_group_set_auth", 00:05:03.082 "iscsi_start_portal_group", 00:05:03.082 
"iscsi_delete_portal_group", 00:05:03.082 "iscsi_create_portal_group", 00:05:03.082 "iscsi_get_portal_groups", 00:05:03.082 "iscsi_delete_target_node", 00:05:03.082 "iscsi_target_node_remove_pg_ig_maps", 00:05:03.082 "iscsi_target_node_add_pg_ig_maps", 00:05:03.082 "iscsi_create_target_node", 00:05:03.082 "iscsi_get_target_nodes", 00:05:03.082 "iscsi_delete_initiator_group", 00:05:03.082 "iscsi_initiator_group_remove_initiators", 00:05:03.082 "iscsi_initiator_group_add_initiators", 00:05:03.082 "iscsi_create_initiator_group", 00:05:03.082 "iscsi_get_initiator_groups", 00:05:03.082 "nvmf_set_crdt", 00:05:03.082 "nvmf_set_config", 00:05:03.082 "nvmf_set_max_subsystems", 00:05:03.082 "nvmf_stop_mdns_prr", 00:05:03.082 "nvmf_publish_mdns_prr", 00:05:03.082 "nvmf_subsystem_get_listeners", 00:05:03.082 "nvmf_subsystem_get_qpairs", 00:05:03.082 "nvmf_subsystem_get_controllers", 00:05:03.082 "nvmf_get_stats", 00:05:03.082 "nvmf_get_transports", 00:05:03.082 "nvmf_create_transport", 00:05:03.082 "nvmf_get_targets", 00:05:03.082 "nvmf_delete_target", 00:05:03.082 "nvmf_create_target", 00:05:03.082 "nvmf_subsystem_allow_any_host", 00:05:03.082 "nvmf_subsystem_set_keys", 00:05:03.082 "nvmf_subsystem_remove_host", 00:05:03.082 "nvmf_subsystem_add_host", 00:05:03.082 "nvmf_ns_remove_host", 00:05:03.082 "nvmf_ns_add_host", 00:05:03.082 "nvmf_subsystem_remove_ns", 00:05:03.082 "nvmf_subsystem_set_ns_ana_group", 00:05:03.082 "nvmf_subsystem_add_ns", 00:05:03.082 "nvmf_subsystem_listener_set_ana_state", 00:05:03.082 "nvmf_discovery_get_referrals", 00:05:03.082 "nvmf_discovery_remove_referral", 00:05:03.082 "nvmf_discovery_add_referral", 00:05:03.082 "nvmf_subsystem_remove_listener", 00:05:03.082 "nvmf_subsystem_add_listener", 00:05:03.082 "nvmf_delete_subsystem", 00:05:03.082 "nvmf_create_subsystem", 00:05:03.082 "nvmf_get_subsystems", 00:05:03.082 "env_dpdk_get_mem_stats", 00:05:03.082 "nbd_get_disks", 00:05:03.082 "nbd_stop_disk", 00:05:03.082 "nbd_start_disk", 00:05:03.082 
"ublk_recover_disk", 00:05:03.082 "ublk_get_disks", 00:05:03.082 "ublk_stop_disk", 00:05:03.082 "ublk_start_disk", 00:05:03.082 "ublk_destroy_target", 00:05:03.082 "ublk_create_target", 00:05:03.082 "virtio_blk_create_transport", 00:05:03.082 "virtio_blk_get_transports", 00:05:03.082 "vhost_controller_set_coalescing", 00:05:03.082 "vhost_get_controllers", 00:05:03.082 "vhost_delete_controller", 00:05:03.082 "vhost_create_blk_controller", 00:05:03.082 "vhost_scsi_controller_remove_target", 00:05:03.082 "vhost_scsi_controller_add_target", 00:05:03.082 "vhost_start_scsi_controller", 00:05:03.082 "vhost_create_scsi_controller", 00:05:03.082 "thread_set_cpumask", 00:05:03.082 "scheduler_set_options", 00:05:03.082 "framework_get_governor", 00:05:03.082 "framework_get_scheduler", 00:05:03.082 "framework_set_scheduler", 00:05:03.082 "framework_get_reactors", 00:05:03.082 "thread_get_io_channels", 00:05:03.082 "thread_get_pollers", 00:05:03.082 "thread_get_stats", 00:05:03.082 "framework_monitor_context_switch", 00:05:03.082 "spdk_kill_instance", 00:05:03.082 "log_enable_timestamps", 00:05:03.082 "log_get_flags", 00:05:03.082 "log_clear_flag", 00:05:03.082 "log_set_flag", 00:05:03.082 "log_get_level", 00:05:03.082 "log_set_level", 00:05:03.082 "log_get_print_level", 00:05:03.082 "log_set_print_level", 00:05:03.082 "framework_enable_cpumask_locks", 00:05:03.082 "framework_disable_cpumask_locks", 00:05:03.082 "framework_wait_init", 00:05:03.082 "framework_start_init", 00:05:03.082 "scsi_get_devices", 00:05:03.082 "bdev_get_histogram", 00:05:03.082 "bdev_enable_histogram", 00:05:03.082 "bdev_set_qos_limit", 00:05:03.082 "bdev_set_qd_sampling_period", 00:05:03.082 "bdev_get_bdevs", 00:05:03.082 "bdev_reset_iostat", 00:05:03.082 "bdev_get_iostat", 00:05:03.082 "bdev_examine", 00:05:03.082 "bdev_wait_for_examine", 00:05:03.082 "bdev_set_options", 00:05:03.082 "accel_get_stats", 00:05:03.082 "accel_set_options", 00:05:03.082 "accel_set_driver", 00:05:03.082 
"accel_crypto_key_destroy", 00:05:03.082 "accel_crypto_keys_get", 00:05:03.082 "accel_crypto_key_create", 00:05:03.082 "accel_assign_opc", 00:05:03.082 "accel_get_module_info", 00:05:03.082 "accel_get_opc_assignments", 00:05:03.082 "vmd_rescan", 00:05:03.082 "vmd_remove_device", 00:05:03.082 "vmd_enable", 00:05:03.082 "sock_get_default_impl", 00:05:03.082 "sock_set_default_impl", 00:05:03.082 "sock_impl_set_options", 00:05:03.082 "sock_impl_get_options", 00:05:03.082 "iobuf_get_stats", 00:05:03.082 "iobuf_set_options", 00:05:03.082 "keyring_get_keys", 00:05:03.082 "vfu_tgt_set_base_path", 00:05:03.082 "framework_get_pci_devices", 00:05:03.082 "framework_get_config", 00:05:03.082 "framework_get_subsystems", 00:05:03.082 "fsdev_set_opts", 00:05:03.082 "fsdev_get_opts", 00:05:03.082 "trace_get_info", 00:05:03.082 "trace_get_tpoint_group_mask", 00:05:03.082 "trace_disable_tpoint_group", 00:05:03.082 "trace_enable_tpoint_group", 00:05:03.082 "trace_clear_tpoint_mask", 00:05:03.082 "trace_set_tpoint_mask", 00:05:03.082 "notify_get_notifications", 00:05:03.082 "notify_get_types", 00:05:03.082 "spdk_get_version", 00:05:03.082 "rpc_get_methods" 00:05:03.082 ] 00:05:03.082 15:22:42 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:03.082 15:22:42 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:03.082 15:22:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:03.082 15:22:42 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:03.082 15:22:42 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2884372 00:05:03.082 15:22:42 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 2884372 ']' 00:05:03.082 15:22:42 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 2884372 00:05:03.082 15:22:42 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:03.082 15:22:42 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:03.082 15:22:42 spdkcli_tcp -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2884372 00:05:03.082 15:22:42 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:03.082 15:22:42 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:03.082 15:22:42 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2884372' 00:05:03.082 killing process with pid 2884372 00:05:03.082 15:22:42 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 2884372 00:05:03.082 15:22:42 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 2884372 00:05:03.343 00:05:03.343 real 0m1.526s 00:05:03.343 user 0m2.755s 00:05:03.343 sys 0m0.463s 00:05:03.343 15:22:42 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:03.343 15:22:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:03.343 ************************************ 00:05:03.343 END TEST spdkcli_tcp 00:05:03.343 ************************************ 00:05:03.343 15:22:42 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:03.343 15:22:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:03.343 15:22:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:03.343 15:22:42 -- common/autotest_common.sh@10 -- # set +x 00:05:03.343 ************************************ 00:05:03.343 START TEST dpdk_mem_utility 00:05:03.343 ************************************ 00:05:03.343 15:22:42 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:03.343 * Looking for test storage... 
00:05:03.604 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:03.604 15:22:42 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:03.604 15:22:42 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:05:03.604 15:22:42 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:03.604 15:22:42 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:03.604 15:22:42 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.604 15:22:42 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.604 15:22:42 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.604 15:22:42 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.604 15:22:42 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.604 15:22:42 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.604 15:22:42 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.604 15:22:42 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.604 15:22:42 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.604 15:22:42 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.604 15:22:42 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.604 15:22:42 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:03.604 15:22:42 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:03.604 15:22:42 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.604 15:22:42 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:03.604 15:22:42 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:03.604 15:22:42 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:03.604 15:22:42 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.604 15:22:42 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:03.604 15:22:42 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.604 15:22:42 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:03.604 15:22:42 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:03.604 15:22:42 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.604 15:22:42 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:03.604 15:22:42 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.604 15:22:42 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.604 15:22:42 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.605 15:22:42 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:03.605 15:22:42 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.605 15:22:42 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:03.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.605 --rc genhtml_branch_coverage=1 00:05:03.605 --rc genhtml_function_coverage=1 00:05:03.605 --rc genhtml_legend=1 00:05:03.605 --rc geninfo_all_blocks=1 00:05:03.605 --rc geninfo_unexecuted_blocks=1 00:05:03.605 00:05:03.605 ' 00:05:03.605 15:22:42 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:03.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.605 --rc genhtml_branch_coverage=1 00:05:03.605 --rc genhtml_function_coverage=1 00:05:03.605 --rc genhtml_legend=1 00:05:03.605 --rc geninfo_all_blocks=1 00:05:03.605 --rc 
geninfo_unexecuted_blocks=1 00:05:03.605 00:05:03.605 ' 00:05:03.605 15:22:42 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:03.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.605 --rc genhtml_branch_coverage=1 00:05:03.605 --rc genhtml_function_coverage=1 00:05:03.605 --rc genhtml_legend=1 00:05:03.605 --rc geninfo_all_blocks=1 00:05:03.605 --rc geninfo_unexecuted_blocks=1 00:05:03.605 00:05:03.605 ' 00:05:03.605 15:22:42 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:03.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.605 --rc genhtml_branch_coverage=1 00:05:03.605 --rc genhtml_function_coverage=1 00:05:03.605 --rc genhtml_legend=1 00:05:03.605 --rc geninfo_all_blocks=1 00:05:03.605 --rc geninfo_unexecuted_blocks=1 00:05:03.605 00:05:03.605 ' 00:05:03.605 15:22:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:03.605 15:22:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2884720 00:05:03.605 15:22:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2884720 00:05:03.605 15:22:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:03.605 15:22:42 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 2884720 ']' 00:05:03.605 15:22:42 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.605 15:22:42 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:03.605 15:22:42 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:03.605 15:22:42 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:03.605 15:22:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:03.605 [2024-10-01 15:22:42.967503] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:05:03.605 [2024-10-01 15:22:42.967577] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2884720 ] 00:05:03.605 [2024-10-01 15:22:43.002167] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:03.605 [2024-10-01 15:22:43.048679] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.865 [2024-10-01 15:22:43.082336] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.436 15:22:43 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:04.436 15:22:43 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:04.436 15:22:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:04.436 15:22:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:04.436 15:22:43 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.436 15:22:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:04.436 { 00:05:04.436 "filename": "/tmp/spdk_mem_dump.txt" 00:05:04.436 } 00:05:04.436 15:22:43 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.436 15:22:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:04.436 DPDK memory size 860.000000 MiB in 1 
heap(s) 00:05:04.436 1 heaps totaling size 860.000000 MiB 00:05:04.436 size: 860.000000 MiB heap id: 0 00:05:04.436 end heaps---------- 00:05:04.436 9 mempools totaling size 642.649841 MiB 00:05:04.436 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:04.436 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:04.436 size: 92.545471 MiB name: bdev_io_2884720 00:05:04.436 size: 51.011292 MiB name: evtpool_2884720 00:05:04.436 size: 50.003479 MiB name: msgpool_2884720 00:05:04.436 size: 36.509338 MiB name: fsdev_io_2884720 00:05:04.436 size: 21.763794 MiB name: PDU_Pool 00:05:04.436 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:04.436 size: 0.026123 MiB name: Session_Pool 00:05:04.436 end mempools------- 00:05:04.436 6 memzones totaling size 4.142822 MiB 00:05:04.436 size: 1.000366 MiB name: RG_ring_0_2884720 00:05:04.436 size: 1.000366 MiB name: RG_ring_1_2884720 00:05:04.436 size: 1.000366 MiB name: RG_ring_4_2884720 00:05:04.436 size: 1.000366 MiB name: RG_ring_5_2884720 00:05:04.436 size: 0.125366 MiB name: RG_ring_2_2884720 00:05:04.436 size: 0.015991 MiB name: RG_ring_3_2884720 00:05:04.436 end memzones------- 00:05:04.436 15:22:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:04.436 heap id: 0 total size: 860.000000 MiB number of busy elements: 44 number of free elements: 16 00:05:04.436 list of free elements. 
size: 13.984680 MiB 00:05:04.436 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:04.436 element at address: 0x200000800000 with size: 1.996948 MiB 00:05:04.436 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:05:04.436 element at address: 0x20001be00000 with size: 0.999878 MiB 00:05:04.436 element at address: 0x200034a00000 with size: 0.994446 MiB 00:05:04.436 element at address: 0x200009600000 with size: 0.959839 MiB 00:05:04.436 element at address: 0x200015e00000 with size: 0.954285 MiB 00:05:04.436 element at address: 0x20001c000000 with size: 0.936584 MiB 00:05:04.436 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:04.436 element at address: 0x20001d800000 with size: 0.582886 MiB 00:05:04.436 element at address: 0x200003e00000 with size: 0.495605 MiB 00:05:04.436 element at address: 0x20000d800000 with size: 0.490723 MiB 00:05:04.436 element at address: 0x20001c200000 with size: 0.485657 MiB 00:05:04.436 element at address: 0x200007000000 with size: 0.481934 MiB 00:05:04.436 element at address: 0x20002ac00000 with size: 0.410034 MiB 00:05:04.436 element at address: 0x200003a00000 with size: 0.354858 MiB 00:05:04.436 list of standard malloc elements. 
size: 199.218628 MiB 00:05:04.436 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:05:04.436 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:05:04.436 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:05:04.436 element at address: 0x20001befff80 with size: 1.000122 MiB 00:05:04.436 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:05:04.436 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:04.436 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:05:04.436 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:04.436 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:05:04.436 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:04.436 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:04.436 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:04.436 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:04.436 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:04.436 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:04.436 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:04.436 element at address: 0x200003a5ad80 with size: 0.000183 MiB 00:05:04.436 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:04.436 element at address: 0x200003a5f240 with size: 0.000183 MiB 00:05:04.436 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:05:04.436 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:05:04.437 element at address: 0x200003aff880 with size: 0.000183 MiB 00:05:04.437 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:04.437 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:04.437 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:05:04.437 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:04.437 element at address: 0x20000707b600 with size: 0.000183 MiB 00:05:04.437 element at 
address: 0x20000707b6c0 with size: 0.000183 MiB 00:05:04.437 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:05:04.437 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:05:04.437 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:05:04.437 element at address: 0x20000d87dac0 with size: 0.000183 MiB 00:05:04.437 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:05:04.437 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:05:04.437 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:05:04.437 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:05:04.437 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:05:04.437 element at address: 0x20001d895380 with size: 0.000183 MiB 00:05:04.437 element at address: 0x20001d895440 with size: 0.000183 MiB 00:05:04.437 element at address: 0x20002ac68f80 with size: 0.000183 MiB 00:05:04.437 element at address: 0x20002ac69040 with size: 0.000183 MiB 00:05:04.437 element at address: 0x20002ac6fc40 with size: 0.000183 MiB 00:05:04.437 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:05:04.437 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:05:04.437 list of memzone associated elements. 
size: 646.796692 MiB 00:05:04.437 element at address: 0x20001d895500 with size: 211.416748 MiB 00:05:04.437 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:04.437 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:05:04.437 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:04.437 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:05:04.437 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_2884720_0 00:05:04.437 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:04.437 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2884720_0 00:05:04.437 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:04.437 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2884720_0 00:05:04.437 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:05:04.437 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2884720_0 00:05:04.437 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:05:04.437 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:04.437 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:05:04.437 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:04.437 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:04.437 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2884720 00:05:04.437 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:04.437 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2884720 00:05:04.437 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:04.437 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2884720 00:05:04.437 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:05:04.437 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:04.437 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:05:04.437 
associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:04.437 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:05:04.437 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:04.437 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:05:04.437 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:04.437 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:04.437 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2884720 00:05:04.437 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:04.437 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2884720 00:05:04.437 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:05:04.437 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2884720 00:05:04.437 element at address: 0x200034afe940 with size: 1.000488 MiB 00:05:04.437 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2884720 00:05:04.437 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:05:04.437 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2884720 00:05:04.437 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:05:04.437 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2884720 00:05:04.437 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:05:04.437 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:04.437 element at address: 0x20000707b780 with size: 0.500488 MiB 00:05:04.437 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:04.437 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:05:04.437 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:04.437 element at address: 0x200003a5f300 with size: 0.125488 MiB 00:05:04.437 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2884720 00:05:04.437 element at address: 0x2000096f5b80 with size: 
0.031738 MiB 00:05:04.437 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:04.437 element at address: 0x20002ac69100 with size: 0.023743 MiB 00:05:04.437 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:04.437 element at address: 0x200003a5b040 with size: 0.016113 MiB 00:05:04.437 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2884720 00:05:04.437 element at address: 0x20002ac6f240 with size: 0.002441 MiB 00:05:04.437 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:04.437 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:04.437 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2884720 00:05:04.437 element at address: 0x200003aff940 with size: 0.000305 MiB 00:05:04.437 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2884720 00:05:04.437 element at address: 0x200003a5ae40 with size: 0.000305 MiB 00:05:04.437 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2884720 00:05:04.437 element at address: 0x20002ac6fd00 with size: 0.000305 MiB 00:05:04.437 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:04.437 15:22:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:04.437 15:22:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2884720 00:05:04.437 15:22:43 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 2884720 ']' 00:05:04.437 15:22:43 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 2884720 00:05:04.437 15:22:43 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:04.437 15:22:43 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:04.437 15:22:43 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2884720 00:05:04.697 15:22:43 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:04.697 
15:22:43 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:04.697 15:22:43 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2884720' 00:05:04.697 killing process with pid 2884720 00:05:04.697 15:22:43 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 2884720 00:05:04.697 15:22:43 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 2884720 00:05:04.697 00:05:04.697 real 0m1.409s 00:05:04.697 user 0m1.472s 00:05:04.697 sys 0m0.418s 00:05:04.697 15:22:44 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:04.697 15:22:44 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:04.697 ************************************ 00:05:04.697 END TEST dpdk_mem_utility 00:05:04.697 ************************************ 00:05:04.958 15:22:44 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:04.958 15:22:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:04.958 15:22:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:04.958 15:22:44 -- common/autotest_common.sh@10 -- # set +x 00:05:04.958 ************************************ 00:05:04.958 START TEST event 00:05:04.958 ************************************ 00:05:04.958 15:22:44 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:04.958 * Looking for test storage... 
00:05:04.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:04.958 15:22:44 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:04.958 15:22:44 event -- common/autotest_common.sh@1681 -- # lcov --version 00:05:04.958 15:22:44 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:04.958 15:22:44 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:04.958 15:22:44 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:04.958 15:22:44 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:04.958 15:22:44 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:04.958 15:22:44 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.958 15:22:44 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:04.958 15:22:44 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:04.958 15:22:44 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:04.958 15:22:44 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:04.958 15:22:44 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:04.958 15:22:44 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:04.958 15:22:44 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:04.958 15:22:44 event -- scripts/common.sh@344 -- # case "$op" in 00:05:04.958 15:22:44 event -- scripts/common.sh@345 -- # : 1 00:05:04.958 15:22:44 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:04.958 15:22:44 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:04.958 15:22:44 event -- scripts/common.sh@365 -- # decimal 1 00:05:04.958 15:22:44 event -- scripts/common.sh@353 -- # local d=1 00:05:04.958 15:22:44 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.958 15:22:44 event -- scripts/common.sh@355 -- # echo 1 00:05:04.958 15:22:44 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:04.958 15:22:44 event -- scripts/common.sh@366 -- # decimal 2 00:05:04.958 15:22:44 event -- scripts/common.sh@353 -- # local d=2 00:05:04.958 15:22:44 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.958 15:22:44 event -- scripts/common.sh@355 -- # echo 2 00:05:04.958 15:22:44 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:04.958 15:22:44 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:04.958 15:22:44 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:04.958 15:22:44 event -- scripts/common.sh@368 -- # return 0 00:05:04.958 15:22:44 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.958 15:22:44 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:04.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.958 --rc genhtml_branch_coverage=1 00:05:04.958 --rc genhtml_function_coverage=1 00:05:04.958 --rc genhtml_legend=1 00:05:04.958 --rc geninfo_all_blocks=1 00:05:04.958 --rc geninfo_unexecuted_blocks=1 00:05:04.958 00:05:04.958 ' 00:05:04.958 15:22:44 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:04.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.958 --rc genhtml_branch_coverage=1 00:05:04.958 --rc genhtml_function_coverage=1 00:05:04.958 --rc genhtml_legend=1 00:05:04.958 --rc geninfo_all_blocks=1 00:05:04.958 --rc geninfo_unexecuted_blocks=1 00:05:04.958 00:05:04.958 ' 00:05:04.958 15:22:44 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:04.958 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:04.958 --rc genhtml_branch_coverage=1 00:05:04.958 --rc genhtml_function_coverage=1 00:05:04.958 --rc genhtml_legend=1 00:05:04.958 --rc geninfo_all_blocks=1 00:05:04.958 --rc geninfo_unexecuted_blocks=1 00:05:04.958 00:05:04.958 ' 00:05:04.958 15:22:44 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:04.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.958 --rc genhtml_branch_coverage=1 00:05:04.958 --rc genhtml_function_coverage=1 00:05:04.958 --rc genhtml_legend=1 00:05:04.958 --rc geninfo_all_blocks=1 00:05:04.958 --rc geninfo_unexecuted_blocks=1 00:05:04.958 00:05:04.958 ' 00:05:04.959 15:22:44 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:04.959 15:22:44 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:04.959 15:22:44 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:04.959 15:22:44 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:04.959 15:22:44 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:04.959 15:22:44 event -- common/autotest_common.sh@10 -- # set +x 00:05:05.218 ************************************ 00:05:05.218 START TEST event_perf 00:05:05.218 ************************************ 00:05:05.218 15:22:44 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:05.218 Running I/O for 1 seconds...[2024-10-01 15:22:44.454976] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
00:05:05.218 [2024-10-01 15:22:44.455076] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2885053 ] 00:05:05.218 [2024-10-01 15:22:44.493581] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:05.218 [2024-10-01 15:22:44.538779] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:05.218 [2024-10-01 15:22:44.575593] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:05.218 [2024-10-01 15:22:44.575752] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:05.218 [2024-10-01 15:22:44.575923] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.218 Running I/O for 1 seconds...[2024-10-01 15:22:44.575925] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:06.158 00:05:06.158 lcore 0: 185477 00:05:06.158 lcore 1: 185477 00:05:06.158 lcore 2: 185479 00:05:06.158 lcore 3: 185480 00:05:06.158 done. 
00:05:06.158 00:05:06.158 real 0m1.179s 00:05:06.158 user 0m4.082s 00:05:06.158 sys 0m0.094s 00:05:06.158 15:22:45 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:06.158 15:22:45 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:06.158 ************************************ 00:05:06.158 END TEST event_perf 00:05:06.158 ************************************ 00:05:06.418 15:22:45 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:06.418 15:22:45 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:06.418 15:22:45 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:06.418 15:22:45 event -- common/autotest_common.sh@10 -- # set +x 00:05:06.418 ************************************ 00:05:06.418 START TEST event_reactor 00:05:06.418 ************************************ 00:05:06.418 15:22:45 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:06.418 [2024-10-01 15:22:45.712752] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:05:06.418 [2024-10-01 15:22:45.712841] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2885388 ] 00:05:06.418 [2024-10-01 15:22:45.748341] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:06.418 [2024-10-01 15:22:45.796570] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.418 [2024-10-01 15:22:45.826607] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.799 test_start 00:05:07.799 oneshot 00:05:07.799 tick 100 00:05:07.799 tick 100 00:05:07.799 tick 250 00:05:07.799 tick 100 00:05:07.799 tick 100 00:05:07.799 tick 100 00:05:07.799 tick 250 00:05:07.799 tick 500 00:05:07.799 tick 100 00:05:07.799 tick 100 00:05:07.799 tick 250 00:05:07.799 tick 100 00:05:07.799 tick 100 00:05:07.799 test_end 00:05:07.799 00:05:07.799 real 0m1.171s 00:05:07.799 user 0m1.077s 00:05:07.799 sys 0m0.090s 00:05:07.799 15:22:46 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:07.799 15:22:46 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:07.799 ************************************ 00:05:07.799 END TEST event_reactor 00:05:07.799 ************************************ 00:05:07.799 15:22:46 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:07.799 15:22:46 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:07.799 15:22:46 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:07.799 15:22:46 event -- common/autotest_common.sh@10 -- # set +x 00:05:07.799 ************************************ 00:05:07.799 START TEST event_reactor_perf 00:05:07.799 ************************************ 00:05:07.799 15:22:46 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:07.799 [2024-10-01 15:22:46.963733] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
00:05:07.799 [2024-10-01 15:22:46.963820] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2885739 ] 00:05:07.799 [2024-10-01 15:22:46.999596] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:07.799 [2024-10-01 15:22:47.045571] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.799 [2024-10-01 15:22:47.077465] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.737 test_start 00:05:08.737 test_end 00:05:08.737 Performance: 539531 events per second 00:05:08.737 00:05:08.737 real 0m1.171s 00:05:08.737 user 0m1.084s 00:05:08.737 sys 0m0.084s 00:05:08.737 15:22:48 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:08.737 15:22:48 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:08.737 ************************************ 00:05:08.737 END TEST event_reactor_perf 00:05:08.737 ************************************ 00:05:08.737 15:22:48 event -- event/event.sh@49 -- # uname -s 00:05:08.737 15:22:48 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:08.737 15:22:48 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:08.737 15:22:48 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:08.737 15:22:48 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:08.737 15:22:48 event -- common/autotest_common.sh@10 -- # set +x 00:05:08.998 ************************************ 00:05:08.998 START TEST event_scheduler 00:05:08.998 ************************************ 00:05:08.998 15:22:48 event.event_scheduler -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:08.998 * Looking for test storage... 00:05:08.998 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:08.998 15:22:48 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:08.998 15:22:48 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:05:08.999 15:22:48 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:08.999 15:22:48 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:08.999 15:22:48 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:08.999 15:22:48 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:08.999 15:22:48 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:08.999 15:22:48 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.999 15:22:48 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:08.999 15:22:48 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:08.999 15:22:48 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:08.999 15:22:48 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:08.999 15:22:48 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:08.999 15:22:48 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:08.999 15:22:48 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:08.999 15:22:48 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:08.999 15:22:48 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:08.999 15:22:48 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:08.999 15:22:48 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:08.999 15:22:48 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:08.999 15:22:48 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:08.999 15:22:48 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.999 15:22:48 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:08.999 15:22:48 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:08.999 15:22:48 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:08.999 15:22:48 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:08.999 15:22:48 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.999 15:22:48 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:08.999 15:22:48 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:08.999 15:22:48 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:08.999 15:22:48 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:08.999 15:22:48 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:08.999 15:22:48 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.999 15:22:48 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:08.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.999 --rc genhtml_branch_coverage=1 00:05:08.999 --rc genhtml_function_coverage=1 00:05:08.999 --rc genhtml_legend=1 00:05:08.999 --rc geninfo_all_blocks=1 00:05:08.999 --rc geninfo_unexecuted_blocks=1 00:05:08.999 00:05:08.999 ' 00:05:08.999 15:22:48 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:08.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.999 --rc genhtml_branch_coverage=1 00:05:08.999 --rc genhtml_function_coverage=1 00:05:08.999 --rc 
genhtml_legend=1 00:05:08.999 --rc geninfo_all_blocks=1 00:05:08.999 --rc geninfo_unexecuted_blocks=1 00:05:08.999 00:05:08.999 ' 00:05:08.999 15:22:48 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:08.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.999 --rc genhtml_branch_coverage=1 00:05:08.999 --rc genhtml_function_coverage=1 00:05:08.999 --rc genhtml_legend=1 00:05:08.999 --rc geninfo_all_blocks=1 00:05:08.999 --rc geninfo_unexecuted_blocks=1 00:05:08.999 00:05:08.999 ' 00:05:08.999 15:22:48 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:08.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.999 --rc genhtml_branch_coverage=1 00:05:08.999 --rc genhtml_function_coverage=1 00:05:08.999 --rc genhtml_legend=1 00:05:08.999 --rc geninfo_all_blocks=1 00:05:08.999 --rc geninfo_unexecuted_blocks=1 00:05:08.999 00:05:08.999 ' 00:05:08.999 15:22:48 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:08.999 15:22:48 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2886094 00:05:08.999 15:22:48 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:08.999 15:22:48 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2886094 00:05:08.999 15:22:48 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:08.999 15:22:48 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 2886094 ']' 00:05:08.999 15:22:48 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.999 15:22:48 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:08.999 15:22:48 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.999 15:22:48 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:08.999 15:22:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:08.999 [2024-10-01 15:22:48.445684] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:05:08.999 [2024-10-01 15:22:48.445756] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2886094 ] 00:05:09.259 [2024-10-01 15:22:48.480283] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:09.259 [2024-10-01 15:22:48.531295] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:09.259 [2024-10-01 15:22:48.583388] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.259 [2024-10-01 15:22:48.583555] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.259 [2024-10-01 15:22:48.583723] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:09.259 [2024-10-01 15:22:48.583723] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:09.828 15:22:49 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:09.828 15:22:49 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:09.828 15:22:49 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:09.828 15:22:49 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.828 15:22:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 
00:05:09.828 [2024-10-01 15:22:49.266169] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:09.828 [2024-10-01 15:22:49.266187] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:09.828 [2024-10-01 15:22:49.266196] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:09.828 [2024-10-01 15:22:49.266203] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:09.828 [2024-10-01 15:22:49.266208] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:09.829 15:22:49 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.829 15:22:49 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:09.829 15:22:49 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.829 15:22:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:10.088 [2024-10-01 15:22:49.321330] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:10.088 15:22:49 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:10.088 15:22:49 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:05:10.089 15:22:49 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:10.089 15:22:49 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:10.089 15:22:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:10.089 ************************************
00:05:10.089 START TEST scheduler_create_thread
00:05:10.089 ************************************
00:05:10.089 15:22:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread
00:05:10.089 15:22:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:05:10.089 15:22:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:10.089 15:22:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:10.089 2
00:05:10.089 15:22:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:10.089 15:22:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:05:10.089 15:22:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:10.089 15:22:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:10.089 3
00:05:10.089 15:22:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:10.089 15:22:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:05:10.089 15:22:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:10.089 15:22:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:10.089 4
00:05:10.089 15:22:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:10.089 15:22:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:05:10.089 15:22:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:10.089 15:22:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:10.089 5
00:05:10.089 15:22:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:10.089 15:22:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:05:10.089 15:22:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:10.089 15:22:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:10.089 6
00:05:10.089 15:22:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:10.089 15:22:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:05:10.089 15:22:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:10.089 15:22:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:10.089 7
00:05:10.089 15:22:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:10.089 15:22:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:05:10.089 15:22:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:10.089 15:22:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:10.089 8
00:05:10.089 15:22:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:10.089 15:22:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:05:10.089 15:22:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:10.089 15:22:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:10.089 9
00:05:10.089 15:22:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:10.089 15:22:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:05:10.089 15:22:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:10.089 15:22:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:11.485 10
00:05:11.485 15:22:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:11.485 15:22:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:05:11.485 15:22:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:11.485 15:22:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:12.424 15:22:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:12.424 15:22:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:05:12.424 15:22:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:05:12.424 15:22:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:12.424 15:22:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:12.994 15:22:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:12.994 15:22:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:05:12.994 15:22:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:12.994 15:22:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:13.563 15:22:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:13.563 15:22:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:05:13.563 15:22:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:05:13.563 15:22:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:13.563 15:22:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:14.132 15:22:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:14.132
00:05:14.132 real 0m4.216s
00:05:14.132 user 0m0.024s
00:05:14.132 sys 0m0.008s
00:05:14.132 15:22:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:14.132 15:22:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:14.132 ************************************
00:05:14.132 END TEST scheduler_create_thread
00:05:14.132 ************************************
00:05:14.392 15:22:53 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:05:14.392 15:22:53 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2886094
00:05:14.392 15:22:53 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 2886094 ']'
00:05:14.392 15:22:53 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 2886094
00:05:14.392 15:22:53 event.event_scheduler -- common/autotest_common.sh@955 -- # uname
00:05:14.392 15:22:53 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:14.392 15:22:53 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2886094
00:05:14.392 15:22:53 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:05:14.392 15:22:53 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:05:14.392 15:22:53 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2886094'
killing process with pid 2886094
15:22:53 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 2886094
00:05:14.652 15:22:53 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 2886094
00:05:14.652 [2024-10-01 15:22:53.857399] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:05:14.652
00:05:14.652 real 0m5.836s
00:05:14.652 user 0m13.420s
00:05:14.652 sys 0m0.435s
00:05:14.652 15:22:54 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:14.652 15:22:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:14.652 ************************************
00:05:14.652 END TEST event_scheduler
00:05:14.652 ************************************
00:05:14.652 15:22:54 event -- event/event.sh@51 -- # modprobe -n nbd
00:05:14.652 15:22:54 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:05:14.652 15:22:54 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:14.652 15:22:54 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:14.652 15:22:54 event -- common/autotest_common.sh@10 -- # set +x
00:05:14.912 ************************************
00:05:14.912 START TEST app_repeat
00:05:14.912 ************************************
00:05:14.912 15:22:54 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test
00:05:14.912 15:22:54 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:14.912 15:22:54 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:14.912 15:22:54 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:05:14.912 15:22:54 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:14.912 15:22:54 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:05:14.912 15:22:54 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:05:14.912 15:22:54 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:05:14.912 15:22:54 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2887193
00:05:14.912 15:22:54 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:05:14.912 15:22:54 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:05:14.912 15:22:54 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2887193'
Process app_repeat pid: 2887193
15:22:54 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:14.912 15:22:54 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
spdk_app_start Round 0
15:22:54 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2887193 /var/tmp/spdk-nbd.sock
00:05:14.912 15:22:54 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2887193 ']'
00:05:14.912 15:22:54 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:14.912 15:22:54 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:14.912 15:22:54 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
15:22:54 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:14.912 15:22:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:14.912 [2024-10-01 15:22:54.157650] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization...
00:05:14.912 [2024-10-01 15:22:54.157730] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2887193 ]
00:05:14.912 [2024-10-01 15:22:54.190513] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:05:14.912 [2024-10-01 15:22:54.240186] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:14.912 [2024-10-01 15:22:54.270017] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:05:14.912 [2024-10-01 15:22:54.270182] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:05:14.912 15:22:54 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:14.912 15:22:54 event.app_repeat -- common/autotest_common.sh@864 -- # return 0
00:05:14.912 15:22:54 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:15.172 Malloc0
00:05:15.172 15:22:54 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:15.433 Malloc1
00:05:15.433 15:22:54 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:15.433 15:22:54 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:15.433 15:22:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:15.433 15:22:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:15.433 15:22:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:15.433 15:22:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:15.433 15:22:54 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:15.433 15:22:54 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:15.433 15:22:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:15.433 15:22:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:15.433 15:22:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:15.433 15:22:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:15.433 15:22:54 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:15.433 15:22:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:15.433 15:22:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:15.433 15:22:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:15.693 /dev/nbd0
00:05:15.693 15:22:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:15.693 15:22:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:15.693 15:22:54 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:05:15.693 15:22:54 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:05:15.693 15:22:54 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:05:15.693 15:22:54 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:05:15.693 15:22:54 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:05:15.693 15:22:54 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:05:15.693 15:22:54 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:05:15.693 15:22:54 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:05:15.693 15:22:54 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:15.693 1+0 records in
00:05:15.693 1+0 records out
00:05:15.693 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000303926 s, 13.5 MB/s
00:05:15.693 15:22:54 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:15.693 15:22:54 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:05:15.693 15:22:54 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:15.693 15:22:54 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:05:15.693 15:22:54 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:05:15.694 15:22:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:15.694 15:22:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:15.694 15:22:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:15.954 /dev/nbd1
00:05:15.954 15:22:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:15.954 15:22:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:15.954 15:22:55 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:05:15.954 15:22:55 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:05:15.954 15:22:55 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:05:15.954 15:22:55 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:05:15.954 15:22:55 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:05:15.954 15:22:55 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:05:15.954 15:22:55 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:05:15.954 15:22:55 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:05:15.954 15:22:55 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:15.954 1+0 records in
00:05:15.954 1+0 records out
00:05:15.954 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283426 s, 14.5 MB/s
00:05:15.954 15:22:55 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:15.954 15:22:55 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:05:15.954 15:22:55 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:15.954 15:22:55 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:05:15.954 15:22:55 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:05:15.954 15:22:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:15.954 15:22:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:15.954 15:22:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:15.954 15:22:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:15.954 15:22:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:15.954 15:22:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:15.954 {
00:05:15.954 "nbd_device": "/dev/nbd0",
00:05:15.954 "bdev_name": "Malloc0"
00:05:15.954 },
00:05:15.954 {
00:05:15.954 "nbd_device": "/dev/nbd1",
00:05:15.954 "bdev_name": "Malloc1"
00:05:15.954 }
00:05:15.954 ]'
00:05:15.954 15:22:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:15.954 {
00:05:15.954 "nbd_device": "/dev/nbd0",
00:05:15.954 "bdev_name": "Malloc0"
00:05:15.954 },
00:05:15.954 {
00:05:15.954 "nbd_device": "/dev/nbd1",
00:05:15.954 "bdev_name": "Malloc1"
00:05:15.954 }
00:05:15.954 ]'
00:05:15.954 15:22:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:16.214 15:22:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:16.214 /dev/nbd1'
00:05:16.214 15:22:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:16.214 /dev/nbd1'
00:05:16.214 15:22:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:16.214 15:22:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:16.214 15:22:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:16.214 15:22:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:16.214 15:22:55 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:16.214 15:22:55 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:16.214 15:22:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:16.214 15:22:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:16.215 15:22:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:16.215 15:22:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:16.215 15:22:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:16.215 15:22:55 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:16.215 256+0 records in
00:05:16.215 256+0 records out
00:05:16.215 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127165 s, 82.5 MB/s
00:05:16.215 15:22:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:16.215 15:22:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:16.215 256+0 records in
00:05:16.215 256+0 records out
00:05:16.215 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120471 s, 87.0 MB/s
00:05:16.215 15:22:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:16.215 15:22:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:16.215 256+0 records in
00:05:16.215 256+0 records out
00:05:16.215 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0130686 s, 80.2 MB/s
00:05:16.215 15:22:55 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:16.215 15:22:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:16.215 15:22:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:16.215 15:22:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:16.215 15:22:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:16.215 15:22:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:16.215 15:22:55 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:16.215 15:22:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:16.215 15:22:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:05:16.215 15:22:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:16.215 15:22:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:05:16.215 15:22:55 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:16.215 15:22:55 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:16.215 15:22:55 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:16.215 15:22:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:16.215 15:22:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:16.215 15:22:55 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:16.215 15:22:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:16.215 15:22:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:16.474 15:22:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:16.474 15:22:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:16.474 15:22:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:16.474 15:22:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:16.474 15:22:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:16.474 15:22:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:16.474 15:22:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:16.474 15:22:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:16.474 15:22:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:16.474 15:22:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:16.474 15:22:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:16.474 15:22:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:16.474 15:22:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:16.474 15:22:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:16.474 15:22:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:16.474 15:22:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:16.474 15:22:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:16.474 15:22:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:16.474 15:22:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:16.474 15:22:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:16.474 15:22:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:16.734 15:22:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:16.734 15:22:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:16.734 15:22:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:16.734 15:22:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:16.734 15:22:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:16.734 15:22:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:16.734 15:22:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:16.734 15:22:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:16.734 15:22:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:16.734 15:22:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:16.734 15:22:56 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:16.734 15:22:56 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:16.734 15:22:56 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:16.993 15:22:56 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:05:16.993 [2024-10-01 15:22:56.425628] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:17.253 [2024-10-01 15:22:56.452479] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:05:17.253 [2024-10-01 15:22:56.452479] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:05:17.253 [2024-10-01 15:22:56.481621] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:17.253 [2024-10-01 15:22:56.481652] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:20.550 15:22:59 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:20.550 15:22:59 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
spdk_app_start Round 1
15:22:59 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2887193 /var/tmp/spdk-nbd.sock
00:05:20.550 15:22:59 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2887193 ']'
00:05:20.550 15:22:59 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:20.550 15:22:59 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:20.550 15:22:59 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:20.550 15:22:59 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:20.550 15:22:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:20.550 15:22:59 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:20.550 15:22:59 event.app_repeat -- common/autotest_common.sh@864 -- # return 0
00:05:20.550 15:22:59 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:20.550 Malloc0
00:05:20.550 15:22:59 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:20.550 Malloc1
00:05:20.550 15:22:59 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:20.550 15:22:59 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:20.550 15:22:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:20.550 15:22:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:20.550 15:22:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:20.550 15:22:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:20.550 15:22:59 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:20.550 15:22:59 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:20.550 15:22:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:20.550 15:22:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:20.550 15:22:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:20.550 15:22:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:20.550 15:22:59 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:20.550 15:22:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:20.550 15:22:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:20.550 15:22:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:20.811 /dev/nbd0
00:05:20.811 15:23:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:20.811 15:23:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:20.811 15:23:00 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:05:20.811 15:23:00 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:05:20.811 15:23:00 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:05:20.811 15:23:00 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:05:20.811 15:23:00 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:05:20.811 15:23:00 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:05:20.811 15:23:00 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:05:20.811 15:23:00 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:05:20.811 15:23:00 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:20.811 1+0 records in
00:05:20.811 1+0 records out
00:05:20.811 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278721 s, 14.7 MB/s
00:05:20.811 15:23:00 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:20.811 15:23:00 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:05:20.811 15:23:00 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:20.811 15:23:00 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:05:20.811 15:23:00 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:05:20.811 15:23:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:20.811 15:23:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:20.811 15:23:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:21.072 /dev/nbd1
00:05:21.072 15:23:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:21.072 15:23:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:21.072 15:23:00 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:05:21.072 15:23:00 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:05:21.072 15:23:00 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:05:21.072 15:23:00 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:05:21.072 15:23:00 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:05:21.072 15:23:00 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:05:21.072 15:23:00 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:05:21.072 15:23:00 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:05:21.072 15:23:00 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:21.072 1+0 records in
00:05:21.072 1+0 records out
00:05:21.072 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002477 s, 16.5 MB/s
00:05:21.072 15:23:00 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:21.072 15:23:00 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:05:21.072 15:23:00 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:21.072 15:23:00 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:05:21.072 15:23:00 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:05:21.072 15:23:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:21.072 15:23:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:21.072 15:23:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:21.072 15:23:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:21.072 15:23:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:21.333 15:23:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:21.333 {
00:05:21.333 "nbd_device": "/dev/nbd0",
00:05:21.333 "bdev_name": "Malloc0"
00:05:21.333 },
00:05:21.333 {
00:05:21.333 "nbd_device": "/dev/nbd1",
00:05:21.333 "bdev_name": "Malloc1"
00:05:21.333 }
00:05:21.333 ]'
00:05:21.333 15:23:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:21.333 {
00:05:21.333 "nbd_device": "/dev/nbd0",
00:05:21.333 "bdev_name": "Malloc0"
00:05:21.333 },
00:05:21.333 {
00:05:21.333 "nbd_device": "/dev/nbd1",
00:05:21.333 "bdev_name": "Malloc1"
00:05:21.333 }
00:05:21.333 ]'
00:05:21.333 15:23:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:21.333 15:23:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:21.333 /dev/nbd1'
00:05:21.333 15:23:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:21.333 /dev/nbd1'
15:23:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:21.333 15:23:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:21.334 15:23:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:21.334 15:23:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:21.334 15:23:00 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:21.334 15:23:00 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:21.334 15:23:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.334 15:23:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:21.334 15:23:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:21.334 15:23:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:21.334 15:23:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:21.334 15:23:00 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:21.334 256+0 records in 00:05:21.334 256+0 records out 00:05:21.334 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127348 s, 82.3 MB/s 00:05:21.334 15:23:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:21.334 15:23:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:21.334 256+0 records in 00:05:21.334 256+0 records out 00:05:21.334 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120209 s, 87.2 MB/s 00:05:21.334 15:23:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:21.334 15:23:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:21.334 256+0 records in 00:05:21.334 256+0 records out 00:05:21.334 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128925 s, 81.3 MB/s 00:05:21.334 15:23:00 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:21.334 15:23:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.334 15:23:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:21.334 15:23:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:21.334 15:23:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:21.334 15:23:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:21.334 15:23:00 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:21.334 15:23:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:21.334 15:23:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:21.334 15:23:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:21.334 15:23:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:21.334 15:23:00 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:21.334 15:23:00 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:21.334 15:23:00 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.334 15:23:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:21.334 15:23:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:21.334 15:23:00 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:21.334 15:23:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:21.334 15:23:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:21.595 15:23:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:21.595 15:23:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:21.595 15:23:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:21.596 15:23:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:21.596 15:23:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:21.596 15:23:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:21.596 15:23:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:21.596 15:23:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:21.596 15:23:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:21.596 15:23:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:21.856 15:23:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:21.856 15:23:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:21.856 15:23:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:21.856 15:23:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:21.856 15:23:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:21.856 15:23:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:21.856 15:23:01 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:21.856 15:23:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:21.856 15:23:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:21.857 15:23:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.857 15:23:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:21.857 15:23:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:21.857 15:23:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:21.857 15:23:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:21.857 15:23:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:21.857 15:23:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:21.857 15:23:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:22.118 15:23:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:22.118 15:23:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:22.118 15:23:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:22.118 15:23:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:22.118 15:23:01 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:22.118 15:23:01 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:22.118 15:23:01 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:22.118 15:23:01 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:22.379 [2024-10-01 15:23:01.594501] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:22.379 [2024-10-01 15:23:01.621639] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.379 [2024-10-01 15:23:01.621639] 
reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:22.379 [2024-10-01 15:23:01.651379] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:22.379 [2024-10-01 15:23:01.651411] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:25.683 15:23:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:25.683 15:23:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:25.683 spdk_app_start Round 2 00:05:25.683 15:23:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2887193 /var/tmp/spdk-nbd.sock 00:05:25.683 15:23:04 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2887193 ']' 00:05:25.683 15:23:04 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:25.683 15:23:04 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:25.683 15:23:04 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:25.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:25.683 15:23:04 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:25.683 15:23:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:25.683 15:23:04 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:25.683 15:23:04 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:25.683 15:23:04 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:25.683 Malloc0 00:05:25.683 15:23:04 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:25.683 Malloc1 00:05:25.683 15:23:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:25.683 15:23:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.683 15:23:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:25.683 15:23:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:25.683 15:23:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.683 15:23:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:25.683 15:23:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:25.683 15:23:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.683 15:23:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:25.683 15:23:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:25.683 15:23:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.683 15:23:05 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:25.683 15:23:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:25.683 15:23:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:25.683 15:23:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:25.683 15:23:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:25.944 /dev/nbd0 00:05:25.944 15:23:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:25.944 15:23:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:25.944 15:23:05 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:25.944 15:23:05 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:25.944 15:23:05 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:25.944 15:23:05 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:25.944 15:23:05 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:25.944 15:23:05 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:25.944 15:23:05 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:25.944 15:23:05 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:25.944 15:23:05 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:25.944 1+0 records in 00:05:25.944 1+0 records out 00:05:25.944 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278015 s, 14.7 MB/s 00:05:25.944 15:23:05 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:25.944 15:23:05 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:25.944 15:23:05 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:25.944 15:23:05 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:25.944 15:23:05 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:25.944 15:23:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:25.944 15:23:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:25.944 15:23:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:26.204 /dev/nbd1 00:05:26.204 15:23:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:26.204 15:23:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:26.204 15:23:05 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:26.204 15:23:05 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:26.204 15:23:05 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:26.204 15:23:05 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:26.204 15:23:05 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:26.204 15:23:05 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:26.204 15:23:05 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:26.204 15:23:05 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:26.204 15:23:05 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:26.204 1+0 records in 00:05:26.204 1+0 records out 00:05:26.204 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000232415 s, 17.6 MB/s 00:05:26.204 15:23:05 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:26.204 15:23:05 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:26.204 15:23:05 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:26.204 15:23:05 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:26.204 15:23:05 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:26.204 15:23:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:26.204 15:23:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:26.204 15:23:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:26.204 15:23:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.204 15:23:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:26.464 15:23:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:26.464 { 00:05:26.464 "nbd_device": "/dev/nbd0", 00:05:26.464 "bdev_name": "Malloc0" 00:05:26.464 }, 00:05:26.464 { 00:05:26.464 "nbd_device": "/dev/nbd1", 00:05:26.464 "bdev_name": "Malloc1" 00:05:26.464 } 00:05:26.464 ]' 00:05:26.464 15:23:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:26.464 { 00:05:26.464 "nbd_device": "/dev/nbd0", 00:05:26.464 "bdev_name": "Malloc0" 00:05:26.464 }, 00:05:26.464 { 00:05:26.464 "nbd_device": "/dev/nbd1", 00:05:26.464 "bdev_name": "Malloc1" 00:05:26.464 } 00:05:26.464 ]' 00:05:26.464 15:23:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:26.464 15:23:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:26.464 /dev/nbd1' 00:05:26.464 15:23:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:26.464 /dev/nbd1' 00:05:26.464 
15:23:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:26.464 15:23:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:26.464 15:23:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:26.464 15:23:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:26.464 15:23:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:26.464 15:23:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:26.464 15:23:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.464 15:23:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:26.464 15:23:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:26.464 15:23:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:26.464 15:23:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:26.464 15:23:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:26.464 256+0 records in 00:05:26.464 256+0 records out 00:05:26.464 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0123088 s, 85.2 MB/s 00:05:26.464 15:23:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:26.464 15:23:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:26.464 256+0 records in 00:05:26.464 256+0 records out 00:05:26.464 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0123636 s, 84.8 MB/s 00:05:26.464 15:23:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:26.464 15:23:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:26.464 256+0 records in 00:05:26.464 256+0 records out 00:05:26.464 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0129493 s, 81.0 MB/s 00:05:26.464 15:23:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:26.464 15:23:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.464 15:23:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:26.464 15:23:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:26.464 15:23:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:26.464 15:23:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:26.464 15:23:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:26.464 15:23:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:26.464 15:23:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:26.464 15:23:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:26.464 15:23:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:26.464 15:23:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:26.464 15:23:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:26.464 15:23:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.464 15:23:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:26.464 15:23:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:26.464 15:23:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:26.464 15:23:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:26.464 15:23:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:26.723 15:23:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:26.723 15:23:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:26.723 15:23:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:26.723 15:23:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:26.723 15:23:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:26.723 15:23:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:26.723 15:23:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:26.723 15:23:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:26.723 15:23:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:26.723 15:23:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:26.983 15:23:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:26.983 15:23:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:26.983 15:23:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:26.984 15:23:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:26.984 15:23:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:26.984 15:23:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:26.984 15:23:06 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:26.984 15:23:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:26.984 15:23:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:26.984 15:23:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.984 15:23:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:26.984 15:23:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:26.984 15:23:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:26.984 15:23:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:27.244 15:23:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:27.244 15:23:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:27.244 15:23:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:27.244 15:23:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:27.244 15:23:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:27.244 15:23:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:27.244 15:23:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:27.244 15:23:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:27.244 15:23:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:27.244 15:23:06 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:27.244 15:23:06 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:27.505 [2024-10-01 15:23:06.741799] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:27.505 [2024-10-01 15:23:06.768573] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.505 [2024-10-01 15:23:06.768573] 
reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:27.505 [2024-10-01 15:23:06.797792] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:27.505 [2024-10-01 15:23:06.797825] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:30.805 15:23:09 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2887193 /var/tmp/spdk-nbd.sock 00:05:30.805 15:23:09 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2887193 ']' 00:05:30.805 15:23:09 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:30.805 15:23:09 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:30.805 15:23:09 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:30.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
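The `waitfornbd`/`waitfornbd_exit` helpers traced above poll `/proc/partitions` up to 20 times (`(( i <= 20 ))` ... `break`) before giving up. A minimal standalone sketch of that retry pattern, with a generic command in place of the nbd-specific `grep`; the `wait_for` helper name is hypothetical, not from `autotest_common.sh`:

```shell
# Sketch of the bounded retry loop seen in the trace:
# run a condition up to 20 times, stopping as soon as it succeeds.
wait_for() {
  local i
  for ((i = 1; i <= 20; i++)); do
    "$@" && return 0   # condition met; mirrors the 'break' in the trace
    sleep 0.1          # brief back-off before the next attempt
  done
  return 1             # gave up after 20 attempts
}

wait_for test -d /tmp && echo "condition met"
```

In the real harness the polled condition is `grep -q -w nbd0 /proc/partitions`, i.e. "has the kernel registered the nbd device yet"; the bound keeps a missing device from hanging the test forever.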
00:05:30.805 15:23:09 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:30.805 15:23:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:30.805 15:23:09 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:30.805 15:23:09 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:30.805 15:23:09 event.app_repeat -- event/event.sh@39 -- # killprocess 2887193 00:05:30.805 15:23:09 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 2887193 ']' 00:05:30.805 15:23:09 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 2887193 00:05:30.805 15:23:09 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:30.805 15:23:09 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:30.805 15:23:09 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2887193 00:05:30.805 15:23:09 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:30.805 15:23:09 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:30.805 15:23:09 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2887193' 00:05:30.805 killing process with pid 2887193 00:05:30.805 15:23:09 event.app_repeat -- common/autotest_common.sh@969 -- # kill 2887193 00:05:30.805 15:23:09 event.app_repeat -- common/autotest_common.sh@974 -- # wait 2887193 00:05:30.805 spdk_app_start is called in Round 0. 00:05:30.805 Shutdown signal received, stop current app iteration 00:05:30.805 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 reinitialization... 00:05:30.805 spdk_app_start is called in Round 1. 00:05:30.805 Shutdown signal received, stop current app iteration 00:05:30.805 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 reinitialization... 00:05:30.805 spdk_app_start is called in Round 2. 
00:05:30.805 Shutdown signal received, stop current app iteration 00:05:30.805 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 reinitialization... 00:05:30.805 spdk_app_start is called in Round 3. 00:05:30.805 Shutdown signal received, stop current app iteration 00:05:30.805 15:23:09 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:30.805 15:23:09 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:30.805 00:05:30.805 real 0m15.867s 00:05:30.805 user 0m34.916s 00:05:30.805 sys 0m2.254s 00:05:30.805 15:23:09 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:30.805 15:23:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:30.805 ************************************ 00:05:30.805 END TEST app_repeat 00:05:30.805 ************************************ 00:05:30.805 15:23:10 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:30.805 15:23:10 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:30.805 15:23:10 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:30.805 15:23:10 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:30.805 15:23:10 event -- common/autotest_common.sh@10 -- # set +x 00:05:30.805 ************************************ 00:05:30.805 START TEST cpu_locks 00:05:30.805 ************************************ 00:05:30.805 15:23:10 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:30.805 * Looking for test storage... 
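The `killprocess` flow traced above probes a PID with `kill -0` and inspects its command name with `ps -o comm=` before sending the real signal. A minimal standalone sketch of that pattern (the helper names here are illustrative reductions, not the exact autotest_common.sh definitions):

```shell
#!/usr/bin/env bash
# check_alive: succeed when PID exists and we are allowed to signal it.
# kill -0 delivers no signal; it only performs the existence/permission check.
check_alive() {
    kill -0 "$1" 2>/dev/null
}

# proc_name: resolve the command name, as the harness does before refusing
# to kill anything named "sudo". (--no-headers/-o comm= are procps options.)
proc_name() {
    ps --no-headers -o comm= "$1"
}

# Demo against our own shell, which is necessarily alive.
if check_alive $$; then
    echo "pid $$ is alive as $(proc_name $$)"
fi
```

The same probe underlies the `kill -0 $pid` calls visible in the trace at `autotest_common.sh@954`.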
00:05:30.805 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:30.805 15:23:10 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:30.805 15:23:10 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:05:30.805 15:23:10 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:30.805 15:23:10 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:30.805 15:23:10 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:30.805 15:23:10 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:30.805 15:23:10 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:30.805 15:23:10 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.805 15:23:10 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:30.805 15:23:10 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:30.805 15:23:10 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:30.805 15:23:10 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:30.805 15:23:10 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:30.806 15:23:10 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:30.806 15:23:10 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:30.806 15:23:10 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:30.806 15:23:10 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:30.806 15:23:10 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:30.806 15:23:10 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:30.806 15:23:10 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:30.806 15:23:10 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:30.806 15:23:10 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.806 15:23:10 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:30.806 15:23:10 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:30.806 15:23:10 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:30.806 15:23:10 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:30.806 15:23:10 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.806 15:23:10 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:31.067 15:23:10 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.067 15:23:10 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.067 15:23:10 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.067 15:23:10 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:31.067 15:23:10 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.067 15:23:10 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:31.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.067 --rc genhtml_branch_coverage=1 00:05:31.067 --rc genhtml_function_coverage=1 00:05:31.067 --rc genhtml_legend=1 00:05:31.067 --rc geninfo_all_blocks=1 00:05:31.067 --rc geninfo_unexecuted_blocks=1 00:05:31.067 00:05:31.067 ' 00:05:31.067 15:23:10 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:31.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.067 --rc genhtml_branch_coverage=1 00:05:31.067 --rc genhtml_function_coverage=1 00:05:31.067 --rc genhtml_legend=1 00:05:31.067 --rc geninfo_all_blocks=1 00:05:31.067 --rc geninfo_unexecuted_blocks=1 
00:05:31.067 00:05:31.067 ' 00:05:31.067 15:23:10 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:31.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.067 --rc genhtml_branch_coverage=1 00:05:31.067 --rc genhtml_function_coverage=1 00:05:31.067 --rc genhtml_legend=1 00:05:31.067 --rc geninfo_all_blocks=1 00:05:31.067 --rc geninfo_unexecuted_blocks=1 00:05:31.067 00:05:31.067 ' 00:05:31.067 15:23:10 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:31.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.067 --rc genhtml_branch_coverage=1 00:05:31.067 --rc genhtml_function_coverage=1 00:05:31.067 --rc genhtml_legend=1 00:05:31.067 --rc geninfo_all_blocks=1 00:05:31.067 --rc geninfo_unexecuted_blocks=1 00:05:31.067 00:05:31.067 ' 00:05:31.067 15:23:10 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:31.067 15:23:10 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:31.067 15:23:10 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:31.067 15:23:10 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:31.067 15:23:10 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:31.067 15:23:10 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:31.067 15:23:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.067 ************************************ 00:05:31.067 START TEST default_locks 00:05:31.067 ************************************ 00:05:31.067 15:23:10 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:31.067 15:23:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2890673 00:05:31.067 15:23:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2890673 00:05:31.067 15:23:10 
event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:31.067 15:23:10 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 2890673 ']' 00:05:31.067 15:23:10 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.067 15:23:10 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:31.067 15:23:10 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.067 15:23:10 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:31.067 15:23:10 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.067 [2024-10-01 15:23:10.364634] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:05:31.067 [2024-10-01 15:23:10.364703] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2890673 ] 00:05:31.067 [2024-10-01 15:23:10.399620] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
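The `lt 1.15 2` / `cmp_versions` trace earlier splits dotted versions on `IFS=.-:` and compares components numerically. A simplified sketch of that comparison, assuming the behavior implied by the trace rather than copying scripts/common.sh verbatim:

```shell
#!/usr/bin/env bash
# ver_lt A B: succeed when version A sorts strictly before version B.
ver_lt() {
    local -a v1 v2
    IFS=.-: read -ra v1 <<< "$1"   # split on '.', '-' and ':' as in the trace
    IFS=.-: read -ra v2 <<< "$2"
    local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < len; i++ )); do
        # Missing components compare as 0, so "1.15" vs "2" acts like "1.15" vs "2.0".
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1  # equal versions are not "less than"
}

ver_lt 1.15 2 && echo "1.15 < 2"
```

This is how the harness decides, for example, which lcov option set to export.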
00:05:31.067 [2024-10-01 15:23:10.447606] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.067 [2024-10-01 15:23:10.482446] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.011 15:23:11 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:32.011 15:23:11 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:32.011 15:23:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2890673 00:05:32.011 15:23:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2890673 00:05:32.011 15:23:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:32.272 lslocks: write error 00:05:32.272 15:23:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2890673 00:05:32.272 15:23:11 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 2890673 ']' 00:05:32.272 15:23:11 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 2890673 00:05:32.272 15:23:11 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:32.272 15:23:11 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:32.272 15:23:11 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2890673 00:05:32.272 15:23:11 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:32.272 15:23:11 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:32.272 15:23:11 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2890673' 00:05:32.272 killing process with pid 2890673 00:05:32.272 15:23:11 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 2890673 00:05:32.272 15:23:11 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # 
wait 2890673 00:05:32.534 15:23:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2890673 00:05:32.534 15:23:11 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:32.534 15:23:11 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2890673 00:05:32.534 15:23:11 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:32.534 15:23:11 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:32.534 15:23:11 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:32.534 15:23:11 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:32.534 15:23:11 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 2890673 00:05:32.534 15:23:11 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 2890673 ']' 00:05:32.534 15:23:11 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.534 15:23:11 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:32.534 15:23:11 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:32.534 15:23:11 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:32.534 15:23:11 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:32.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2890673) - No such process 00:05:32.534 ERROR: process (pid: 2890673) is no longer running 00:05:32.534 15:23:11 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:32.534 15:23:11 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:05:32.534 15:23:11 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:32.534 15:23:11 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:32.534 15:23:11 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:32.534 15:23:11 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:32.534 15:23:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:32.534 15:23:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:32.534 15:23:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:32.534 15:23:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:32.534 00:05:32.534 real 0m1.539s 00:05:32.534 user 0m1.656s 00:05:32.534 sys 0m0.548s 00:05:32.534 15:23:11 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:32.534 15:23:11 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:32.534 ************************************ 00:05:32.534 END TEST default_locks 00:05:32.534 ************************************ 00:05:32.534 15:23:11 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:32.534 15:23:11 event.cpu_locks -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:32.534 15:23:11 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.534 15:23:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:32.534 ************************************ 00:05:32.534 START TEST default_locks_via_rpc 00:05:32.534 ************************************ 00:05:32.534 15:23:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:05:32.534 15:23:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2891000 00:05:32.534 15:23:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2891000 00:05:32.534 15:23:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:32.534 15:23:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2891000 ']' 00:05:32.534 15:23:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.534 15:23:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:32.534 15:23:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.534 15:23:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:32.534 15:23:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.534 [2024-10-01 15:23:11.974308] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
00:05:32.534 [2024-10-01 15:23:11.974367] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2891000 ] 00:05:32.796 [2024-10-01 15:23:12.007961] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:32.796 [2024-10-01 15:23:12.053583] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.796 [2024-10-01 15:23:12.087728] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.369 15:23:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:33.369 15:23:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:33.369 15:23:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:33.369 15:23:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.369 15:23:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.369 15:23:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.369 15:23:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:33.369 15:23:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:33.369 15:23:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:33.369 15:23:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:33.369 15:23:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:33.369 15:23:12 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.369 15:23:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.369 15:23:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.369 15:23:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2891000 00:05:33.369 15:23:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2891000 00:05:33.369 15:23:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:33.940 15:23:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2891000 00:05:33.940 15:23:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 2891000 ']' 00:05:33.940 15:23:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 2891000 00:05:33.940 15:23:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:05:33.940 15:23:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:33.940 15:23:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2891000 00:05:33.940 15:23:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:33.940 15:23:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:33.940 15:23:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2891000' 00:05:33.940 killing process with pid 2891000 00:05:33.940 15:23:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 2891000 00:05:33.940 15:23:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 2891000 00:05:34.202 00:05:34.202 real 0m1.508s 00:05:34.202 user 
0m1.605s 00:05:34.202 sys 0m0.544s 00:05:34.202 15:23:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:34.202 15:23:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.202 ************************************ 00:05:34.202 END TEST default_locks_via_rpc 00:05:34.202 ************************************ 00:05:34.202 15:23:13 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:34.202 15:23:13 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:34.202 15:23:13 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:34.202 15:23:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:34.202 ************************************ 00:05:34.202 START TEST non_locking_app_on_locked_coremask 00:05:34.202 ************************************ 00:05:34.202 15:23:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:05:34.202 15:23:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2891293 00:05:34.202 15:23:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2891293 /var/tmp/spdk.sock 00:05:34.202 15:23:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:34.202 15:23:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2891293 ']' 00:05:34.202 15:23:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.202 15:23:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:34.202 15:23:13 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.202 15:23:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:34.202 15:23:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:34.202 [2024-10-01 15:23:13.556563] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:05:34.202 [2024-10-01 15:23:13.556622] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2891293 ] 00:05:34.202 [2024-10-01 15:23:13.589888] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
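The recurring `locks_exist` check in these tests pipes `lslocks -p PID` into `grep -q spdk_cpu_lock`: spdk_tgt flocks one lock file per claimed CPU core, and the test asserts the kernel lock table shows it. A sketch of that check, plus the grep half exercised against a canned, invented `lslocks`-style line (real output requires util-linux and a live spdk_tgt):

```shell
#!/usr/bin/env bash
# locks_exist: does PID hold an spdk_cpu_lock file, per the kernel lock table?
locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock
}

# The filtering step in isolation, fed synthetic lslocks-shaped input.
has_cpu_lock_line() {
    grep -q spdk_cpu_lock
}

# Sample line below is fabricated for illustration, not from a real run.
printf 'spdk_tgt 2890673 FLOCK 0B WRITE 0 0 0 /var/tmp/spdk_cpu_lock_001\n' \
    | has_cpu_lock_line && echo "cpu lock held"
```

The "lslocks: write error" lines in the trace are benign: `grep -q` exits on the first match and closes the pipe, so `lslocks` sees EPIPE.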
00:05:34.202 [2024-10-01 15:23:13.636766] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.463 [2024-10-01 15:23:13.669162] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.034 15:23:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:35.034 15:23:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:35.034 15:23:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2891532 00:05:35.034 15:23:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2891532 /var/tmp/spdk2.sock 00:05:35.034 15:23:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:35.034 15:23:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2891532 ']' 00:05:35.034 15:23:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:35.034 15:23:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:35.034 15:23:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:35.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:35.034 15:23:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:35.034 15:23:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:35.034 [2024-10-01 15:23:14.395293] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:05:35.034 [2024-10-01 15:23:14.395349] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2891532 ] 00:05:35.034 [2024-10-01 15:23:14.424645] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:35.034 [2024-10-01 15:23:14.465482] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:35.034 [2024-10-01 15:23:14.465501] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.295 [2024-10-01 15:23:14.526182] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.866 15:23:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:35.866 15:23:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:35.866 15:23:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2891293 00:05:35.866 15:23:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2891293 00:05:35.866 15:23:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:36.439 lslocks: write error 00:05:36.439 15:23:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2891293 00:05:36.439 15:23:15 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2891293 ']' 00:05:36.439 15:23:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2891293 00:05:36.439 15:23:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:36.439 15:23:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:36.439 15:23:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2891293 00:05:36.439 15:23:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:36.439 15:23:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:36.439 15:23:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2891293' 00:05:36.439 killing process with pid 2891293 00:05:36.439 15:23:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2891293 00:05:36.439 15:23:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2891293 00:05:37.039 15:23:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2891532 00:05:37.039 15:23:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2891532 ']' 00:05:37.039 15:23:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2891532 00:05:37.039 15:23:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:37.039 15:23:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:37.039 15:23:16 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2891532 00:05:37.039 15:23:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:37.039 15:23:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:37.039 15:23:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2891532' 00:05:37.039 killing process with pid 2891532 00:05:37.039 15:23:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2891532 00:05:37.039 15:23:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2891532 00:05:37.039 00:05:37.039 real 0m2.963s 00:05:37.039 user 0m3.258s 00:05:37.039 sys 0m0.957s 00:05:37.039 15:23:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:37.039 15:23:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:37.039 ************************************ 00:05:37.039 END TEST non_locking_app_on_locked_coremask 00:05:37.039 ************************************ 00:05:37.300 15:23:16 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:37.300 15:23:16 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:37.300 15:23:16 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:37.300 15:23:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:37.300 ************************************ 00:05:37.300 START TEST locking_app_on_unlocked_coremask 00:05:37.300 ************************************ 00:05:37.300 15:23:16 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:05:37.300 15:23:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2891909 00:05:37.300 15:23:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2891909 /var/tmp/spdk.sock 00:05:37.300 15:23:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:37.300 15:23:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2891909 ']' 00:05:37.300 15:23:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.300 15:23:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:37.300 15:23:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.300 15:23:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:37.300 15:23:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:37.300 [2024-10-01 15:23:16.590820] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
00:05:37.300 [2024-10-01 15:23:16.590876] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2891909 ] 00:05:37.300 [2024-10-01 15:23:16.623428] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:37.300 [2024-10-01 15:23:16.669498] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:37.300 [2024-10-01 15:23:16.669522] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.300 [2024-10-01 15:23:16.702244] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.245 15:23:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:38.245 15:23:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:38.245 15:23:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2892235 00:05:38.245 15:23:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2892235 /var/tmp/spdk2.sock 00:05:38.245 15:23:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2892235 ']' 00:05:38.245 15:23:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:38.245 15:23:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:38.245 15:23:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:38.245 15:23:17 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:38.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:38.245 15:23:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:38.245 15:23:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:38.245 [2024-10-01 15:23:17.435052] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:05:38.245 [2024-10-01 15:23:17.435108] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2892235 ] 00:05:38.245 [2024-10-01 15:23:17.467253] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:38.245 [2024-10-01 15:23:17.509116] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.245 [2024-10-01 15:23:17.565830] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.817 15:23:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:38.817 15:23:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:38.817 15:23:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2892235 00:05:38.817 15:23:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2892235 00:05:38.817 15:23:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:39.390 lslocks: write error 00:05:39.390 15:23:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2891909 00:05:39.390 15:23:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2891909 ']' 00:05:39.390 15:23:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 2891909 00:05:39.390 15:23:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:39.390 15:23:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:39.390 15:23:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2891909 00:05:39.390 15:23:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:39.390 15:23:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:39.390 15:23:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 2891909' 00:05:39.390 killing process with pid 2891909 00:05:39.390 15:23:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 2891909 00:05:39.390 15:23:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 2891909 00:05:39.961 15:23:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2892235 00:05:39.961 15:23:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2892235 ']' 00:05:39.961 15:23:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 2892235 00:05:39.961 15:23:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:39.961 15:23:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:39.961 15:23:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2892235 00:05:39.961 15:23:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:39.961 15:23:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:39.961 15:23:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2892235' 00:05:39.961 killing process with pid 2892235 00:05:39.961 15:23:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 2892235 00:05:39.961 15:23:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 2892235 00:05:39.961 00:05:39.961 real 0m2.826s 00:05:39.961 user 0m3.127s 00:05:39.961 sys 0m0.885s 00:05:39.961 15:23:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:05:39.961 15:23:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:39.961 ************************************ 00:05:39.961 END TEST locking_app_on_unlocked_coremask 00:05:39.961 ************************************ 00:05:39.961 15:23:19 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:39.961 15:23:19 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:39.961 15:23:19 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.961 15:23:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:40.221 ************************************ 00:05:40.221 START TEST locking_app_on_locked_coremask 00:05:40.221 ************************************ 00:05:40.221 15:23:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:05:40.221 15:23:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2892612 00:05:40.221 15:23:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2892612 /var/tmp/spdk.sock 00:05:40.221 15:23:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:40.221 15:23:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2892612 ']' 00:05:40.221 15:23:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.221 15:23:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:40.221 15:23:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:05:40.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.221 15:23:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:40.221 15:23:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:40.221 [2024-10-01 15:23:19.495601] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:05:40.222 [2024-10-01 15:23:19.495648] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2892612 ] 00:05:40.222 [2024-10-01 15:23:19.525652] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:40.222 [2024-10-01 15:23:19.572385] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.222 [2024-10-01 15:23:19.600576] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.164 15:23:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:41.164 15:23:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:41.164 15:23:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2892706 00:05:41.164 15:23:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2892706 /var/tmp/spdk2.sock 00:05:41.164 15:23:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:41.164 15:23:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 
00:05:41.164 15:23:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2892706 /var/tmp/spdk2.sock 00:05:41.164 15:23:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:41.164 15:23:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:41.164 15:23:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:41.164 15:23:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:41.164 15:23:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2892706 /var/tmp/spdk2.sock 00:05:41.164 15:23:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2892706 ']' 00:05:41.164 15:23:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:41.164 15:23:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:41.164 15:23:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:41.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:41.164 15:23:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:41.164 15:23:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:41.164 [2024-10-01 15:23:20.342560] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
00:05:41.164 [2024-10-01 15:23:20.342616] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2892706 ] 00:05:41.164 [2024-10-01 15:23:20.374318] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:41.164 [2024-10-01 15:23:20.416873] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2892612 has claimed it. 00:05:41.164 [2024-10-01 15:23:20.416907] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:41.736 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2892706) - No such process 00:05:41.736 ERROR: process (pid: 2892706) is no longer running 00:05:41.736 15:23:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:41.736 15:23:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:41.736 15:23:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:41.736 15:23:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:41.736 15:23:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:41.736 15:23:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:41.736 15:23:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2892612 00:05:41.736 15:23:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2892612 00:05:41.736 15:23:20 event.cpu_locks.locking_app_on_locked_coremask -- 
event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:42.308 lslocks: write error 00:05:42.309 15:23:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2892612 00:05:42.309 15:23:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2892612 ']' 00:05:42.309 15:23:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2892612 00:05:42.309 15:23:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:42.309 15:23:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:42.309 15:23:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2892612 00:05:42.309 15:23:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:42.309 15:23:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:42.309 15:23:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2892612' 00:05:42.309 killing process with pid 2892612 00:05:42.309 15:23:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2892612 00:05:42.309 15:23:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2892612 00:05:42.309 00:05:42.309 real 0m2.312s 00:05:42.309 user 0m2.598s 00:05:42.309 sys 0m0.669s 00:05:42.309 15:23:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:42.309 15:23:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:42.309 ************************************ 00:05:42.309 END TEST locking_app_on_locked_coremask 00:05:42.309 
************************************ 00:05:42.571 15:23:21 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:42.571 15:23:21 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:42.571 15:23:21 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:42.571 15:23:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:42.571 ************************************ 00:05:42.571 START TEST locking_overlapped_coremask 00:05:42.571 ************************************ 00:05:42.571 15:23:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:05:42.571 15:23:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2893008 00:05:42.571 15:23:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2893008 /var/tmp/spdk.sock 00:05:42.571 15:23:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:42.571 15:23:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 2893008 ']' 00:05:42.571 15:23:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.571 15:23:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:42.571 15:23:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:42.571 15:23:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:42.571 15:23:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:42.571 [2024-10-01 15:23:21.895279] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:05:42.571 [2024-10-01 15:23:21.895346] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2893008 ] 00:05:42.571 [2024-10-01 15:23:21.929642] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:42.571 [2024-10-01 15:23:21.976269] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:42.571 [2024-10-01 15:23:22.020151] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.571 [2024-10-01 15:23:22.020307] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.571 [2024-10-01 15:23:22.020309] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:43.512 15:23:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:43.512 15:23:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:43.512 15:23:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2893324 00:05:43.512 15:23:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2893324 /var/tmp/spdk2.sock 00:05:43.512 15:23:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:43.512 15:23:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:43.512 15:23:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2893324 /var/tmp/spdk2.sock 00:05:43.512 15:23:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:43.512 15:23:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:43.512 15:23:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:43.512 15:23:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:43.512 15:23:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2893324 /var/tmp/spdk2.sock 00:05:43.512 15:23:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 2893324 ']' 00:05:43.512 15:23:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:43.512 15:23:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:43.512 15:23:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:43.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:43.512 15:23:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:43.512 15:23:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:43.512 [2024-10-01 15:23:22.749010] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
00:05:43.512 [2024-10-01 15:23:22.749065] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2893324 ] 00:05:43.512 [2024-10-01 15:23:22.780647] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:43.512 [2024-10-01 15:23:22.842976] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2893008 has claimed it. 00:05:43.512 [2024-10-01 15:23:22.843011] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:44.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2893324) - No such process 00:05:44.082 ERROR: process (pid: 2893324) is no longer running 00:05:44.082 15:23:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:44.082 15:23:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:44.082 15:23:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:44.082 15:23:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:44.082 15:23:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:44.082 15:23:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:44.082 15:23:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:44.082 15:23:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:44.082 15:23:23 event.cpu_locks.locking_overlapped_coremask -- 
event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:44.082 15:23:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:44.082 15:23:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2893008 00:05:44.082 15:23:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 2893008 ']' 00:05:44.082 15:23:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 2893008 00:05:44.082 15:23:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:05:44.082 15:23:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:44.082 15:23:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2893008 00:05:44.082 15:23:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:44.082 15:23:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:44.082 15:23:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2893008' 00:05:44.082 killing process with pid 2893008 00:05:44.082 15:23:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 2893008 00:05:44.082 15:23:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 2893008 00:05:44.342 00:05:44.342 real 0m1.772s 00:05:44.342 user 0m5.033s 00:05:44.342 sys 0m0.424s 00:05:44.342 15:23:23 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:05:44.342 15:23:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:44.342 ************************************ 00:05:44.342 END TEST locking_overlapped_coremask 00:05:44.342 ************************************ 00:05:44.342 15:23:23 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:44.342 15:23:23 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:44.342 15:23:23 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:44.342 15:23:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.342 ************************************ 00:05:44.342 START TEST locking_overlapped_coremask_via_rpc 00:05:44.342 ************************************ 00:05:44.342 15:23:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:05:44.342 15:23:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2893438 00:05:44.342 15:23:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2893438 /var/tmp/spdk.sock 00:05:44.342 15:23:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:44.342 15:23:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2893438 ']' 00:05:44.343 15:23:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.343 15:23:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:44.343 15:23:23 event.cpu_locks.locking_overlapped_coremask_via_rpc 
-- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.343 15:23:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:44.343 15:23:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.343 [2024-10-01 15:23:23.732482] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:05:44.343 [2024-10-01 15:23:23.732539] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2893438 ] 00:05:44.343 [2024-10-01 15:23:23.766030] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:44.603 [2024-10-01 15:23:23.813898] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:44.603 [2024-10-01 15:23:23.813920] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:44.603 [2024-10-01 15:23:23.846982] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.603 [2024-10-01 15:23:23.847322] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.603 [2024-10-01 15:23:23.847322] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:45.171 15:23:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:45.171 15:23:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:45.171 15:23:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2893701 00:05:45.171 15:23:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2893701 /var/tmp/spdk2.sock 00:05:45.172 15:23:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2893701 ']' 00:05:45.172 15:23:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:45.172 15:23:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:45.172 15:23:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:45.172 15:23:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:45.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:45.172 15:23:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:45.172 15:23:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.172 [2024-10-01 15:23:24.591574] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:05:45.172 [2024-10-01 15:23:24.591627] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2893701 ] 00:05:45.172 [2024-10-01 15:23:24.623680] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:45.432 [2024-10-01 15:23:24.681856] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:45.432 [2024-10-01 15:23:24.681878] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:45.432 [2024-10-01 15:23:24.749985] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:45.432 [2024-10-01 15:23:24.750141] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:45.432 [2024-10-01 15:23:24.750143] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:05:46.001 15:23:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:46.001 15:23:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:46.001 15:23:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:46.001 15:23:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.001 15:23:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:05:46.001 15:23:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.001 15:23:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:46.001 15:23:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:46.001 15:23:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:46.001 15:23:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:46.001 15:23:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:46.001 15:23:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:46.001 15:23:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:46.001 15:23:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:46.001 15:23:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.001 15:23:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.001 [2024-10-01 15:23:25.385975] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2893438 has claimed it. 
00:05:46.002 request: 00:05:46.002 { 00:05:46.002 "method": "framework_enable_cpumask_locks", 00:05:46.002 "req_id": 1 00:05:46.002 } 00:05:46.002 Got JSON-RPC error response 00:05:46.002 response: 00:05:46.002 { 00:05:46.002 "code": -32603, 00:05:46.002 "message": "Failed to claim CPU core: 2" 00:05:46.002 } 00:05:46.002 15:23:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:46.002 15:23:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:46.002 15:23:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:46.002 15:23:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:46.002 15:23:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:46.002 15:23:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2893438 /var/tmp/spdk.sock 00:05:46.002 15:23:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2893438 ']' 00:05:46.002 15:23:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.002 15:23:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:46.002 15:23:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:46.002 15:23:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:46.002 15:23:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.262 15:23:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:46.262 15:23:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:46.262 15:23:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2893701 /var/tmp/spdk2.sock 00:05:46.262 15:23:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2893701 ']' 00:05:46.262 15:23:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:46.262 15:23:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:46.262 15:23:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:46.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:46.262 15:23:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:46.262 15:23:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.523 15:23:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:46.523 15:23:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:46.523 15:23:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:46.523 15:23:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:46.523 15:23:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:46.523 15:23:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:46.523 00:05:46.523 real 0m2.087s 00:05:46.523 user 0m0.873s 00:05:46.523 sys 0m0.138s 00:05:46.523 15:23:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.523 15:23:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.523 ************************************ 00:05:46.523 END TEST locking_overlapped_coremask_via_rpc 00:05:46.523 ************************************ 00:05:46.523 15:23:25 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:46.523 15:23:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2893438 ]] 00:05:46.523 15:23:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 2893438 00:05:46.523 15:23:25 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2893438 ']' 00:05:46.523 15:23:25 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2893438 00:05:46.523 15:23:25 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:46.523 15:23:25 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:46.523 15:23:25 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2893438 00:05:46.523 15:23:25 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:46.523 15:23:25 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:46.523 15:23:25 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2893438' 00:05:46.523 killing process with pid 2893438 00:05:46.523 15:23:25 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 2893438 00:05:46.523 15:23:25 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 2893438 00:05:46.782 15:23:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2893701 ]] 00:05:46.782 15:23:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2893701 00:05:46.782 15:23:26 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2893701 ']' 00:05:46.782 15:23:26 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2893701 00:05:46.782 15:23:26 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:46.782 15:23:26 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:46.782 15:23:26 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2893701 00:05:46.782 15:23:26 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:46.783 15:23:26 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:46.783 15:23:26 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
2893701' 00:05:46.783 killing process with pid 2893701 00:05:46.783 15:23:26 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 2893701 00:05:46.783 15:23:26 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 2893701 00:05:47.043 15:23:26 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:47.043 15:23:26 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:47.043 15:23:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2893438 ]] 00:05:47.043 15:23:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2893438 00:05:47.043 15:23:26 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2893438 ']' 00:05:47.043 15:23:26 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2893438 00:05:47.043 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2893438) - No such process 00:05:47.043 15:23:26 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 2893438 is not found' 00:05:47.043 Process with pid 2893438 is not found 00:05:47.043 15:23:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2893701 ]] 00:05:47.043 15:23:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2893701 00:05:47.043 15:23:26 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2893701 ']' 00:05:47.043 15:23:26 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2893701 00:05:47.043 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2893701) - No such process 00:05:47.043 15:23:26 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 2893701 is not found' 00:05:47.043 Process with pid 2893701 is not found 00:05:47.043 15:23:26 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:47.043 00:05:47.043 real 0m16.264s 00:05:47.043 user 0m28.207s 00:05:47.043 sys 0m5.144s 00:05:47.043 15:23:26 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:47.043 
15:23:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:47.043 ************************************ 00:05:47.043 END TEST cpu_locks 00:05:47.043 ************************************ 00:05:47.043 00:05:47.043 real 0m42.178s 00:05:47.043 user 1m23.079s 00:05:47.043 sys 0m8.534s 00:05:47.043 15:23:26 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:47.043 15:23:26 event -- common/autotest_common.sh@10 -- # set +x 00:05:47.043 ************************************ 00:05:47.043 END TEST event 00:05:47.043 ************************************ 00:05:47.043 15:23:26 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:47.043 15:23:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:47.043 15:23:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.043 15:23:26 -- common/autotest_common.sh@10 -- # set +x 00:05:47.043 ************************************ 00:05:47.043 START TEST thread 00:05:47.043 ************************************ 00:05:47.043 15:23:26 thread -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:47.303 * Looking for test storage... 
00:05:47.303 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:47.303 15:23:26 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:47.303 15:23:26 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:05:47.303 15:23:26 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:47.303 15:23:26 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:47.303 15:23:26 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.303 15:23:26 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.303 15:23:26 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.303 15:23:26 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.303 15:23:26 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.303 15:23:26 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.303 15:23:26 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.303 15:23:26 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.303 15:23:26 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.303 15:23:26 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.303 15:23:26 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.303 15:23:26 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:47.303 15:23:26 thread -- scripts/common.sh@345 -- # : 1 00:05:47.303 15:23:26 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.303 15:23:26 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:47.303 15:23:26 thread -- scripts/common.sh@365 -- # decimal 1 00:05:47.303 15:23:26 thread -- scripts/common.sh@353 -- # local d=1 00:05:47.303 15:23:26 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.303 15:23:26 thread -- scripts/common.sh@355 -- # echo 1 00:05:47.303 15:23:26 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.303 15:23:26 thread -- scripts/common.sh@366 -- # decimal 2 00:05:47.303 15:23:26 thread -- scripts/common.sh@353 -- # local d=2 00:05:47.303 15:23:26 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.303 15:23:26 thread -- scripts/common.sh@355 -- # echo 2 00:05:47.303 15:23:26 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.303 15:23:26 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.303 15:23:26 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.303 15:23:26 thread -- scripts/common.sh@368 -- # return 0 00:05:47.303 15:23:26 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.303 15:23:26 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:47.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.303 --rc genhtml_branch_coverage=1 00:05:47.303 --rc genhtml_function_coverage=1 00:05:47.303 --rc genhtml_legend=1 00:05:47.303 --rc geninfo_all_blocks=1 00:05:47.303 --rc geninfo_unexecuted_blocks=1 00:05:47.303 00:05:47.303 ' 00:05:47.303 15:23:26 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:47.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.303 --rc genhtml_branch_coverage=1 00:05:47.303 --rc genhtml_function_coverage=1 00:05:47.303 --rc genhtml_legend=1 00:05:47.303 --rc geninfo_all_blocks=1 00:05:47.303 --rc geninfo_unexecuted_blocks=1 00:05:47.303 00:05:47.303 ' 00:05:47.303 15:23:26 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:47.303 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.303 --rc genhtml_branch_coverage=1 00:05:47.303 --rc genhtml_function_coverage=1 00:05:47.303 --rc genhtml_legend=1 00:05:47.303 --rc geninfo_all_blocks=1 00:05:47.303 --rc geninfo_unexecuted_blocks=1 00:05:47.303 00:05:47.303 ' 00:05:47.303 15:23:26 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:47.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.303 --rc genhtml_branch_coverage=1 00:05:47.303 --rc genhtml_function_coverage=1 00:05:47.303 --rc genhtml_legend=1 00:05:47.303 --rc geninfo_all_blocks=1 00:05:47.303 --rc geninfo_unexecuted_blocks=1 00:05:47.303 00:05:47.303 ' 00:05:47.303 15:23:26 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:47.303 15:23:26 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:47.303 15:23:26 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.303 15:23:26 thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.303 ************************************ 00:05:47.303 START TEST thread_poller_perf 00:05:47.303 ************************************ 00:05:47.303 15:23:26 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:47.303 [2024-10-01 15:23:26.703245] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:05:47.303 [2024-10-01 15:23:26.703330] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2894151 ] 00:05:47.303 [2024-10-01 15:23:26.738167] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:05:47.563 [2024-10-01 15:23:26.786378] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.563 [2024-10-01 15:23:26.816316] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.563 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:48.503 ====================================== 00:05:48.503 busy:2405683146 (cyc) 00:05:48.503 total_run_count: 418000 00:05:48.503 tsc_hz: 2400000000 (cyc) 00:05:48.503 ====================================== 00:05:48.503 poller_cost: 5755 (cyc), 2397 (nsec) 00:05:48.503 00:05:48.503 real 0m1.176s 00:05:48.503 user 0m1.080s 00:05:48.503 sys 0m0.092s 00:05:48.503 15:23:27 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.503 15:23:27 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:48.503 ************************************ 00:05:48.503 END TEST thread_poller_perf 00:05:48.503 ************************************ 00:05:48.503 15:23:27 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:48.503 15:23:27 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:48.503 15:23:27 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.503 15:23:27 thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.503 ************************************ 00:05:48.503 START TEST thread_poller_perf 00:05:48.503 ************************************ 00:05:48.503 15:23:27 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:48.763 [2024-10-01 15:23:27.957926] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
00:05:48.763 [2024-10-01 15:23:27.958013] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2894500 ] 00:05:48.763 [2024-10-01 15:23:27.993114] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:48.763 [2024-10-01 15:23:28.040691] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.763 [2024-10-01 15:23:28.070372] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.763 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:49.702 ====================================== 00:05:49.702 busy:2401330240 (cyc) 00:05:49.702 total_run_count: 5554000 00:05:49.702 tsc_hz: 2400000000 (cyc) 00:05:49.702 ====================================== 00:05:49.702 poller_cost: 432 (cyc), 180 (nsec) 00:05:49.702 00:05:49.702 real 0m1.172s 00:05:49.702 user 0m1.084s 00:05:49.702 sys 0m0.085s 00:05:49.702 15:23:29 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:49.702 15:23:29 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:49.702 ************************************ 00:05:49.702 END TEST thread_poller_perf 00:05:49.702 ************************************ 00:05:49.702 15:23:29 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:49.702 00:05:49.702 real 0m2.702s 00:05:49.702 user 0m2.333s 00:05:49.702 sys 0m0.385s 00:05:49.702 15:23:29 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:49.702 15:23:29 thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.702 ************************************ 00:05:49.702 END TEST thread 00:05:49.702 ************************************ 00:05:49.962 15:23:29 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:49.962 
15:23:29 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:49.962 15:23:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:49.962 15:23:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.962 15:23:29 -- common/autotest_common.sh@10 -- # set +x 00:05:49.962 ************************************ 00:05:49.962 START TEST app_cmdline 00:05:49.962 ************************************ 00:05:49.962 15:23:29 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:49.962 * Looking for test storage... 00:05:49.962 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:49.962 15:23:29 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:49.962 15:23:29 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:05:49.962 15:23:29 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:49.962 15:23:29 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:49.962 15:23:29 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:49.962 15:23:29 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:49.962 15:23:29 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:49.962 15:23:29 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.962 15:23:29 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:49.962 15:23:29 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:49.962 15:23:29 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:49.962 15:23:29 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:49.962 15:23:29 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:49.962 15:23:29 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:49.962 15:23:29 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:49.962 15:23:29 
app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:49.962 15:23:29 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:49.962 15:23:29 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:49.962 15:23:29 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:49.962 15:23:29 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:49.962 15:23:29 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:49.962 15:23:29 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.962 15:23:29 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:49.962 15:23:29 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.222 15:23:29 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:50.222 15:23:29 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:50.222 15:23:29 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.222 15:23:29 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:50.222 15:23:29 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.222 15:23:29 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.222 15:23:29 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.222 15:23:29 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:50.222 15:23:29 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.222 15:23:29 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:50.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.222 --rc genhtml_branch_coverage=1 00:05:50.222 --rc genhtml_function_coverage=1 00:05:50.222 --rc genhtml_legend=1 00:05:50.222 --rc geninfo_all_blocks=1 00:05:50.222 --rc geninfo_unexecuted_blocks=1 00:05:50.222 00:05:50.222 ' 00:05:50.222 15:23:29 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:50.222 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:50.222 --rc genhtml_branch_coverage=1 00:05:50.222 --rc genhtml_function_coverage=1 00:05:50.222 --rc genhtml_legend=1 00:05:50.222 --rc geninfo_all_blocks=1 00:05:50.222 --rc geninfo_unexecuted_blocks=1 00:05:50.222 00:05:50.222 ' 00:05:50.222 15:23:29 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:50.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.222 --rc genhtml_branch_coverage=1 00:05:50.222 --rc genhtml_function_coverage=1 00:05:50.222 --rc genhtml_legend=1 00:05:50.222 --rc geninfo_all_blocks=1 00:05:50.222 --rc geninfo_unexecuted_blocks=1 00:05:50.222 00:05:50.222 ' 00:05:50.222 15:23:29 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:50.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.222 --rc genhtml_branch_coverage=1 00:05:50.222 --rc genhtml_function_coverage=1 00:05:50.222 --rc genhtml_legend=1 00:05:50.222 --rc geninfo_all_blocks=1 00:05:50.222 --rc geninfo_unexecuted_blocks=1 00:05:50.222 00:05:50.222 ' 00:05:50.222 15:23:29 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:50.222 15:23:29 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2894900 00:05:50.222 15:23:29 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2894900 00:05:50.222 15:23:29 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:50.222 15:23:29 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 2894900 ']' 00:05:50.222 15:23:29 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.222 15:23:29 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:50.222 15:23:29 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:50.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.222 15:23:29 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:50.222 15:23:29 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:50.222 [2024-10-01 15:23:29.486756] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:05:50.222 [2024-10-01 15:23:29.486831] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2894900 ] 00:05:50.222 [2024-10-01 15:23:29.521247] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:50.222 [2024-10-01 15:23:29.566966] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.222 [2024-10-01 15:23:29.609304] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.160 15:23:30 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:51.160 15:23:30 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:05:51.160 15:23:30 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:51.160 { 00:05:51.160 "version": "SPDK v25.01-pre git sha1 09cc66129", 00:05:51.160 "fields": { 00:05:51.160 "major": 25, 00:05:51.160 "minor": 1, 00:05:51.160 "patch": 0, 00:05:51.160 "suffix": "-pre", 00:05:51.160 "commit": "09cc66129" 00:05:51.160 } 00:05:51.160 } 00:05:51.160 15:23:30 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:51.160 15:23:30 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:51.160 15:23:30 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:51.160 15:23:30 app_cmdline -- app/cmdline.sh@26 
-- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:51.160 15:23:30 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:51.160 15:23:30 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:51.160 15:23:30 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.160 15:23:30 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:51.160 15:23:30 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:51.160 15:23:30 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.160 15:23:30 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:51.160 15:23:30 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:51.160 15:23:30 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:51.160 15:23:30 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:05:51.160 15:23:30 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:51.161 15:23:30 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:51.161 15:23:30 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:51.161 15:23:30 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:51.161 15:23:30 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:51.161 15:23:30 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:51.161 15:23:30 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:51.161 15:23:30 app_cmdline -- common/autotest_common.sh@644 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:51.161 15:23:30 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:51.161 15:23:30 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:51.421 request: 00:05:51.421 { 00:05:51.421 "method": "env_dpdk_get_mem_stats", 00:05:51.421 "req_id": 1 00:05:51.421 } 00:05:51.421 Got JSON-RPC error response 00:05:51.421 response: 00:05:51.421 { 00:05:51.421 "code": -32601, 00:05:51.421 "message": "Method not found" 00:05:51.421 } 00:05:51.421 15:23:30 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:05:51.421 15:23:30 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:51.421 15:23:30 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:51.421 15:23:30 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:51.421 15:23:30 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2894900 00:05:51.421 15:23:30 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 2894900 ']' 00:05:51.421 15:23:30 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 2894900 00:05:51.421 15:23:30 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:05:51.421 15:23:30 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:51.421 15:23:30 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2894900 00:05:51.421 15:23:30 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:51.421 15:23:30 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:51.421 15:23:30 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2894900' 00:05:51.421 killing process with pid 2894900 00:05:51.421 15:23:30 app_cmdline -- common/autotest_common.sh@969 -- # kill 2894900 00:05:51.421 15:23:30 
app_cmdline -- common/autotest_common.sh@974 -- # wait 2894900 00:05:51.682 00:05:51.682 real 0m1.717s 00:05:51.682 user 0m2.047s 00:05:51.682 sys 0m0.476s 00:05:51.682 15:23:30 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:51.682 15:23:30 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:51.682 ************************************ 00:05:51.682 END TEST app_cmdline 00:05:51.682 ************************************ 00:05:51.682 15:23:30 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:51.682 15:23:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:51.682 15:23:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:51.682 15:23:30 -- common/autotest_common.sh@10 -- # set +x 00:05:51.682 ************************************ 00:05:51.682 START TEST version 00:05:51.682 ************************************ 00:05:51.682 15:23:31 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:51.682 * Looking for test storage... 
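The app_cmdline run above deliberately calls `env_dpdk_get_mem_stats` on a target that does not expose it, and the `NOT` helper expects the resulting JSON-RPC error. A minimal sketch of checking for that error code, using only sed so there is no jq dependency (the response text is reproduced from the log; this is illustrative, not part of the test suite itself):

```shell
# Extract the JSON-RPC error code from a "Method not found" response
# like the one logged above. -32601 is the standard JSON-RPC code for
# an unknown method.
response='{ "code": -32601, "message": "Method not found" }'
code=$(printf '%s' "$response" | sed -n 's/.*"code": *\(-*[0-9]*\).*/\1/p')
if [ "$code" = "-32601" ]; then
  echo "method not found, as the NOT helper expects"
fi
```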
00:05:51.682 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:51.682 15:23:31 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:51.682 15:23:31 version -- common/autotest_common.sh@1681 -- # lcov --version 00:05:51.682 15:23:31 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:51.943 15:23:31 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:51.943 15:23:31 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:51.943 15:23:31 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.943 15:23:31 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.943 15:23:31 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.943 15:23:31 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.943 15:23:31 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.943 15:23:31 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.943 15:23:31 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.943 15:23:31 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.943 15:23:31 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.943 15:23:31 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.943 15:23:31 version -- scripts/common.sh@344 -- # case "$op" in 00:05:51.943 15:23:31 version -- scripts/common.sh@345 -- # : 1 00:05:51.943 15:23:31 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.943 15:23:31 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:51.943 15:23:31 version -- scripts/common.sh@365 -- # decimal 1 00:05:51.943 15:23:31 version -- scripts/common.sh@353 -- # local d=1 00:05:51.943 15:23:31 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.943 15:23:31 version -- scripts/common.sh@355 -- # echo 1 00:05:51.943 15:23:31 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.943 15:23:31 version -- scripts/common.sh@366 -- # decimal 2 00:05:51.943 15:23:31 version -- scripts/common.sh@353 -- # local d=2 00:05:51.943 15:23:31 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.943 15:23:31 version -- scripts/common.sh@355 -- # echo 2 00:05:51.943 15:23:31 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:51.943 15:23:31 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:51.943 15:23:31 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:51.943 15:23:31 version -- scripts/common.sh@368 -- # return 0 00:05:51.943 15:23:31 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.943 15:23:31 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:51.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.943 --rc genhtml_branch_coverage=1 00:05:51.943 --rc genhtml_function_coverage=1 00:05:51.943 --rc genhtml_legend=1 00:05:51.943 --rc geninfo_all_blocks=1 00:05:51.943 --rc geninfo_unexecuted_blocks=1 00:05:51.943 00:05:51.943 ' 00:05:51.943 15:23:31 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:51.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.943 --rc genhtml_branch_coverage=1 00:05:51.943 --rc genhtml_function_coverage=1 00:05:51.943 --rc genhtml_legend=1 00:05:51.943 --rc geninfo_all_blocks=1 00:05:51.943 --rc geninfo_unexecuted_blocks=1 00:05:51.943 00:05:51.943 ' 00:05:51.943 15:23:31 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:51.943 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.943 --rc genhtml_branch_coverage=1 00:05:51.943 --rc genhtml_function_coverage=1 00:05:51.943 --rc genhtml_legend=1 00:05:51.943 --rc geninfo_all_blocks=1 00:05:51.943 --rc geninfo_unexecuted_blocks=1 00:05:51.943 00:05:51.943 ' 00:05:51.943 15:23:31 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:51.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.943 --rc genhtml_branch_coverage=1 00:05:51.943 --rc genhtml_function_coverage=1 00:05:51.943 --rc genhtml_legend=1 00:05:51.943 --rc geninfo_all_blocks=1 00:05:51.943 --rc geninfo_unexecuted_blocks=1 00:05:51.943 00:05:51.943 ' 00:05:51.943 15:23:31 version -- app/version.sh@17 -- # get_header_version major 00:05:51.943 15:23:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:51.943 15:23:31 version -- app/version.sh@14 -- # cut -f2 00:05:51.943 15:23:31 version -- app/version.sh@14 -- # tr -d '"' 00:05:51.943 15:23:31 version -- app/version.sh@17 -- # major=25 00:05:51.943 15:23:31 version -- app/version.sh@18 -- # get_header_version minor 00:05:51.943 15:23:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:51.943 15:23:31 version -- app/version.sh@14 -- # cut -f2 00:05:51.943 15:23:31 version -- app/version.sh@14 -- # tr -d '"' 00:05:51.943 15:23:31 version -- app/version.sh@18 -- # minor=1 00:05:51.943 15:23:31 version -- app/version.sh@19 -- # get_header_version patch 00:05:51.943 15:23:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:51.943 15:23:31 version -- app/version.sh@14 -- # cut -f2 00:05:51.943 15:23:31 version -- app/version.sh@14 -- # tr -d '"' 00:05:51.943 
15:23:31 version -- app/version.sh@19 -- # patch=0 00:05:51.943 15:23:31 version -- app/version.sh@20 -- # get_header_version suffix 00:05:51.943 15:23:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:51.943 15:23:31 version -- app/version.sh@14 -- # cut -f2 00:05:51.943 15:23:31 version -- app/version.sh@14 -- # tr -d '"' 00:05:51.943 15:23:31 version -- app/version.sh@20 -- # suffix=-pre 00:05:51.943 15:23:31 version -- app/version.sh@22 -- # version=25.1 00:05:51.943 15:23:31 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:51.943 15:23:31 version -- app/version.sh@28 -- # version=25.1rc0 00:05:51.943 15:23:31 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:51.943 15:23:31 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:51.943 15:23:31 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:51.943 15:23:31 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:51.943 00:05:51.943 real 0m0.276s 00:05:51.943 user 0m0.165s 00:05:51.943 sys 0m0.158s 00:05:51.943 15:23:31 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:51.943 15:23:31 version -- common/autotest_common.sh@10 -- # set +x 00:05:51.943 ************************************ 00:05:51.943 END TEST version 00:05:51.943 ************************************ 00:05:51.943 15:23:31 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:51.943 15:23:31 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:51.943 15:23:31 -- spdk/autotest.sh@194 -- # uname -s 00:05:51.943 15:23:31 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:05:51.943 15:23:31 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:51.943 15:23:31 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:51.943 15:23:31 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:51.943 15:23:31 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:05:51.943 15:23:31 -- spdk/autotest.sh@256 -- # timing_exit lib 00:05:51.943 15:23:31 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:51.943 15:23:31 -- common/autotest_common.sh@10 -- # set +x 00:05:51.943 15:23:31 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:05:51.943 15:23:31 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:05:51.943 15:23:31 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:05:51.943 15:23:31 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:05:51.943 15:23:31 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:05:51.943 15:23:31 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:05:51.943 15:23:31 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:51.943 15:23:31 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:51.943 15:23:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:51.943 15:23:31 -- common/autotest_common.sh@10 -- # set +x 00:05:52.203 ************************************ 00:05:52.203 START TEST nvmf_tcp 00:05:52.203 ************************************ 00:05:52.203 15:23:31 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:52.203 * Looking for test storage... 
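The `get_header_version` calls traced in the version test above boil down to a grep/cut/tr pipeline over `include/spdk/version.h`. A minimal sketch against a stand-in header written to a temp file (the real SPDK header is not reproduced here; the tab-delimited `#define` layout is assumed from the `cut -f2` usage in the log):

```shell
# Sketch of version.sh's get_header_version: grep the #define line,
# take the second tab-delimited field, strip surrounding quotes.
hdr=$(mktemp)
printf '#define SPDK_VERSION_MAJOR\t25\n#define SPDK_VERSION_MINOR\t1\n' > "$hdr"
major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
echo "${major}.${minor}"
rm -f "$hdr"
```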
00:05:52.203 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:52.203 15:23:31 nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:52.203 15:23:31 nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:05:52.203 15:23:31 nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:52.203 15:23:31 nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:52.204 15:23:31 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:52.204 15:23:31 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:52.204 15:23:31 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:52.204 15:23:31 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:52.204 15:23:31 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:52.204 15:23:31 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:52.204 15:23:31 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:52.204 15:23:31 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:52.204 15:23:31 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:52.204 15:23:31 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:52.204 15:23:31 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:52.204 15:23:31 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:52.204 15:23:31 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:52.204 15:23:31 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:52.204 15:23:31 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:52.204 15:23:31 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:52.204 15:23:31 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:52.204 15:23:31 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:52.204 15:23:31 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:52.204 15:23:31 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:52.204 15:23:31 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:52.204 15:23:31 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:52.204 15:23:31 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:52.204 15:23:31 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:52.204 15:23:31 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:52.204 15:23:31 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:52.204 15:23:31 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:52.204 15:23:31 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:52.204 15:23:31 nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:52.204 15:23:31 nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:52.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.204 --rc genhtml_branch_coverage=1 00:05:52.204 --rc genhtml_function_coverage=1 00:05:52.204 --rc genhtml_legend=1 00:05:52.204 --rc geninfo_all_blocks=1 00:05:52.204 --rc geninfo_unexecuted_blocks=1 00:05:52.204 00:05:52.204 ' 00:05:52.204 15:23:31 nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:52.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.204 --rc genhtml_branch_coverage=1 00:05:52.204 --rc genhtml_function_coverage=1 00:05:52.204 --rc genhtml_legend=1 00:05:52.204 --rc geninfo_all_blocks=1 00:05:52.204 --rc geninfo_unexecuted_blocks=1 00:05:52.204 00:05:52.204 ' 00:05:52.204 15:23:31 nvmf_tcp -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:05:52.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.204 --rc genhtml_branch_coverage=1 00:05:52.204 --rc genhtml_function_coverage=1 00:05:52.204 --rc genhtml_legend=1 00:05:52.204 --rc geninfo_all_blocks=1 00:05:52.204 --rc geninfo_unexecuted_blocks=1 00:05:52.204 00:05:52.204 ' 00:05:52.204 15:23:31 nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:52.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.204 --rc genhtml_branch_coverage=1 00:05:52.204 --rc genhtml_function_coverage=1 00:05:52.204 --rc genhtml_legend=1 00:05:52.204 --rc geninfo_all_blocks=1 00:05:52.204 --rc geninfo_unexecuted_blocks=1 00:05:52.204 00:05:52.204 ' 00:05:52.204 15:23:31 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:52.204 15:23:31 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:52.204 15:23:31 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:52.204 15:23:31 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:52.204 15:23:31 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:52.204 15:23:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:52.204 ************************************ 00:05:52.204 START TEST nvmf_target_core 00:05:52.204 ************************************ 00:05:52.204 15:23:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:52.465 * Looking for test storage... 
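The `lt 1.15 2` / `cmp_versions` trace that precedes each test section splits both version strings on `.`, `-` and `:` and compares them component by component. A simplified re-sketch of that logic (a stand-in for illustration, not SPDK's actual `scripts/common.sh`):

```shell
# lt VER1 VER2: succeed if VER1 is strictly less than VER2,
# comparing numeric components split on '.', '-' and ':'.
# Missing components are treated as 0 (so 1.15 vs 2 compares 1 < 2).
lt() {
  local -a ver1 ver2
  IFS='.-:' read -ra ver1 <<< "$1"
  IFS='.-:' read -ra ver2 <<< "$2"
  local v max=${#ver1[@]}
  (( ${#ver2[@]} > max )) && max=${#ver2[@]}
  for (( v = 0; v < max; v++ )); do
    local a=${ver1[v]:-0} b=${ver2[v]:-0}
    (( a < b )) && return 0
    (( a > b )) && return 1
  done
  return 1  # equal versions are not "less than"
}
lt 1.15 2 && echo "1.15 < 2"
```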
00:05:52.465 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:52.465 15:23:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:52.465 15:23:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lcov --version 00:05:52.465 15:23:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:52.465 15:23:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:52.465 15:23:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:52.465 15:23:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:52.465 15:23:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:52.465 15:23:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:52.465 15:23:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:52.465 15:23:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:52.465 15:23:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:52.465 15:23:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:52.465 15:23:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:52.465 15:23:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:52.465 15:23:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:52.465 15:23:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:52.465 15:23:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:52.465 15:23:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:52.465 15:23:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:52.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.466 --rc genhtml_branch_coverage=1 00:05:52.466 --rc genhtml_function_coverage=1 00:05:52.466 --rc genhtml_legend=1 00:05:52.466 --rc geninfo_all_blocks=1 00:05:52.466 --rc geninfo_unexecuted_blocks=1 00:05:52.466 00:05:52.466 ' 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:52.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.466 --rc genhtml_branch_coverage=1 
00:05:52.466 --rc genhtml_function_coverage=1 00:05:52.466 --rc genhtml_legend=1 00:05:52.466 --rc geninfo_all_blocks=1 00:05:52.466 --rc geninfo_unexecuted_blocks=1 00:05:52.466 00:05:52.466 ' 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:52.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.466 --rc genhtml_branch_coverage=1 00:05:52.466 --rc genhtml_function_coverage=1 00:05:52.466 --rc genhtml_legend=1 00:05:52.466 --rc geninfo_all_blocks=1 00:05:52.466 --rc geninfo_unexecuted_blocks=1 00:05:52.466 00:05:52.466 ' 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:52.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.466 --rc genhtml_branch_coverage=1 00:05:52.466 --rc genhtml_function_coverage=1 00:05:52.466 --rc genhtml_legend=1 00:05:52.466 --rc geninfo_all_blocks=1 00:05:52.466 --rc geninfo_unexecuted_blocks=1 00:05:52.466 00:05:52.466 ' 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:52.466 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
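The `[: : integer expression expected` message logged above comes from common.sh line 33 evaluating `'[' '' -eq 1 ']'`: an empty string is not a valid operand for the numeric `-eq` test. A common guard is to default the value before the test (illustrative only; the actual variable name in common.sh is not shown in this log):

```shell
# Defaulting an unset/empty value to 0 avoids the
# "integer expression expected" error from [ '' -eq 1 ].
maybe_empty=''
if [ "${maybe_empty:-0}" -eq 1 ]; then
  echo "enabled"
else
  echo "disabled"
fi
```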
00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:52.466 15:23:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:52.728 ************************************ 00:05:52.728 START TEST nvmf_abort 00:05:52.728 ************************************ 00:05:52.728 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:52.728 * Looking for test storage... 
00:05:52.728 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:52.728 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:52.728 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:05:52.728 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:52.728 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:52.728 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:52.728 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:52.728 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:52.728 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:52.728 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:52.728 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:52.728 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:52.728 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:52.728 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:52.728 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:52.728 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:52.728 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:52.728 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:52.728 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:52.728 
15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:52.728 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:52.728 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:52.728 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:52.728 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:52.728 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:52.728 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:52.728 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:52.728 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:52.728 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:52.728 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:52.728 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:52.728 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:52.728 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:52.728 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:52.728 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:52.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.728 --rc genhtml_branch_coverage=1 00:05:52.728 --rc genhtml_function_coverage=1 00:05:52.728 --rc genhtml_legend=1 00:05:52.728 --rc geninfo_all_blocks=1 00:05:52.728 --rc 
geninfo_unexecuted_blocks=1 00:05:52.728 00:05:52.728 ' 00:05:52.728 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:52.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.728 --rc genhtml_branch_coverage=1 00:05:52.728 --rc genhtml_function_coverage=1 00:05:52.728 --rc genhtml_legend=1 00:05:52.728 --rc geninfo_all_blocks=1 00:05:52.728 --rc geninfo_unexecuted_blocks=1 00:05:52.728 00:05:52.728 ' 00:05:52.728 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:52.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.728 --rc genhtml_branch_coverage=1 00:05:52.728 --rc genhtml_function_coverage=1 00:05:52.728 --rc genhtml_legend=1 00:05:52.728 --rc geninfo_all_blocks=1 00:05:52.728 --rc geninfo_unexecuted_blocks=1 00:05:52.728 00:05:52.728 ' 00:05:52.728 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:52.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.728 --rc genhtml_branch_coverage=1 00:05:52.728 --rc genhtml_function_coverage=1 00:05:52.728 --rc genhtml_legend=1 00:05:52.728 --rc geninfo_all_blocks=1 00:05:52.728 --rc geninfo_unexecuted_blocks=1 00:05:52.728 00:05:52.728 ' 00:05:52.728 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:52.729 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:52.729 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:52.729 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:52.729 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:52.729 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:05:52.729 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:52.729 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:52.729 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:52.729 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:52.729 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:52.729 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:52.729 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:52.729 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:52.729 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:52.729 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:52.729 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:52.729 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:52.729 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:52.729 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:52.729 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:52.729 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:52.729 15:23:32 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:52.729 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.729 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.729 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.729 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:52.729 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.729 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:52.729 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:52.729 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:52.729 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:52.729 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:52.729 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:52.729 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:52.729 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:52.729 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:52.729 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:52.729 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:52.729 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:52.729 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:52.729 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:05:52.729 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:05:52.729 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:52.729 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # prepare_net_devs 00:05:52.729 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@434 -- # local -g is_hw=no 00:05:52.729 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # remove_spdk_ns 00:05:52.729 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:52.729 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:52.729 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:52.729 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:05:52.729 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # 
gather_supported_nvmf_pci_devs 00:05:52.729 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:52.729 15:23:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:06:00.867 15:23:39 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:00.867 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:00.867 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:06:00.867 15:23:39 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ up == up ]] 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:00.867 Found net devices under 0000:31:00.0: cvl_0_0 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ up == up ]] 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:06:00.867 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:00.868 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:00.868 Found net devices under 0000:31:00.1: cvl_0_1 00:06:00.868 
15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:06:00.868 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:06:00.868 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # is_hw=yes 00:06:00.868 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:06:00.868 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:06:00.868 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:06:00.868 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:00.868 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:00.868 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:00.868 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:00.868 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:00.868 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:00.868 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:00.868 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:00.868 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:00.868 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:00.868 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:00.868 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr 
flush cvl_0_0 00:06:00.868 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:00.868 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:00.868 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:00.868 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:00.868 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:00.868 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:00.868 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:00.868 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:00.868 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:00.868 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:00.868 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:00.868 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:00.868 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.521 ms 00:06:00.868 00:06:00.868 --- 10.0.0.2 ping statistics --- 00:06:00.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:00.868 rtt min/avg/max/mdev = 0.521/0.521/0.521/0.000 ms 00:06:00.868 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:00.868 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:00.868 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:06:00.868 00:06:00.868 --- 10.0.0.1 ping statistics --- 00:06:00.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:00.868 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:06:00.868 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:00.868 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # return 0 00:06:00.868 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:06:00.868 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:00.868 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:06:00.868 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:06:00.868 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:00.868 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:06:00.868 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:06:00.868 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:00.868 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:06:00.868 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@724 -- # xtrace_disable 00:06:00.868 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:00.868 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # nvmfpid=2899455 00:06:00.868 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # waitforlisten 2899455 00:06:00.868 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:00.868 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 2899455 ']' 00:06:00.868 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.868 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:00.868 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.868 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:00.868 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:00.868 [2024-10-01 15:23:39.946634] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:06:00.868 [2024-10-01 15:23:39.946703] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:00.868 [2024-10-01 15:23:39.988914] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:06:00.868 [2024-10-01 15:23:40.039455] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:00.868 [2024-10-01 15:23:40.098803] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:00.868 [2024-10-01 15:23:40.098861] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:00.868 [2024-10-01 15:23:40.098870] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:00.868 [2024-10-01 15:23:40.098877] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:00.868 [2024-10-01 15:23:40.098883] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:00.868 [2024-10-01 15:23:40.099054] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.868 [2024-10-01 15:23:40.099396] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:00.868 [2024-10-01 15:23:40.099397] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.439 15:23:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:01.439 15:23:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:06:01.439 15:23:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:06:01.439 15:23:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:01.439 15:23:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:01.439 15:23:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:01.439 15:23:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o 
-u 8192 -a 256 00:06:01.439 15:23:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.439 15:23:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:01.439 [2024-10-01 15:23:40.828256] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:01.439 15:23:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.439 15:23:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:01.439 15:23:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.439 15:23:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:01.439 Malloc0 00:06:01.439 15:23:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.439 15:23:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:01.439 15:23:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.439 15:23:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:01.439 Delay0 00:06:01.439 15:23:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.439 15:23:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:01.439 15:23:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.439 15:23:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:01.699 15:23:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.699 15:23:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:01.700 15:23:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.700 15:23:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:01.700 15:23:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.700 15:23:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:01.700 15:23:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.700 15:23:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:01.700 [2024-10-01 15:23:40.916791] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:01.700 15:23:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.700 15:23:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:01.700 15:23:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.700 15:23:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:01.700 15:23:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.700 15:23:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:01.700 [2024-10-01 15:23:41.055133] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:03.679 Initializing NVMe Controllers 
00:06:03.679 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:03.679 controller IO queue size 128 less than required 00:06:03.679 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:03.679 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:03.679 Initialization complete. Launching workers. 00:06:03.679 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28654 00:06:03.680 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28715, failed to submit 62 00:06:03.680 success 28658, unsuccessful 57, failed 0 00:06:03.680 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:03.680 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.680 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:03.680 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.680 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:03.680 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:03.680 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # nvmfcleanup 00:06:03.680 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:03.680 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:03.680 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:03.680 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:03.680 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:06:03.680 rmmod nvme_tcp 00:06:03.680 rmmod nvme_fabrics 00:06:03.940 rmmod nvme_keyring 00:06:03.940 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:03.940 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:03.940 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:03.940 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@513 -- # '[' -n 2899455 ']' 00:06:03.940 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # killprocess 2899455 00:06:03.940 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 2899455 ']' 00:06:03.940 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 2899455 00:06:03.940 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:06:03.940 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:03.940 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2899455 00:06:03.940 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:03.940 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:03.940 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2899455' 00:06:03.940 killing process with pid 2899455 00:06:03.940 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 2899455 00:06:03.940 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 2899455 00:06:03.940 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:06:03.940 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:06:03.940 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:06:03.940 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:03.941 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@787 -- # iptables-save 00:06:03.941 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:06:03.941 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@787 -- # iptables-restore 00:06:03.941 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:03.941 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:03.941 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:03.941 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:03.941 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:06.488 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:06.488 00:06:06.488 real 0m13.531s 00:06:06.488 user 0m13.807s 00:06:06.488 sys 0m6.709s 00:06:06.488 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.488 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:06.488 ************************************ 00:06:06.488 END TEST nvmf_abort 00:06:06.488 ************************************ 00:06:06.488 15:23:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:06.488 15:23:45 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:06.488 15:23:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.488 15:23:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:06.488 ************************************ 00:06:06.488 START TEST nvmf_ns_hotplug_stress 00:06:06.488 ************************************ 00:06:06.488 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:06.488 * Looking for test storage... 00:06:06.488 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:06.488 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:06.488 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:06:06.488 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:06.488 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:06.488 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.488 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.488 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.488 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.488 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.488 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.488 15:23:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.488 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.488 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.488 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.488 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.488 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:06.488 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:06.488 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.488 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:06.488 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:06.488 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:06.488 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.488 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:06.488 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.488 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:06.488 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:06.488 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.488 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:06.488 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.488 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.488 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.488 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:06.488 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.488 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:06.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.488 --rc genhtml_branch_coverage=1 00:06:06.488 --rc genhtml_function_coverage=1 00:06:06.488 --rc 
genhtml_legend=1 00:06:06.488 --rc geninfo_all_blocks=1 00:06:06.488 --rc geninfo_unexecuted_blocks=1 00:06:06.488 00:06:06.488 ' 00:06:06.488 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:06.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.488 --rc genhtml_branch_coverage=1 00:06:06.488 --rc genhtml_function_coverage=1 00:06:06.488 --rc genhtml_legend=1 00:06:06.488 --rc geninfo_all_blocks=1 00:06:06.488 --rc geninfo_unexecuted_blocks=1 00:06:06.488 00:06:06.488 ' 00:06:06.488 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:06.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.488 --rc genhtml_branch_coverage=1 00:06:06.488 --rc genhtml_function_coverage=1 00:06:06.489 --rc genhtml_legend=1 00:06:06.489 --rc geninfo_all_blocks=1 00:06:06.489 --rc geninfo_unexecuted_blocks=1 00:06:06.489 00:06:06.489 ' 00:06:06.489 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:06.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.489 --rc genhtml_branch_coverage=1 00:06:06.489 --rc genhtml_function_coverage=1 00:06:06.489 --rc genhtml_legend=1 00:06:06.489 --rc geninfo_all_blocks=1 00:06:06.489 --rc geninfo_unexecuted_blocks=1 00:06:06.489 00:06:06.489 ' 00:06:06.489 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:06.489 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:06.489 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:06.489 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:06.489 15:23:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:06.489 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:06.489 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:06.489 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:06.489 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:06.489 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:06.489 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:06.489 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:06.489 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:06.489 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:06.489 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:06.489 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:06.489 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:06.489 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:06.489 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:06.489 15:23:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:06.489 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:06.489 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:06.489 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:06.489 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.489 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.489 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.489 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:06.489 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.489 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:06.489 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:06.489 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:06.489 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:06.489 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:06.489 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:06.489 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:06.489 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:06.489 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:06.489 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:06.489 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:06.489 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:06.489 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:06.489 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:06:06.489 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:06.489 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # prepare_net_devs 00:06:06.489 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@434 -- # local -g is_hw=no 00:06:06.489 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # remove_spdk_ns 00:06:06.489 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:06.489 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:06.489 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:06.489 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:06:06.489 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:06:06.489 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:06.489 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@321 -- # local -ga x722 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:06:14.634 15:23:53 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:14.634 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:14.634 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:14.634 Found net devices under 0000:31:00.0: cvl_0_0 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:06:14.634 
15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:14.634 Found net devices under 0000:31:00.1: cvl_0_1 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # is_hw=yes 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 
-- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:06:14.634 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:14.635 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:14.635 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:14.635 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:14.635 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:14.635 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:14.635 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:14.635 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.637 ms 00:06:14.635 00:06:14.635 --- 10.0.0.2 ping statistics --- 00:06:14.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:14.635 rtt min/avg/max/mdev = 0.637/0.637/0.637/0.000 ms 00:06:14.635 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:14.635 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:14.635 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:06:14.635 00:06:14.635 --- 10.0.0.1 ping statistics --- 00:06:14.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:14.635 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:06:14.635 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:14.635 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # return 0 00:06:14.635 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:06:14.635 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:14.635 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:06:14.635 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:06:14.635 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:14.635 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:06:14.635 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:06:14.635 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:14.635 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:06:14.635 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:14.635 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:14.635 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # nvmfpid=2904451 00:06:14.635 15:23:53 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # waitforlisten 2904451 00:06:14.635 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:14.635 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 2904451 ']' 00:06:14.635 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.635 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:14.635 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.635 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:14.635 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:14.635 [2024-10-01 15:23:53.585004] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:06:14.635 [2024-10-01 15:23:53.585070] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:14.635 [2024-10-01 15:23:53.627622] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
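The `waitforlisten` step traced above polls until the target process is up and listening on its UNIX-domain RPC socket (`/var/tmp/spdk.sock`, with `max_retries=100` in the log). A minimal sketch of that wait loop — the function name is hypothetical, and this is a re-implementation of the idea rather than the harness's actual helper:

```shell
# Hypothetical sketch of the waitforlisten idea: poll until a path
# (the target's UNIX socket in the real run) appears, giving up
# after max_retries attempts. Not the harness's actual code.
wait_for_rpc_sock() {
  local sock="$1" max_retries="${2:-100}" i=0
  while [ ! -S "$sock" ] && [ ! -e "$sock" ]; do
    i=$((i + 1))
    # give up once the retry budget is exhausted
    [ "$i" -ge "$max_retries" ] && return 1
    sleep 0.1
  done
  return 0
}
```

In a real run this would be called as `wait_for_rpc_sock /var/tmp/spdk.sock 100` before issuing any rpc.py command.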
00:06:14.635 [2024-10-01 15:23:53.676634] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:14.635 [2024-10-01 15:23:53.723093] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:14.635 [2024-10-01 15:23:53.723147] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:14.635 [2024-10-01 15:23:53.723155] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:14.635 [2024-10-01 15:23:53.723162] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:14.635 [2024-10-01 15:23:53.723174] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:14.635 [2024-10-01 15:23:53.723337] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:14.635 [2024-10-01 15:23:53.723478] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.635 [2024-10-01 15:23:53.723480] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:15.207 15:23:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:15.207 15:23:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:06:15.207 15:23:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:06:15.207 15:23:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:15.207 15:23:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:15.207 15:23:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:15.207 15:23:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # 
null_size=1000 00:06:15.207 15:23:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:15.207 [2024-10-01 15:23:54.615827] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:15.207 15:23:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:15.468 15:23:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:15.729 [2024-10-01 15:23:55.015945] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:15.729 15:23:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:15.991 15:23:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:15.991 Malloc0 00:06:16.251 15:23:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:16.251 Delay0 00:06:16.251 15:23:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:16.511 15:23:55 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:16.771 NULL1 00:06:16.771 15:23:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:16.771 15:23:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:16.771 15:23:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2904938 00:06:16.771 15:23:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904938 00:06:16.771 15:23:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:18.154 Read completed with error (sct=0, sc=11) 00:06:18.154 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:18.154 15:23:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:18.154 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:18.154 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:18.154 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:18.154 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:18.154 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
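The target setup traced above is a fixed RPC sequence: create the TCP transport, create the subsystem, add its listener plus the discovery listener, build `Malloc0`, wrap it in `Delay0`, attach it as a namespace, then create and attach `NULL1` before launching perf. A dry-run sketch of that same sequence — every command and argument is taken from the log, but `RPC` here just echoes instead of invoking `scripts/rpc.py` (whose absolute path depends on the workspace):

```shell
# Dry-run: substitute RPC='python3 /path/to/spdk/scripts/rpc.py'
# to issue these against a live target instead of echoing them.
RPC="echo rpc.py"
setup_target() {
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_malloc_create 32 512 -b Malloc0
  $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  $RPC bdev_null_create NULL1 1000 512
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
}
setup_target
```

After this sequence the log starts `spdk_nvme_perf` against `trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420` to generate I/O load for the hotplug loop.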
00:06:18.154 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:18.154 15:23:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:18.154 15:23:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:18.415 true 00:06:18.415 15:23:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904938 00:06:18.415 15:23:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.358 15:23:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.358 15:23:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:19.358 15:23:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:19.619 true 00:06:19.619 15:23:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904938 00:06:19.619 15:23:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.879 15:23:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.879 15:23:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:19.879 15:23:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:20.140 true 00:06:20.140 15:23:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904938 00:06:20.140 15:23:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.401 15:23:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:20.401 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:20.401 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:20.401 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:20.401 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:20.401 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:20.401 15:23:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:20.401 15:23:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:20.662 true 00:06:20.662 15:23:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904938 00:06:20.662 15:23:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
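Each cycle above checks that perf is still alive (`kill -0 $PERF_PID`), removes namespace 1, re-adds `Delay0`, bumps `null_size` by one (1001, 1002, 1003, …), and resizes `NULL1` to the new size. A dry-run sketch of one iteration of that loop — again echoing rather than invoking rpc.py, with the perf liveness check shown only as a comment since there is no real PID here:

```shell
# Dry-run of the hotplug stress cycle seen in the log.
# Real harness gates iterations on the perf process:
#   while kill -0 "$PERF_PID" 2>/dev/null; do hotplug_iteration; done
RPC="echo rpc.py"
NQN=nqn.2016-06.io.spdk:cnode1
null_size=1000
hotplug_iteration() {
  $RPC nvmf_subsystem_remove_ns $NQN 1     # detach namespace 1
  $RPC nvmf_subsystem_add_ns $NQN Delay0   # re-attach the delay bdev
  null_size=$((null_size + 1))             # 1001, 1002, ...
  $RPC bdev_null_resize NULL1 $null_size   # grow NULL1 under load
}
hotplug_iteration
hotplug_iteration
echo "null_size=$null_size"
```

The `true` lines interleaved in the log are the harness confirming each `bdev_null_resize` RPC returned successfully while perf keeps reading.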
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.604 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:21.604 15:24:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.604 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:21.604 15:24:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:21.604 15:24:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:21.865 true 00:06:21.865 15:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904938 00:06:21.865 15:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.865 15:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.125 15:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:22.125 15:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:22.386 true 00:06:22.386 15:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904938 00:06:22.386 15:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.386 15:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.647 15:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:22.647 15:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:22.909 true 00:06:22.909 15:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904938 00:06:22.909 15:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.909 15:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:23.169 15:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:23.169 15:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:23.429 true 00:06:23.429 15:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904938 00:06:23.429 15:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.690 
15:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:23.690 15:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:23.690 15:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:23.951 true 00:06:23.951 15:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904938 00:06:23.951 15:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.210 15:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.210 15:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:24.211 15:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:24.615 true 00:06:24.615 15:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904938 00:06:24.615 15:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.615 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.615 15:24:03 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.615 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.615 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.929 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.929 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.929 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.929 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.929 15:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:24.929 15:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:24.929 true 00:06:24.929 15:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904938 00:06:24.929 15:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.989 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:25.989 15:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.989 15:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:25.989 15:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:26.276 true 00:06:26.276 15:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904938 00:06:26.276 15:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.276 15:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.574 15:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:26.574 15:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:26.876 true 00:06:26.876 15:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904938 00:06:26.876 15:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.876 15:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:27.160 15:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:27.160 15:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:27.160 true 00:06:27.160 15:24:06 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904938 00:06:27.160 15:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.421 15:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:27.682 15:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:27.682 15:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:27.682 true 00:06:27.943 15:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904938 00:06:27.943 15:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.943 15:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.203 15:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:28.203 15:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:28.462 true 00:06:28.462 15:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904938 00:06:28.462 15:24:07 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.462 15:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.721 15:24:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:28.721 15:24:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:28.981 true 00:06:28.981 15:24:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904938 00:06:28.981 15:24:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.367 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.367 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.367 15:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:30.367 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.367 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.367 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.367 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.367 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.367 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:06:30.367 15:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:30.367 15:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:30.628 true 00:06:30.628 15:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904938 00:06:30.628 15:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.570 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:31.570 15:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.570 15:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:31.570 15:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:31.831 true 00:06:31.831 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904938 00:06:31.831 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.831 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:06:32.092 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:32.092 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:32.353 true 00:06:32.353 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904938 00:06:32.353 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.354 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.354 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.354 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.640 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.640 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.640 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.640 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.640 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.640 [2024-10-01 15:24:11.942632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.942693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.942730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.942757] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.942785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.942819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.942848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.942883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.942913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.942954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.942984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.943011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.943038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.943094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.943123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.943162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.943189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.943219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:32.640 [2024-10-01 15:24:11.943245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.943275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.943307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.943336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.943364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.943393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.943424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.943459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.943495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.943531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.943561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.943589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.943618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.943647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.943677] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.943705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.943735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.943763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.943792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.943825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.943853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.943877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.943913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.943943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.943973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.944002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.944031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.944059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.944090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.944120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.944148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.944176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.944205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.944610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.944643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.944675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.944704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.944733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.944763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.944803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.944831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.944866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.944897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 
15:24:11.944937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.944966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.944998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.640 [2024-10-01 15:24:11.945027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.945065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.945096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.945127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.945155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.945183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.945212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.945240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.945269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.945296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.945323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.945353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.945384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.945411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.945440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.945467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.945498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.945527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.945555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.945583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.945610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.945638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.945666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.945695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.945726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.945755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 
[2024-10-01 15:24:11.945783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.945815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.945845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.945875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.945909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.945937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.945963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.945987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.946018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.946047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.946082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.946111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.946138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.946170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.946207] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.946243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.946274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.946302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.946328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.946355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.946378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.946408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.946434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.946463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.946490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.946608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.946636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.946662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.946700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:32.641 [2024-10-01 15:24:11.946734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.946762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.946792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.946817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.946845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.946874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.946911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.946942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.947242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.947275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.947301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.947333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.947361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.947389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.947418] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.947451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.947479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.947512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.947543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.947574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.947604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.947633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.947661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.641 [2024-10-01 15:24:11.947692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.947720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.947777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.947806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.947841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.947870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.947909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.947936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.947968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.948000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.948030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.948057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.948084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.948111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.948139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.948168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.948195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.948225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.948256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.948287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 
15:24:11.948321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.948349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.948372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.948400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.948431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.948460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.948486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.948513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.948542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.948570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.948598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.948625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.948654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.948680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.948710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.948739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.949158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.949189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.949221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.949254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.949288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.949320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.949351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.949385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.949412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.949439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.949467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.949495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.949524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 
[2024-10-01 15:24:11.949552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.949580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.949609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.949637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.949663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.949693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.949721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.949756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.949784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.949812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.949839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.949868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.949901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.949928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.949957] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.949990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.950019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.950049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.950079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.950107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.950139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.950167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.950195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.950222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.950251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.950280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.950308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.950335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.950366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:32.642 [2024-10-01 15:24:11.950394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.642 [2024-10-01 15:24:11.950422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.643 [2024-10-01 15:24:11.950452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.643 [2024-10-01 15:24:11.950481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.643 [2024-10-01 15:24:11.950510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.643 [2024-10-01 15:24:11.950538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.643 [2024-10-01 15:24:11.950573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.643 [2024-10-01 15:24:11.950599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.643 [2024-10-01 15:24:11.950629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.643 [2024-10-01 15:24:11.950655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.643 [2024-10-01 15:24:11.950685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.643 [2024-10-01 15:24:11.950714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.643 [2024-10-01 15:24:11.950745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.643 [2024-10-01 15:24:11.950771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.643 [2024-10-01 15:24:11.950800] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.643 [2024-10-01 15:24:11.950830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.643 [2024-10-01 15:24:11.950890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.643 [2024-10-01 15:24:11.950924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.643 [2024-10-01 15:24:11.950958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.643 [2024-10-01 15:24:11.950986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.643 [2024-10-01 15:24:11.951017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.643 [2024-10-01 15:24:11.951047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.643 [2024-10-01 15:24:11.951187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.643 [2024-10-01 15:24:11.951218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.643 [2024-10-01 15:24:11.951247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.643 [2024-10-01 15:24:11.951276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.643 [2024-10-01 15:24:11.951308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.643 [2024-10-01 15:24:11.951336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.643 [2024-10-01 15:24:11.951367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:32.643 [2024-10-01 15:24:11.951402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.643 [2024-10-01 15:24:11.951438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.643 Message suppressed 999 times: [2024-10-01 15:24:11.951470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.643 Read completed with error (sct=0, sc=15) 00:06:32.643 [2024-10-01 15:24:11.951507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.643 [2024-10-01 15:24:11.951541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.643 [2024-10-01 15:24:11.952016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.643 [2024-10-01 15:24:11.952046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.643 [2024-10-01 15:24:11.952072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.643 [2024-10-01 15:24:11.952101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.643 [2024-10-01 15:24:11.952128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.643 [2024-10-01 15:24:11.952161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.643 [2024-10-01 15:24:11.952190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.643 [2024-10-01 15:24:11.952219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.643 [2024-10-01 15:24:11.952253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
00:06:32.646 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:06:32.646 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
> SGL length 1 00:06:32.647 [2024-10-01 15:24:11.962110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.647 [2024-10-01 15:24:11.962137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.647 [2024-10-01 15:24:11.962166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.647 [2024-10-01 15:24:11.962193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.647 [2024-10-01 15:24:11.962218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.647 [2024-10-01 15:24:11.962252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.647 [2024-10-01 15:24:11.962285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.647 [2024-10-01 15:24:11.962311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.647 [2024-10-01 15:24:11.962341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.647 [2024-10-01 15:24:11.962373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.647 [2024-10-01 15:24:11.962403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.647 [2024-10-01 15:24:11.962437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.647 [2024-10-01 15:24:11.962468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.647 [2024-10-01 15:24:11.962498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.647 [2024-10-01 15:24:11.962525] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.647 [2024-10-01 15:24:11.962553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.647 [2024-10-01 15:24:11.962584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.647 [2024-10-01 15:24:11.962611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.647 [2024-10-01 15:24:11.962638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.647 [2024-10-01 15:24:11.962667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.647 [2024-10-01 15:24:11.962694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.647 [2024-10-01 15:24:11.962721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.647 [2024-10-01 15:24:11.962748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.647 [2024-10-01 15:24:11.962775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.647 [2024-10-01 15:24:11.962805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.647 [2024-10-01 15:24:11.962834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.647 [2024-10-01 15:24:11.962866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.647 [2024-10-01 15:24:11.962901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.647 [2024-10-01 15:24:11.962933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:32.647 [2024-10-01 15:24:11.962961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.647 [2024-10-01 15:24:11.962996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.647 [2024-10-01 15:24:11.963026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.647 [2024-10-01 15:24:11.963054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.647 [2024-10-01 15:24:11.963082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.647 [2024-10-01 15:24:11.963111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.647 [2024-10-01 15:24:11.963141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.647 [2024-10-01 15:24:11.963168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.647 [2024-10-01 15:24:11.963198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.647 [2024-10-01 15:24:11.963229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.647 [2024-10-01 15:24:11.963262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.647 [2024-10-01 15:24:11.963390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.647 [2024-10-01 15:24:11.963419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.647 [2024-10-01 15:24:11.963445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.647 [2024-10-01 
15:24:11.963471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.647 [2024-10-01 15:24:11.963499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.647 [2024-10-01 15:24:11.963528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.647 [2024-10-01 15:24:11.963556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.647 [2024-10-01 15:24:11.963584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.647 [2024-10-01 15:24:11.963616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.647 [2024-10-01 15:24:11.963644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.963674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.963701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.963729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.963759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.963792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.963818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.963854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.963883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.963926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.963955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.963993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.964023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.964052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.964077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.964100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.964130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.964158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.964183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.964211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.964238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.964271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.964298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 
[2024-10-01 15:24:11.964324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.964353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.964379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.964410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.964449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.964489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.964512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.964538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.964564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.964594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.964622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.964647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.964673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.964699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.964724] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.964753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.964780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.964807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.964840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.964864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.964902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.964932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.964964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.964994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.965023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.965055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.965085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.965111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.965141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:32.648 [2024-10-01 15:24:11.965169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.965197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.965544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.965588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.965619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.965654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.965681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.965715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.965743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.965776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.965806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.965836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.965865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.965901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.965930] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.965960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.965988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.966016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.966043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.966075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.966103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.966160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.966187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.966221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.966249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.966278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.966306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.966338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.966366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.966396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.648 [2024-10-01 15:24:11.966423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.649 [2024-10-01 15:24:11.966453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.649 [2024-10-01 15:24:11.966485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.649 [2024-10-01 15:24:11.966512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.649 [2024-10-01 15:24:11.966547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.649 [2024-10-01 15:24:11.966584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.649 [2024-10-01 15:24:11.966616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.649 [2024-10-01 15:24:11.966645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.649 [2024-10-01 15:24:11.966668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.649 [2024-10-01 15:24:11.966699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.649 [2024-10-01 15:24:11.966728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.649 [2024-10-01 15:24:11.966760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.649 [2024-10-01 15:24:11.966787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.649 [2024-10-01 
15:24:11.966812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.649 [2024-10-01 15:24:11.966840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.649 [2024-10-01 15:24:11.966865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.649 [2024-10-01 15:24:11.966897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.649 [2024-10-01 15:24:11.966924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.649 [2024-10-01 15:24:11.966951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.649 [2024-10-01 15:24:11.966979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.649 [2024-10-01 15:24:11.967012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.649 [2024-10-01 15:24:11.967047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.649 [2024-10-01 15:24:11.967075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.649 [2024-10-01 15:24:11.967104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.649 [2024-10-01 15:24:11.967130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.649 [2024-10-01 15:24:11.967155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.649 [2024-10-01 15:24:11.967184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.649 [2024-10-01 15:24:11.967211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.649 [2024-10-01 15:24:11.967238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.649 [2024-10-01 15:24:11.967266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.649 [2024-10-01 15:24:11.967297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.649 [2024-10-01 15:24:11.967327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.649 [2024-10-01 15:24:11.967356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.649 [2024-10-01 15:24:11.967385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.649 [2024-10-01 15:24:11.967413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.649 [2024-10-01 15:24:11.967440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.649 [2024-10-01 15:24:11.968057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.649 [2024-10-01 15:24:11.968090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.649 [2024-10-01 15:24:11.968125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.649 [2024-10-01 15:24:11.968155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.649 [2024-10-01 15:24:11.968186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.649 [2024-10-01 15:24:11.968215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.649 
[2024-10-01 15:24:11.968248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.649 [2024-10-01 15:24:11.968277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.649 [2024-10-01 15:24:11.968309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.649 [2024-10-01 15:24:11.968339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.649 [2024-10-01 15:24:11.968374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.649 [2024-10-01 15:24:11.968403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.649 [2024-10-01 15:24:11.968434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.649 [2024-10-01 15:24:11.968462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.649 [2024-10-01 15:24:11.968496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.649 [2024-10-01 15:24:11.968524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.649 [2024-10-01 15:24:11.968557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.649 [2024-10-01 15:24:11.968584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.649 [2024-10-01 15:24:11.968609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.649 [2024-10-01 15:24:11.968637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.649 [2024-10-01 15:24:11.968669] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.649 [2024-10-01 15:24:11.968696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.653 [2024-10-01 15:24:11.978970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.653 [2024-10-01 15:24:11.978999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.653 [2024-10-01 15:24:11.979024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.653 [2024-10-01 15:24:11.979051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.653 [2024-10-01 15:24:11.979077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.653 [2024-10-01 15:24:11.979102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.653 [2024-10-01 15:24:11.979129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.653 [2024-10-01 15:24:11.979163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.653 [2024-10-01 15:24:11.979193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.653 [2024-10-01 15:24:11.979226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.653 [2024-10-01 15:24:11.979839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.653 [2024-10-01 15:24:11.979872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.653 [2024-10-01 15:24:11.979904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.653 [2024-10-01 15:24:11.979931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:32.653 [2024-10-01 15:24:11.979959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.653 [2024-10-01 15:24:11.979986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.653 [2024-10-01 15:24:11.980013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.653 [2024-10-01 15:24:11.980041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.653 [2024-10-01 15:24:11.980069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.653 [2024-10-01 15:24:11.980096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.653 [2024-10-01 15:24:11.980125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.653 [2024-10-01 15:24:11.980157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.653 [2024-10-01 15:24:11.980184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.653 [2024-10-01 15:24:11.980210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.653 [2024-10-01 15:24:11.980238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.653 [2024-10-01 15:24:11.980267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.653 [2024-10-01 15:24:11.980296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.653 [2024-10-01 15:24:11.980327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.653 [2024-10-01 15:24:11.980355] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.653 [2024-10-01 15:24:11.980387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.653 [2024-10-01 15:24:11.980415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.653 [2024-10-01 15:24:11.980447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.980477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.980507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.980534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.980563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.980594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.980623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.980653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.980680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.980737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.980765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.980800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.980828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.980855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.980884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.980920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.980947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.980972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.980999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.981024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.981053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.981080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.981106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.981130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.981158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.981196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 
15:24:11.981229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.981259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.981292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.981336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.981375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.981409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.981440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.981473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.981510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.981541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.981572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.981601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.981632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.981662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.981694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.981726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.981753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.981914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.981947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.981979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.982007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.982035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.982065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.982096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.982125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.982154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.982182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.982213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.982241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 
[2024-10-01 15:24:11.982271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.982299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.982328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.982355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.982383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.982412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.982438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.982469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.982495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.982524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.982552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.982584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.982615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.982643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.982672] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.982700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.982727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.982754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.982792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.982822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.982853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.982882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.982915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.982947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.654 [2024-10-01 15:24:11.982979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.983024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.983053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.983116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.983146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:32.655 [2024-10-01 15:24:11.983179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.983206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.983236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.983265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.983298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.983328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.983357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.983388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.983422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.983447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.983476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.983504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.983536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.983570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.983599] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.983628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.983658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.983690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.983721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.983750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.983776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.983806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.984172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.984206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.984230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.984260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.984286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.984317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.984347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.984378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.984406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.984435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.984466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.984495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.984526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.984554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.984581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.984611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.984639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.984671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.984699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.984729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.984756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 
15:24:11.984784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.984811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.984839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.984871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.984906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.984935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.984964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.984993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.985020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.985043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.985075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.985106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.985137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.985171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.985199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.985229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.985256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.985281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.985309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.985339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.985367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.985398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.985431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.985460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.985488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.985515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.985545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.985574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 [2024-10-01 15:24:11.985603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 
[2024-10-01 15:24:11.985640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.655 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:32.656
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.659 [2024-10-01 15:24:11.996951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.659 [2024-10-01 15:24:11.996983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.659 [2024-10-01 15:24:11.997011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.659 [2024-10-01 15:24:11.997042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.659 [2024-10-01 15:24:11.997070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.659 [2024-10-01 15:24:11.997098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.659 [2024-10-01 15:24:11.997128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.659 [2024-10-01 15:24:11.997159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.659 [2024-10-01 15:24:11.997191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.659 [2024-10-01 15:24:11.997222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.659 [2024-10-01 15:24:11.997255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.659 [2024-10-01 15:24:11.997284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.659 [2024-10-01 15:24:11.997316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.659 [2024-10-01 15:24:11.997345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.659 
[2024-10-01 15:24:11.997375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.659 [2024-10-01 15:24:11.997403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.659 [2024-10-01 15:24:11.997433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.659 [2024-10-01 15:24:11.997461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.659 [2024-10-01 15:24:11.997499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.659 [2024-10-01 15:24:11.997527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.659 [2024-10-01 15:24:11.997559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.659 [2024-10-01 15:24:11.997587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.997619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.997647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.997679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.997707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.997736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.997766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.997812] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.997842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.997871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.997906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.997934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.997964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.997994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.998026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.998054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.998090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.998119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.998154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.998183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.998214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.998244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:32.660 [2024-10-01 15:24:11.998274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.998304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.998335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.998365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.998398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.998429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.998461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.998490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.998519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.998549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.998576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.998604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.998636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.998666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.998699] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.998728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.998753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.998780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.998810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.998839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.998973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.999011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.999051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.999088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.999116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.999144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.999169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.999200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.999231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.999264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.999293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.999325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.999354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.999381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.999409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.999435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.999865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.999900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.999929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.999961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:11.999991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:12.000023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:12.000052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 
15:24:12.000082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:12.000110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:12.000139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:12.000166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:12.000190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:12.000217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:12.000247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:12.000278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:12.000306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:12.000334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:12.000363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:12.000392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:12.000419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:12.000447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:12.000477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.660 [2024-10-01 15:24:12.000503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.000530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.000559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.000587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.000616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.000644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.000678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.000706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.000739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.000767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.000798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.000827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.000861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.000889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 
[2024-10-01 15:24:12.000930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.000957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.000990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.001020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.001053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.001082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.001112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.001141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.001172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.001201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.001232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.001262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.001296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.001331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.001373] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.001411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.001446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.001478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.001514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.001547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.001579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.001613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.001639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.001668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.001705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.001741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.001777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.001808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.002044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:32.661 [2024-10-01 15:24:12.002074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.002105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.002136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.002167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.002197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.002225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.002253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.002283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.002314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.002345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.002372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.002399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.002428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.002461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.002490] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.002526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.002557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.002613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.002641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.002675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.002702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.002745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.002773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.002812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.002843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.002887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.002922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.002950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.002980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.003008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.003034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.003063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.003094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.003124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.003154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.003185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.003247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.003278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.003334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.003363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.003394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.003423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.661 [2024-10-01 15:24:12.003459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.662 [2024-10-01 
15:24:12.003489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.662
15:24:12.014697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.665 [2024-10-01 15:24:12.014724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.665 [2024-10-01 15:24:12.014751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.665 [2024-10-01 15:24:12.014777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.665 [2024-10-01 15:24:12.014802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.014830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.014861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.014889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.014920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.014944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.014977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.015006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.015034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.015062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.015088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.015117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.015144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.015175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.015204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.015228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.015259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.015291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.015320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.015350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.015381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.015414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.015444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.015476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.015503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 
[2024-10-01 15:24:12.015531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.015559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.015589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.015620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.015648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.015676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.015727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.015758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.015792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.015819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.015848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.015875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.015913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.015940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.015974] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.016001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.016031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.016055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.016084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.016113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.016148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.016184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.016217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.016454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.016485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.016511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.016541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.016570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.016601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:32.666 [2024-10-01 15:24:12.016630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.016662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.016691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.016722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.016752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.016780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.016810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.016841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.016870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.016899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.016930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.016959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.016987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.017015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.017043] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.017073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.017100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.017134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.017162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.017197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.017224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.017256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.017285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.017313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.017343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.017371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.017397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.017431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.017460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.017491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.017520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.017578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.666 [2024-10-01 15:24:12.017607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.017642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.017671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.017709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.017735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.017768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.017797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.017824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.018237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.018267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.018315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 
15:24:12.018344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.018384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.018413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.018448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.018479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.018512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.018540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.018572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.018601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.018629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.018657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.018685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.018712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.018741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.018770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.018801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.018839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.018870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.018904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.018931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.018956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.018985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.019015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.019047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.019075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.019101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.019131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.019159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.019183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 
[2024-10-01 15:24:12.019210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.019238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.019265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.019291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.019318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.019349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.019377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.019404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.019431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.019459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.019487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.019526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.019552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.019581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.019612] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.019647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.019675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.019704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.019735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.019765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.019795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.019824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.019853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.019882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.019914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.019942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.667 [2024-10-01 15:24:12.019968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.668 [2024-10-01 15:24:12.019998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.668 [2024-10-01 15:24:12.020027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:32.668 [2024-10-01 15:24:12.020057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.668 [2024-10-01 15:24:12.020085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.668 [2024-10-01 15:24:12.020113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.668 [2024-10-01 15:24:12.020240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.668 [2024-10-01 15:24:12.020272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.668 [2024-10-01 15:24:12.020302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.668 [2024-10-01 15:24:12.020332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.668 [2024-10-01 15:24:12.020361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.668 [2024-10-01 15:24:12.020388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.668 [2024-10-01 15:24:12.020418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.668 [2024-10-01 15:24:12.020447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.668 [2024-10-01 15:24:12.020477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.668 [2024-10-01 15:24:12.020505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.668 [2024-10-01 15:24:12.020533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.668 [2024-10-01 15:24:12.020562] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.668 [2024-10-01 15:24:12.020597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.668 [2024-10-01 15:24:12.020627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.668 [2024-10-01 15:24:12.020657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.668 [2024-10-01 15:24:12.020684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.668 [2024-10-01 15:24:12.020712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.668 [2024-10-01 15:24:12.021138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.668 [2024-10-01 15:24:12.021168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.668 [2024-10-01 15:24:12.021199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.668 [2024-10-01 15:24:12.021228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.668 [2024-10-01 15:24:12.021259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.668 [2024-10-01 15:24:12.021288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.668 [2024-10-01 15:24:12.021318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.668 [2024-10-01 15:24:12.021347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.668 [2024-10-01 15:24:12.021376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:32.668 [2024-10-01 15:24:12.021408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.669 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[2024-10-01 15:24:12.032184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.032213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.032244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.032275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.032303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.032331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.032360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.032388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.032415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.032442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.032472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.032503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.032533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.032562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.032591] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.032619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.032649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.032677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.032707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.032736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.032778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.032806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.032836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.032866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.032890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.032926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.032954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.032988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.033026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:32.672 [2024-10-01 15:24:12.033060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.033096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.033130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.033168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.033193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.033222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.033257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.033296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.033333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.033369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.033399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.033426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.033455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.033488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.033518] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.033545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.033579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.033609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.033641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.033669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.033699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.033732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.033759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.033784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.033816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.034004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.034033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.034061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.034091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.034120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.034153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.034180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.034210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.034239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.034270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.034296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.034323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.034354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.672 [2024-10-01 15:24:12.034382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.034411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.034445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.034474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.034504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 
15:24:12.034532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.034561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.034590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.034618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.034650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.034677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.034706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.034735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.034770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.034801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.034839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.034867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.034901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.034932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.034960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.034990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.035502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.035532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.035558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.035585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.035624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.035661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.035700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.035730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.035763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.035790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.035824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.035862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.035888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 
[2024-10-01 15:24:12.035922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.035947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.035976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.036005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.036033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.036061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.036091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.036120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.036153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.036185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.036214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.036244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.036274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.036303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.036331] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.036359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.036387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.036416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.036442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.036471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.036497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.036527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.036558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.036588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.036616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.036654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.036681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.036715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.036743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:32.673 [2024-10-01 15:24:12.036771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.036800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.036826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.036855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.036885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.036917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.036948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.036975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.673 [2024-10-01 15:24:12.037010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.674 [2024-10-01 15:24:12.037036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.674 [2024-10-01 15:24:12.037064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.674 [2024-10-01 15:24:12.037095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.674 [2024-10-01 15:24:12.037149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.674 [2024-10-01 15:24:12.037178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.674 [2024-10-01 15:24:12.037219] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.674 [2024-10-01 15:24:12.037250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.674 [2024-10-01 15:24:12.037280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.674 [2024-10-01 15:24:12.037308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.674 [2024-10-01 15:24:12.037337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.674 [2024-10-01 15:24:12.037367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.674 [2024-10-01 15:24:12.037397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.674 [2024-10-01 15:24:12.037426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.674 [2024-10-01 15:24:12.037574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.674 [2024-10-01 15:24:12.037600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.674 [2024-10-01 15:24:12.037634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.674 [2024-10-01 15:24:12.037661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.674 [2024-10-01 15:24:12.037688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.674 [2024-10-01 15:24:12.037716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.674 [2024-10-01 15:24:12.037743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:32.674 [2024-10-01 15:24:12.037773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.674 [2024-10-01 15:24:12.037799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.674 [2024-10-01 15:24:12.037837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.674 [2024-10-01 15:24:12.037875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.674 [2024-10-01 15:24:12.037909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.674 [2024-10-01 15:24:12.037933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.674 [2024-10-01 15:24:12.037962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.674 [2024-10-01 15:24:12.037994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.674 [2024-10-01 15:24:12.038033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.674 [2024-10-01 15:24:12.038072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.674 [2024-10-01 15:24:12.038108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.674 [2024-10-01 15:24:12.038134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.674 [2024-10-01 15:24:12.038162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.674 [2024-10-01 15:24:12.038192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.674 [2024-10-01 
15:24:12.038221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.674 [2024-10-01 15:24:12.038252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.674 [2024-10-01 15:24:12.038283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.674 [2024-10-01 15:24:12.038314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.674 [2024-10-01 15:24:12.038343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.674 [2024-10-01 15:24:12.038372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.674 [2024-10-01 15:24:12.038401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.674 [2024-10-01 15:24:12.038428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.674 [2024-10-01 15:24:12.038458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.674 [2024-10-01 15:24:12.038487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.674 [2024-10-01 15:24:12.038514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.674 [2024-10-01 15:24:12.038543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.674 [2024-10-01 15:24:12.038566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.674 [2024-10-01 15:24:12.038599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.674 [2024-10-01 15:24:12.038632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.674 [2024-10-01 15:24:12.038660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.674 [... identical message repeated from 2024-10-01 15:24:12.038710 through 15:24:12.049878 ...] 00:06:32.678 [2024-10-01 15:24:12.049918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.678 [2024-10-01 15:24:12.049946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.678 [2024-10-01 15:24:12.049975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.678 [2024-10-01 15:24:12.050004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.678 [2024-10-01 15:24:12.050038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.678 [2024-10-01 15:24:12.050069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.678 [2024-10-01 15:24:12.050099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.678 [2024-10-01 15:24:12.050127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.678 [2024-10-01 15:24:12.050154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.678 [2024-10-01 15:24:12.050182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.678 [2024-10-01 15:24:12.050209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.678 [2024-10-01 15:24:12.050234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.678 [2024-10-01 15:24:12.050264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.678 [2024-10-01 15:24:12.050291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.678 [2024-10-01 15:24:12.050319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.678 
[2024-10-01 15:24:12.050347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.678 [2024-10-01 15:24:12.050376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.678 [2024-10-01 15:24:12.050408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.678 [2024-10-01 15:24:12.050435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.678 [2024-10-01 15:24:12.050464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.678 [2024-10-01 15:24:12.050493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.678 [2024-10-01 15:24:12.050527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.678 [2024-10-01 15:24:12.050557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.678 [2024-10-01 15:24:12.050585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.678 [2024-10-01 15:24:12.050615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.678 [2024-10-01 15:24:12.050667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.678 [2024-10-01 15:24:12.050696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.678 [2024-10-01 15:24:12.050735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.678 [2024-10-01 15:24:12.050762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.050799] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.050828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.050882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.050914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.050946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.050973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.051003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.051030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.051062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.051090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.051117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.051145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.051173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.051200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.051385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:32.679 [2024-10-01 15:24:12.051412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.051444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.051474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.051502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.051534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.051568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.051603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.051636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.051674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.051702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.051727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.051758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.051791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.051824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.051860] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.051901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.051968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.051998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.052024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.052052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.052080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.052103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.052134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.052161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.052190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.052220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.052246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.052274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.052299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.052329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.052356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.052386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.052416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.052445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.052480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.052507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.052534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.052561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.052589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.052616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.052645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.052671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.052695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 
15:24:12.052724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.052754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.052783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.052813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.052845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.052874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.052912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.052940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.052971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.053001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.053031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.053057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.053086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.053117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.053147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.053177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.053214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.053243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.053271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.053644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.679 [2024-10-01 15:24:12.053673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.053700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.053727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.053764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.053794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.053820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.053854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.053884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.053914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 
[2024-10-01 15:24:12.053944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.053972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.054002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.054033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.054060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.054088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.054116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.054145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.054177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.054207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.054268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.054297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.054355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.054384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.054419] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.054447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.054478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.054510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.054537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.054565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.054591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.054619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.054649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.054685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.054713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.054743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.054770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.054826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.054855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:32.680 [2024-10-01 15:24:12.054886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.054919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.054948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.054975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.055004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.055038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.055069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.055100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.055127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.055158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.055188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.055221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.055249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.055304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.055332] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.055363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.055395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.055426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.055455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.055484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.055510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.055539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.055568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.055596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.055622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.055742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.055771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.055799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.055827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.055855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.055881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.055918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.055947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.055975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.056004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.056028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.056055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.056079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.056110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.056138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.056171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.680 [2024-10-01 15:24:12.056201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.681 [2024-10-01 15:24:12.056647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.681 [2024-10-01 
15:24:12.056679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.681 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:32.681
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.684 [2024-10-01 15:24:12.067031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.684 [2024-10-01 15:24:12.067060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.684 [2024-10-01 15:24:12.067088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.684 [2024-10-01 15:24:12.067117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.684 [2024-10-01 15:24:12.067148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.684 [2024-10-01 15:24:12.067179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.684 [2024-10-01 15:24:12.067207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.684 [2024-10-01 15:24:12.067238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.684 [2024-10-01 15:24:12.067265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.684 [2024-10-01 15:24:12.067292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.684 [2024-10-01 15:24:12.067322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.684 [2024-10-01 15:24:12.067355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.684 [2024-10-01 15:24:12.067385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.684 [2024-10-01 15:24:12.067442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.684 
[2024-10-01 15:24:12.067471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.684 [2024-10-01 15:24:12.067501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.684 [2024-10-01 15:24:12.067531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.684 [2024-10-01 15:24:12.067563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.684 [2024-10-01 15:24:12.067592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.684 [2024-10-01 15:24:12.067619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.684 [2024-10-01 15:24:12.067653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.684 [2024-10-01 15:24:12.067683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.684 [2024-10-01 15:24:12.067715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.684 [2024-10-01 15:24:12.067745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.684 [2024-10-01 15:24:12.067773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.685 [2024-10-01 15:24:12.067802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.685 [2024-10-01 15:24:12.067831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.685 [2024-10-01 15:24:12.067855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.685 [2024-10-01 15:24:12.067881] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.685 [2024-10-01 15:24:12.067920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.685 [2024-10-01 15:24:12.068524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.685 [2024-10-01 15:24:12.068555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.685 [2024-10-01 15:24:12.068584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.685 [2024-10-01 15:24:12.068616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.685 [2024-10-01 15:24:12.068646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.685 [2024-10-01 15:24:12.068675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.685 [2024-10-01 15:24:12.068703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.685 [2024-10-01 15:24:12.068733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.685 [2024-10-01 15:24:12.068763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.685 [2024-10-01 15:24:12.068793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.685 [2024-10-01 15:24:12.068822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.685 [2024-10-01 15:24:12.068852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.685 [2024-10-01 15:24:12.068878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:32.685 [2024-10-01 15:24:12.068911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.685 [2024-10-01 15:24:12.068942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.685 [2024-10-01 15:24:12.068978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.685 [2024-10-01 15:24:12.069006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.685 [2024-10-01 15:24:12.069033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.685 [2024-10-01 15:24:12.069061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.685 [2024-10-01 15:24:12.069091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.685 [2024-10-01 15:24:12.069121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.685 [2024-10-01 15:24:12.069150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.685 [2024-10-01 15:24:12.069179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.685 [2024-10-01 15:24:12.069208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.685 [2024-10-01 15:24:12.069240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.685 [2024-10-01 15:24:12.069269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.685 [2024-10-01 15:24:12.069303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.685 [2024-10-01 15:24:12.069334] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.685 [2024-10-01 15:24:12.069371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.685 [2024-10-01 15:24:12.069403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.685 [2024-10-01 15:24:12.069451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.685 [2024-10-01 15:24:12.069482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.685 [2024-10-01 15:24:12.069512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.685 [2024-10-01 15:24:12.069539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.685 [2024-10-01 15:24:12.069575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.979 [2024-10-01 15:24:12.069605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.979 [2024-10-01 15:24:12.069634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.979 [2024-10-01 15:24:12.069673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.979 [2024-10-01 15:24:12.069703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.979 [2024-10-01 15:24:12.069736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.979 [2024-10-01 15:24:12.069765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.979 [2024-10-01 15:24:12.069793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:32.979 [2024-10-01 15:24:12.069822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.979 [2024-10-01 15:24:12.069851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.979 [2024-10-01 15:24:12.069880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.979 [2024-10-01 15:24:12.069917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.979 [2024-10-01 15:24:12.069945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.979 [2024-10-01 15:24:12.069977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.979 [2024-10-01 15:24:12.070008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.979 [2024-10-01 15:24:12.070038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.979 [2024-10-01 15:24:12.070066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.979 [2024-10-01 15:24:12.070102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.979 [2024-10-01 15:24:12.070131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.979 [2024-10-01 15:24:12.070166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.979 [2024-10-01 15:24:12.070195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.979 [2024-10-01 15:24:12.070227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.979 [2024-10-01 
15:24:12.070262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.979 [2024-10-01 15:24:12.070288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.070316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.070346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.070372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.070401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.070428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.070456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.070634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.070670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.070710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.070746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.070776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.070804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.070832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.070861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.070888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.070919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.070945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.070975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.071003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.071033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.071062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.071093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.071122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.071159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.071187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.071217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.071252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 
[2024-10-01 15:24:12.071280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.071308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.071337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.071366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.071394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.071421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.071452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.071479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.071508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.071536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.071566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.071594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.071628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.071656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.071696] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.071726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.071758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.071788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.071819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.071846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.071874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.071909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.071938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.071977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.072006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.072035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.072061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.072091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.072121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:32.980 [2024-10-01 15:24:12.072149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.072187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.072216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.072262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.072295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.072325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.072351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.072386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.072415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.072442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.072469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.072497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.072523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.073088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.073118] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.073150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.073180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.073207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.073241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.073272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.073298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.073328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.073360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.073388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.073416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.073444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.073474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.073502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.073532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.073560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.980 [2024-10-01 15:24:12.073591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.981 [2024-10-01 15:24:12.073621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.981 [2024-10-01 15:24:12.073651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.981 [2024-10-01 15:24:12.073677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.981 [2024-10-01 15:24:12.073703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.981 [2024-10-01 15:24:12.073736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.981 [2024-10-01 15:24:12.073766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.981 [2024-10-01 15:24:12.073795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.981 [2024-10-01 15:24:12.073825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.981 [2024-10-01 15:24:12.073853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.981 [2024-10-01 15:24:12.073884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.981 [2024-10-01 15:24:12.073917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.981 [2024-10-01 15:24:12.073944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.981 [2024-10-01 
15:24:12.073973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.981 [2024-10-01 
15:24:12.084390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.984 [2024-10-01 15:24:12.084419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.984 [2024-10-01 15:24:12.084989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.984 [2024-10-01 15:24:12.085021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.984 [2024-10-01 15:24:12.085053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.984 [2024-10-01 15:24:12.085083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.984 [2024-10-01 15:24:12.085112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.085142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.085174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.085203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.085235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.085263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.085297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.085326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.085358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.085388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.085418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.085445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.085481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.085511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.085537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.085565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.085597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.085625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.085652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.085681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.085716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.085747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.085778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 
[2024-10-01 15:24:12.085807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.085836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.085865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.085897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.085926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.085954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.085984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.086012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.086042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.086072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.086109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.086139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.086171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.086200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.086228] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.086256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.086286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.086316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.086345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.086379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.086407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.086435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.086466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.086500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.086530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.086560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.086589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.086629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.086667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:32.985 [2024-10-01 15:24:12.086706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.086734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.086764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.086790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.086815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.086840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.086872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.086907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.087038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.087070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.087098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.087126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.087157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.087185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.087209] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.087238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.087266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.087294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.087321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.087348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.087378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.087409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.087440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.087472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.985 [2024-10-01 15:24:12.087501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.087581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.087613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.087641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.087668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.087695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.087721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.087751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.087778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.087805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.087834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.087862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.087891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.087922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.087956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.087986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.088015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.088045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.088075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 
15:24:12.088103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.088131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.088161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.088190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.088245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.088270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.088297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.088330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.088360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.088388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.088417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.088450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.088479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.088504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.088533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.088561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.088585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.088616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.088643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.088673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.088698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.088732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.088764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.088791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.088822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.088853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.088881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.088913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.089306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 
[2024-10-01 15:24:12.089341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.089370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.089400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.089428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.089460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.089486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.089515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.089547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.089575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.089604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.089639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.089668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.089694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.089722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.089752] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.089782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.089808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.089835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.089867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.089903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.089929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.089958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.089991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.090019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.090050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.090079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.986 [2024-10-01 15:24:12.090109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.987 [2024-10-01 15:24:12.090141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.987 [2024-10-01 15:24:12.090173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:32.987 [2024-10-01 15:24:12.090200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.987 [2024-10-01 15:24:12.090235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.987 [2024-10-01 15:24:12.090263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.987 [2024-10-01 15:24:12.090294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.987 [2024-10-01 15:24:12.090322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.987 [2024-10-01 15:24:12.090386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.987 [2024-10-01 15:24:12.090415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.987 [2024-10-01 15:24:12.090443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.987 [2024-10-01 15:24:12.090472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.987 [2024-10-01 15:24:12.090501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.987 [2024-10-01 15:24:12.090528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.987 [2024-10-01 15:24:12.090556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.987 [2024-10-01 15:24:12.090586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.987 [2024-10-01 15:24:12.090615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.987 [2024-10-01 15:24:12.090641] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.987 [2024-10-01 15:24:12.090670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.987 [2024-10-01 15:24:12.090698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.987 [2024-10-01 15:24:12.090728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.987 [2024-10-01 15:24:12.090763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.987 [2024-10-01 15:24:12.090793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.987 [2024-10-01 15:24:12.090830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.987 [2024-10-01 15:24:12.090859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.987 [2024-10-01 15:24:12.090891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.987 [2024-10-01 15:24:12.090927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.987 [2024-10-01 15:24:12.090955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.987 [2024-10-01 15:24:12.090986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.987 [2024-10-01 15:24:12.091017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.987 [2024-10-01 15:24:12.091056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.987 [2024-10-01 15:24:12.091084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:32.987 [2024-10-01 15:24:12.091114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.988 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:32.991 [2024-10-01 
15:24:12.102239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.102275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.102306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.102336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.102363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.102396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.102425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.102454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.102483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.102513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.102541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.102570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.102597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.102625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.102661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.102693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.102727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.102758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.102784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.102818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.102851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.103180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.103217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.103252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.103282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.103311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.103343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.103369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.103398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 
[2024-10-01 15:24:12.103428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.103457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.103485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.103513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.103541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.103567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.103595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.103624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.103652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.103683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.103721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.103745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.103778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.103805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.103833] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.103863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.103900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.103929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.103959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.103990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.104018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.104046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.104075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.104104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.104145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.104173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.104208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.104237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.104267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:32.991 [2024-10-01 15:24:12.104294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.104323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.104352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.104380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.104409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.104439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.104468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.104496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.104532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.991 [2024-10-01 15:24:12.104563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.104599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.104630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.104655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.104685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.104717] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.104745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.104774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.104803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.104829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.104857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.104887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.104921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.104950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.104976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.105007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.105037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.105065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.105203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.105233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.105262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.105293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.105321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.105347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.105375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.105401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.105428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.105455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.105486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.105516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.105544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.105573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.105602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.105630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 
15:24:12.105657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.106090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.106124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.106154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.106182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.106212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.106245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.106274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.106304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.106330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.106358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.106389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.106420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.106448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.106482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.106510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.106542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.106571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.106624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.106652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.106706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.106739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.106771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.106799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.106829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.106856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.106892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.106942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.106968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 
[2024-10-01 15:24:12.107030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.107058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.107087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.107115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.107145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.107175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.107224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.107253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.107284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.107312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.107361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.107391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.107427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.107455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.107482] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.107510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.107538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.992 [2024-10-01 15:24:12.107565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.993 [2024-10-01 15:24:12.107593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.993 [2024-10-01 15:24:12.107623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.993 [2024-10-01 15:24:12.107652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.993 [2024-10-01 15:24:12.107678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.993 [2024-10-01 15:24:12.107708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.993 [2024-10-01 15:24:12.107735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.993 [2024-10-01 15:24:12.107761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.993 [2024-10-01 15:24:12.107789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.993 [2024-10-01 15:24:12.107817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.993 [2024-10-01 15:24:12.107845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.993 [2024-10-01 15:24:12.107872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:32.993 [2024-10-01 15:24:12.107902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.993 [2024-10-01 15:24:12.107931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.993 [2024-10-01 15:24:12.107957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.993 [2024-10-01 15:24:12.107989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.993 [2024-10-01 15:24:12.108017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.993 [2024-10-01 15:24:12.108046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.993 [2024-10-01 15:24:12.108077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.993 [2024-10-01 15:24:12.108221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.993 [2024-10-01 15:24:12.108248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.993 [2024-10-01 15:24:12.108278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.993 [2024-10-01 15:24:12.108322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.993 [2024-10-01 15:24:12.108350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.993 [2024-10-01 15:24:12.108395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.993 [2024-10-01 15:24:12.108424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.993 [2024-10-01 15:24:12.108471] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.993 [2024-10-01 15:24:12.108499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.993 [2024-10-01 15:24:12.108552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.993 [2024-10-01 15:24:12.108582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.993 [2024-10-01 15:24:12.108611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.993 [2024-10-01 15:24:12.108638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.993 [2024-10-01 15:24:12.108674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.993 [2024-10-01 15:24:12.108703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.993 [2024-10-01 15:24:12.108730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.993 [2024-10-01 15:24:12.108760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.993 [2024-10-01 15:24:12.108790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.993 [2024-10-01 15:24:12.108816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.993 [2024-10-01 15:24:12.108847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.993 [2024-10-01 15:24:12.108876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.993 [2024-10-01 15:24:12.108909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:32.993 [2024-10-01 15:24:12.108937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.996 true 00:06:32.996 [2024-10-01 15:24:12.119637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.997 [2024-10-01 15:24:12.119665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.997 [2024-10-01 15:24:12.119694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.997 [2024-10-01 15:24:12.119722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.997 [2024-10-01 15:24:12.119751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.997 [2024-10-01 15:24:12.119780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.997 [2024-10-01 15:24:12.119807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.997 [2024-10-01 15:24:12.119835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.997 [2024-10-01 15:24:12.119869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.997 [2024-10-01 15:24:12.119902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.997 [2024-10-01 15:24:12.119929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.997 [2024-10-01 15:24:12.119956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.997 [2024-10-01 15:24:12.119990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.997 [2024-10-01 15:24:12.120018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.997 [2024-10-01 15:24:12.120048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.997 
[2024-10-01 15:24:12.120078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.997 [2024-10-01 15:24:12.120109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.997 [2024-10-01 15:24:12.120136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.997 [2024-10-01 15:24:12.120166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.997 [2024-10-01 15:24:12.120194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.997 [2024-10-01 15:24:12.120230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.997 [2024-10-01 15:24:12.120260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.997 [2024-10-01 15:24:12.120293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.997 [2024-10-01 15:24:12.120319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.997 [2024-10-01 15:24:12.120350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.997 [2024-10-01 15:24:12.120378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.997 [2024-10-01 15:24:12.120412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.997 [2024-10-01 15:24:12.120439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.997 [2024-10-01 15:24:12.120467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.997 [2024-10-01 15:24:12.120495] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.997 [2024-10-01 15:24:12.120527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.997 [2024-10-01 15:24:12.120554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.997 [2024-10-01 15:24:12.120584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.997 [2024-10-01 15:24:12.120612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.997 [2024-10-01 15:24:12.120640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.997 [2024-10-01 15:24:12.120667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.997 [2024-10-01 15:24:12.120697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.997 [2024-10-01 15:24:12.120724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.997 [2024-10-01 15:24:12.120752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.997 [2024-10-01 15:24:12.120782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.997 [2024-10-01 15:24:12.120808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.997 [2024-10-01 15:24:12.120836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.997 [2024-10-01 15:24:12.120862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.997 [2024-10-01 15:24:12.120890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:32.998 [2024-10-01 15:24:12.120921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.120947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.120974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.121001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.121028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.121057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.121092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.121123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.121154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.121180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.121215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.121244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.121272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.121601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.121632] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.121658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.121685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.121714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.121746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.121784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.121813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.121848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.121887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.121921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.121955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.121990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.122020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.122058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.122082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.122114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.122141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.122166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.122194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.122221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.122247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.122277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.122303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.122338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.122370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.122398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.122430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.122454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.122484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 
15:24:12.122510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.122538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.122571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.122599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.122624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.122654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.122683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.122713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.122740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.122768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.122794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.122826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.122856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.122883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.122913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.122942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.122976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.123003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.123038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.123067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.123103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.123133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.123166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.123190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.123218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.123250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.123278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.123302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.123327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 
[2024-10-01 15:24:12.123359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.123391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.123419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.123444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.124387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.124421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.124452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.124480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.124508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.124537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.124564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.998 [2024-10-01 15:24:12.124592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999 [2024-10-01 15:24:12.124620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999 [2024-10-01 15:24:12.124648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999 [2024-10-01 15:24:12.124677] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999 [2024-10-01 15:24:12.124704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999 [2024-10-01 15:24:12.124733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999 [2024-10-01 15:24:12.124766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999 [2024-10-01 15:24:12.124793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999 [2024-10-01 15:24:12.124825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999 [2024-10-01 15:24:12.124854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999 [2024-10-01 15:24:12.124883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999 [2024-10-01 15:24:12.124912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999 [2024-10-01 15:24:12.124940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999 [2024-10-01 15:24:12.124967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999 [2024-10-01 15:24:12.124997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999 [2024-10-01 15:24:12.125023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999 [2024-10-01 15:24:12.125055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999 [2024-10-01 15:24:12.125084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:32.999 [2024-10-01 15:24:12.125110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999 [2024-10-01 15:24:12.125138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999 [2024-10-01 15:24:12.125176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999 [2024-10-01 15:24:12.125205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999 [2024-10-01 15:24:12.125235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999 [2024-10-01 15:24:12.125265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999 [2024-10-01 15:24:12.125296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999 [2024-10-01 15:24:12.125325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999 [2024-10-01 15:24:12.125355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999 [2024-10-01 15:24:12.125385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999 [2024-10-01 15:24:12.125411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999 [2024-10-01 15:24:12.125438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999 [2024-10-01 15:24:12.125466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999 [2024-10-01 15:24:12.125493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999 [2024-10-01 15:24:12.125522] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999 [2024-10-01 15:24:12.125550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999 [2024-10-01 15:24:12.125580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999 [2024-10-01 15:24:12.125612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999 [2024-10-01 15:24:12.125642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999 [2024-10-01 15:24:12.125671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999 [2024-10-01 15:24:12.125699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999 [2024-10-01 15:24:12.125732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999 [2024-10-01 15:24:12.125762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999 [2024-10-01 15:24:12.125794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999 [2024-10-01 15:24:12.125823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999 [2024-10-01 15:24:12.125852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999 [2024-10-01 15:24:12.125885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999 [2024-10-01 15:24:12.125919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999 [2024-10-01 15:24:12.125954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:32.999 [2024-10-01 15:24:12.125981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999 [2024-10-01 15:24:12.126013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999 [2024-10-01 15:24:12.126045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999 [2024-10-01 15:24:12.126079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999 [2024-10-01 15:24:12.126113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999 [2024-10-01 15:24:12.126143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999 [2024-10-01 15:24:12.126176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999 [2024-10-01 15:24:12.126205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999 [2024-10-01 15:24:12.126230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999 [2024-10-01 15:24:12.126255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999 [2024-10-01 15:24:12.126390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999 [2024-10-01 15:24:12.126425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999 [2024-10-01 15:24:12.126457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999 [2024-10-01 15:24:12.126490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999 [2024-10-01 
15:24:12.126521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:32.999
Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:33.001
[2024-10-01 15:24:12.137009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.003 [2024-10-01 15:24:12.137042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.003 [2024-10-01 15:24:12.137072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.003 [2024-10-01 15:24:12.137105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.003 [2024-10-01 15:24:12.137134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.003 [2024-10-01 15:24:12.137168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.003 [2024-10-01 15:24:12.137195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.003 [2024-10-01 15:24:12.137226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.003 [2024-10-01 15:24:12.137254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.003 [2024-10-01 15:24:12.137283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.003 [2024-10-01 15:24:12.137310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.003 [2024-10-01 15:24:12.137884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.003 [2024-10-01 15:24:12.137923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.003 [2024-10-01 15:24:12.137952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.003 [2024-10-01 15:24:12.137985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.003 
[2024-10-01 15:24:12.138014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.003 [2024-10-01 15:24:12.138046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.003 [2024-10-01 15:24:12.138078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.003 [2024-10-01 15:24:12.138114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.003 [2024-10-01 15:24:12.138141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.003 [2024-10-01 15:24:12.138168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.003 [2024-10-01 15:24:12.138193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.003 [2024-10-01 15:24:12.138219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.003 [2024-10-01 15:24:12.138246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.003 [2024-10-01 15:24:12.138278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.003 [2024-10-01 15:24:12.138307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.003 [2024-10-01 15:24:12.138331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.003 [2024-10-01 15:24:12.138363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.003 [2024-10-01 15:24:12.138395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.003 [2024-10-01 15:24:12.138425] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.003 [2024-10-01 15:24:12.138451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.003 [2024-10-01 15:24:12.138480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.003 [2024-10-01 15:24:12.138509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.003 [2024-10-01 15:24:12.138537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.003 [2024-10-01 15:24:12.138564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.003 [2024-10-01 15:24:12.138588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.003 [2024-10-01 15:24:12.138614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.003 [2024-10-01 15:24:12.138643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.003 [2024-10-01 15:24:12.138670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.138695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.138722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.138749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.138777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.138803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:33.004 [2024-10-01 15:24:12.138830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.138857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.138883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.138917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.138946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.138972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.139004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.139037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.139062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.139090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.139117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.139152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.139178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.139205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.139232] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.139264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.139289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.139319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.139353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.139382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.139409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.139437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.139465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.139493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.139529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.139559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.139593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.139619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.139654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.139682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.139710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.139837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.139863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.139891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.139921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.139948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.139972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.140004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.140033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.140064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.140091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.140120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.140146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 
15:24:12.140184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.140221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.140252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.140278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.140307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.140403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.140437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.140466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.140495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.140523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.140553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.140581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.140610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.140640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.140668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.140697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.140730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.140761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.140791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.140818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.140842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.140869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.140903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.140930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.140954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.140978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.141000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.141023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.141046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 
[2024-10-01 15:24:12.141071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.141094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.141117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.141140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.141163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.141191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.004 [2024-10-01 15:24:12.141220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.141248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.141277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.141307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.141337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.141363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.141391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.141428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.141457] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.141488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.141514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.141542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.141568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.141598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.141625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.141656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.142044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.142079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.142142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.142171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.142209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.142238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.142269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:33.005 [2024-10-01 15:24:12.142298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.142325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.142352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.142381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.142409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.142434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.142461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.142488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.142516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.142542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.142578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.142612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.142638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.142667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.142694] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.142719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.142743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.142768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.142797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.142826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.142855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.142885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.142921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.142951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.142985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.143015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.143047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.143292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.143322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.143351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.143391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.143418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.143443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.143470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.143502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.143531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.143579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.143608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.143663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.143692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.143723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.143752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 15:24:12.143781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005 [2024-10-01 
15:24:12.143811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.005
00:06:33.005 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904938
00:06:33.006 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
[identical ctrlr_bdev.c:361 nvmf_bdev_ctrlr_read_cmd error lines repeated from 15:24:12.143844 through 15:24:12.154684; duplicates elided]
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.009 [2024-10-01 15:24:12.154713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.009 [2024-10-01 15:24:12.154738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.009 [2024-10-01 15:24:12.154767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.009 [2024-10-01 15:24:12.154794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.009 [2024-10-01 15:24:12.154828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.009 [2024-10-01 15:24:12.154859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.009 [2024-10-01 15:24:12.154888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.009 [2024-10-01 15:24:12.154924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.009 [2024-10-01 15:24:12.154959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.009 [2024-10-01 15:24:12.154988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.009 [2024-10-01 15:24:12.155023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.009 [2024-10-01 15:24:12.155051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.009 [2024-10-01 15:24:12.155099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.009 [2024-10-01 15:24:12.155130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.009 
[2024-10-01 15:24:12.155158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.009 [2024-10-01 15:24:12.155186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.009 [2024-10-01 15:24:12.155214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.009 [2024-10-01 15:24:12.155244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.009 [2024-10-01 15:24:12.155274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.009 [2024-10-01 15:24:12.155304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.009 [2024-10-01 15:24:12.155333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.009 [2024-10-01 15:24:12.155361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.009 [2024-10-01 15:24:12.155393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.009 [2024-10-01 15:24:12.155422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.009 [2024-10-01 15:24:12.155452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.009 [2024-10-01 15:24:12.155481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.009 [2024-10-01 15:24:12.155512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.009 [2024-10-01 15:24:12.155540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.155568] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.155595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.155769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.155798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.155824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.155850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.155884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.155919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.155946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.155976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.156003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.156029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.156057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.156081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.156109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:33.010 [2024-10-01 15:24:12.156137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.156162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.156191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.156219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.156246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.156274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.156300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.156337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.156365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.156392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.156415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.156446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.156477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.156509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.156536] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.156573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.156611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.156643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.156674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.156703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.156732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.156762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.156791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.156820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.156848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.156873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.156906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.156936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.156963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.156993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.157018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.157042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.157072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.157099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.157128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.157155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.157184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.157217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.157247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.157277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.157307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.157336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.157366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 
15:24:12.157395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.157422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.157451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.157482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.157511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.157543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.157570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.157597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.157732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.157765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.157795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.157823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.158061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.158090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.158118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.158147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.158175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.158201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.158230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.158261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.158290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.158317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.158345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.010 [2024-10-01 15:24:12.158375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.158403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.158430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.158459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.158490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.158519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 
[2024-10-01 15:24:12.158549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.158583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.158613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.158642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.158672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.158703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.158731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.158766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.158795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.158823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.158850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.158878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.158916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.158946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.158974] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.159002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.159032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.159062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.159116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.159145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.159181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.159210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.159242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.159272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.159305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.159334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.159393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.159424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.159477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:33.011 [2024-10-01 15:24:12.159508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.159537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.159567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.159597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.159630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.159659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.159691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.159721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.159755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.159785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.159815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.159843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.159872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.160234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.160265] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.160292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.160322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.160352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.160381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.160408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.160437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.160465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.160492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.160516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.160546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.160575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.160607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.160638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.160665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.160697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.160727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.160757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.160783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.160810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.160839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.160866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.160900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.160930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.160972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.161001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.161061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.161090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 15:24:12.161126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.011 [2024-10-01 
15:24:12.161155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.012 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:33.013 [2024-10-01 15:24:12.171869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.015 [2024-10-01 15:24:12.171901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.015 [2024-10-01 15:24:12.171931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.015 [2024-10-01 15:24:12.172069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.015 [2024-10-01 15:24:12.172098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.015 [2024-10-01 15:24:12.172128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.015 [2024-10-01 15:24:12.172152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.015 [2024-10-01 15:24:12.172182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.015 [2024-10-01 15:24:12.172213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.172241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.172272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.172306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.172336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.172368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.172398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 
[2024-10-01 15:24:12.172428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.172454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.172483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.172512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.172552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.172582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.172616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.172646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.172681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.173001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.173035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.173063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.173094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.173122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.173159] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.173190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.173219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.173249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.173277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.173303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.173331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.173365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.173393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.173420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.173449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.173481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.173509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.173532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.173562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:33.016 [2024-10-01 15:24:12.173590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.173618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.173646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.173673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.173702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.173729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.173755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.173782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.173815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.173841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.173868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.173901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.173929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.173957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.173987] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.174017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.174045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.174072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.174102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.174133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.174164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.174193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.174221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.174250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.174280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.174312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.174343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.174373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.174401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.174425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.174453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.174486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.174514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.174550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.174578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.174616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.174644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.174676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.174705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.174748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.174777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.174808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.174835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 
15:24:12.174866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.175006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.175036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.175084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.016 [2024-10-01 15:24:12.175113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.175174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.175204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.175251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.175282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.175318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.175344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.175376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.175403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.175752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.175784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.175812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.175841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.175867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.175891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.175927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.175957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.175986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.176016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.176046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.176075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.176106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.176137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.176165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.176197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 
[2024-10-01 15:24:12.176222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.176255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.176283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.176312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.176341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.176369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.176399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.176426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.176454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.176484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.176523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.176553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.176588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.176620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.176664] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.176693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.176743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.176772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.176802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.176831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.176862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.176890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.176924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.176954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.176986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.177015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.177045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.177073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.177105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:33.017 [2024-10-01 15:24:12.177131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.177165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.177193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.177222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.177250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.177281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.177445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.177472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.177501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.177528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.177554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.017 [2024-10-01 15:24:12.177579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.018 [2024-10-01 15:24:12.177604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.018 [2024-10-01 15:24:12.177629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.018 [2024-10-01 15:24:12.177656] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.018 [2024-10-01 15:24:12.177683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.018 [2024-10-01 15:24:12.177707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.018 [2024-10-01 15:24:12.177733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.018 [2024-10-01 15:24:12.177760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.018 [2024-10-01 15:24:12.177794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.018 [2024-10-01 15:24:12.177821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.018 [2024-10-01 15:24:12.177847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.018 [2024-10-01 15:24:12.177875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.018 [2024-10-01 15:24:12.177909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.018 [2024-10-01 15:24:12.177937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.018 [2024-10-01 15:24:12.177965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.018 [2024-10-01 15:24:12.177992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.018 [2024-10-01 15:24:12.178019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.018 [2024-10-01 15:24:12.178049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:33.018 [2024-10-01 15:24:12.178074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.018 [2024-10-01 15:24:12.178105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.018 [2024-10-01 15:24:12.178136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.018 [2024-10-01 15:24:12.178165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.018 [2024-10-01 15:24:12.178195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.018 [2024-10-01 15:24:12.178224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.018 [2024-10-01 15:24:12.178254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.018 [2024-10-01 15:24:12.178282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.018 [2024-10-01 15:24:12.178310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.018 [2024-10-01 15:24:12.178342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.018 [2024-10-01 15:24:12.178370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.018 [2024-10-01 15:24:12.178401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.018 [2024-10-01 15:24:12.178430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.018 [2024-10-01 15:24:12.178458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.018 [2024-10-01 
15:24:12.178487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.018
[... identical *ERROR* line repeated with successive timestamps through 15:24:12.189587; duplicate entries omitted ...]
[2024-10-01 
15:24:12.189616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.021 [2024-10-01 15:24:12.189642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.021 [2024-10-01 15:24:12.189671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.021 [2024-10-01 15:24:12.189699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.021 [2024-10-01 15:24:12.189726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.021 [2024-10-01 15:24:12.189759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.021 [2024-10-01 15:24:12.189783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.021 [2024-10-01 15:24:12.189812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.021 [2024-10-01 15:24:12.189842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.021 [2024-10-01 15:24:12.189871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.021 [2024-10-01 15:24:12.189903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.021 [2024-10-01 15:24:12.189932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.021 [2024-10-01 15:24:12.189961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.021 [2024-10-01 15:24:12.189992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.190020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.190051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.190084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.190112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.190146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.190174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.190201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.190231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.190267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.190294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.190328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.190358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.190390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.190419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.190458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 
[2024-10-01 15:24:12.190488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.190519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.190546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.190578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.190606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.190637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.190664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.190694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.190723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.190755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.190784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.190814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.190848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.190888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.190921] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.190950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.190977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.191001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.191028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.191056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.191095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.191125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.191151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.191181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.191209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.191378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.191410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.191439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.191468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:33.022 [2024-10-01 15:24:12.191937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.191968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.191994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.192024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.192052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.192088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.192115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.192145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.192175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.192209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.192237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.192274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.192303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.192364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.192395] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.192452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.192482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.192512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.192542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.192577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.192604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.192631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.192660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.192689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.192715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.192744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.192771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.192807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.192844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.192882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.192915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.192944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.192971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.192998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.193027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.193055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.193084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.193109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.193138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.193168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.193197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.193225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 15:24:12.193254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.022 [2024-10-01 
15:24:12.193283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.193312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.193342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.193369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.193398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.193428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.193455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.193483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.193511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.193540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.193568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.193599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.193627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.193658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.193688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.193717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.193869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.193905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.193964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.193996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.194024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.194054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.194081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.194109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.194138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.194168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.194195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.194221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.194248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 
[2024-10-01 15:24:12.194277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.194306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.194334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.194363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.194392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.194421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.194485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.194515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.194552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.194582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.194618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.194645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.194673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.194704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.194734] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.194763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.194793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.194851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.194877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.194909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.194939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.194972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.194999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.195024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.195051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.195080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.195107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.195137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.195173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:33.023 [2024-10-01 15:24:12.195210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.195247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.195276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.195300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.195332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.195359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.195386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.195412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.195439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.195467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.195496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.195525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.195560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.195600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.195636] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.195659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.195691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.195724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.195752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.195783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.195813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.195840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.195983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.196011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.196044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.196071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.196304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.196335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.196364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:33.023 [2024-10-01 15:24:12.196393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.025 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:33.027 [2024-10-01 
15:24:12.207357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.027 [2024-10-01 15:24:12.207383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.027 [2024-10-01 15:24:12.207417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.027 [2024-10-01 15:24:12.207446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.027 [2024-10-01 15:24:12.207477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.027 [2024-10-01 15:24:12.207505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.027 [2024-10-01 15:24:12.207532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.027 [2024-10-01 15:24:12.207559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.027 [2024-10-01 15:24:12.207589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.027 [2024-10-01 15:24:12.207620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.027 [2024-10-01 15:24:12.207650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.027 [2024-10-01 15:24:12.207686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.027 [2024-10-01 15:24:12.207715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.027 [2024-10-01 15:24:12.207849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.027 [2024-10-01 15:24:12.207879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.027 [2024-10-01 15:24:12.207914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.027 [2024-10-01 15:24:12.207951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.027 [2024-10-01 15:24:12.207979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.027 [2024-10-01 15:24:12.208014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.027 [2024-10-01 15:24:12.208043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.027 [2024-10-01 15:24:12.208072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.027 [2024-10-01 15:24:12.208098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.027 [2024-10-01 15:24:12.208126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.027 [2024-10-01 15:24:12.208156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.027 [2024-10-01 15:24:12.208186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.027 [2024-10-01 15:24:12.208470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.027 [2024-10-01 15:24:12.208500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.027 [2024-10-01 15:24:12.208527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.027 [2024-10-01 15:24:12.208556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.027 
[2024-10-01 15:24:12.208584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.027 [2024-10-01 15:24:12.208613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.027 [2024-10-01 15:24:12.208640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.027 [2024-10-01 15:24:12.208666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.027 [2024-10-01 15:24:12.208695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.027 [2024-10-01 15:24:12.208719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.027 [2024-10-01 15:24:12.208749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.027 [2024-10-01 15:24:12.208778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.027 [2024-10-01 15:24:12.208812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.027 [2024-10-01 15:24:12.208840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.027 [2024-10-01 15:24:12.208869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.027 [2024-10-01 15:24:12.208903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.027 [2024-10-01 15:24:12.208937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.027 [2024-10-01 15:24:12.208966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.027 [2024-10-01 15:24:12.208995] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.027 [2024-10-01 15:24:12.209026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.027 [2024-10-01 15:24:12.209056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.027 [2024-10-01 15:24:12.209085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.027 [2024-10-01 15:24:12.209113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.027 [2024-10-01 15:24:12.209143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.027 [2024-10-01 15:24:12.209167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.027 [2024-10-01 15:24:12.209197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.027 [2024-10-01 15:24:12.209229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.027 [2024-10-01 15:24:12.209258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.027 [2024-10-01 15:24:12.209289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.027 [2024-10-01 15:24:12.209319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.027 [2024-10-01 15:24:12.209348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.027 [2024-10-01 15:24:12.209378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.027 [2024-10-01 15:24:12.209410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:33.028 [2024-10-01 15:24:12.209439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.209465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.209493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.209527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.209555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.209585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.209613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.209643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.209674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.209701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.209730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.209759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.209792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.209822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.209849] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.209881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.209917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.209948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.209978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.210005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.210034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.210062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.210089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.210119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.210149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.210176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.210205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.210235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.210266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.210300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.210331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.210456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.210485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.210514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.210546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.210632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.210660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.210689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.210716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.210748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.210774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.210805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.210834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 
15:24:12.210863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.210892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.210928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.210963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.210993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.211019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.211047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.211076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.211102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.211130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.211157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.211186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.211216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.211246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.211276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.211309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.211339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.211370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.211402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.211432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.211466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.211496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.211525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.211554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.211584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.211613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.211643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.211688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.211717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 
[2024-10-01 15:24:12.211754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.211784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.211822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.211850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.211882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.211919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.211950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.211979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.212009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.212048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.212078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.212125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.212154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.212182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.212211] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.212237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.212266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.212299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.212332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.212359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.212388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.212417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.212790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.212821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.212849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.212876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.212907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.212937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.212971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:33.028 [2024-10-01 15:24:12.212998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.213025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.213054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.213080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.213107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.028 [2024-10-01 15:24:12.213134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.029 [2024-10-01 15:24:12.213161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.029 [2024-10-01 15:24:12.213188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.029 [2024-10-01 15:24:12.213213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.029 [2024-10-01 15:24:12.213241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.029 [2024-10-01 15:24:12.213268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.029 [2024-10-01 15:24:12.213293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.029 [2024-10-01 15:24:12.213320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.029 [2024-10-01 15:24:12.213349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.029 [2024-10-01 15:24:12.213376] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.029 [2024-10-01 15:24:12.213404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.029 [2024-10-01 15:24:12.213433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.029 [2024-10-01 15:24:12.213463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.029 [2024-10-01 15:24:12.213490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.029 [2024-10-01 15:24:12.213521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.029 [2024-10-01 15:24:12.213548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.029 [2024-10-01 15:24:12.213579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.029 [2024-10-01 15:24:12.213609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.029 [2024-10-01 15:24:12.213641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.029 [2024-10-01 15:24:12.213675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.029 [2024-10-01 15:24:12.213705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.029 [2024-10-01 15:24:12.213732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.029 [2024-10-01 15:24:12.213764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.029 [2024-10-01 15:24:12.213792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:33.029 [2024-10-01 15:24:12.213822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical *ERROR* line repeated for each read command, timestamps 15:24:12.213851 through 15:24:12.225138 ...]
00:06:33.032 [2024-10-01 15:24:12.225166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:33.032 [2024-10-01 15:24:12.225194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.032 [2024-10-01 15:24:12.225225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.032 [2024-10-01 15:24:12.225256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.032 [2024-10-01 15:24:12.225285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.032 [2024-10-01 15:24:12.225320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.032 [2024-10-01 15:24:12.225350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.032 [2024-10-01 15:24:12.225378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.032 [2024-10-01 15:24:12.225406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.032 [2024-10-01 15:24:12.225435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.032 [2024-10-01 15:24:12.225462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.032 [2024-10-01 15:24:12.225492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.032 [2024-10-01 15:24:12.225519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.032 [2024-10-01 15:24:12.225548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.032 [2024-10-01 15:24:12.225579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.032 [2024-10-01 
15:24:12.225606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.032 [2024-10-01 15:24:12.225632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.032 [2024-10-01 15:24:12.225662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.032 [2024-10-01 15:24:12.225690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.032 [2024-10-01 15:24:12.225718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.032 [2024-10-01 15:24:12.225742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.225773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.225805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.225831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.225857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.225887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.225922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.225950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.225977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.226007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.226037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.226066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.226098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.226126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.226155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.226183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.226219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.226247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.226283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.226313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.226359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.226390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.226429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.226459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 
[2024-10-01 15:24:12.226496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.226524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.226551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.226581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.226716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.226746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.226773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.226800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.226829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.226858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.226885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.226919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.226948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.226982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.227017] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.227048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.227490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.227522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.227552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.227581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.227608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.227637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.227670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.227701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.227737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.227766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.227793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.227822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.227848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:33.033 [2024-10-01 15:24:12.227875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.227908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.227945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.227973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.228003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.228032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.228060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.228089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.228115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.228143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.228171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.228209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.228236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.228275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.228307] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.228337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.228365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.228400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.228428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.228458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.228487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.228516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.228543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.228573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.228605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.228633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.228660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.228689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.228714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.228741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.228770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.228798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.228825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.228851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.228879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.228909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.228933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.228965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.228993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.229025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.229052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.229081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 15:24:12.229110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.033 [2024-10-01 
15:24:12.229141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.229168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.229202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.229230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.229262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.229292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.229319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.229349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.229476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.229519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.229550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.229581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.229686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.229716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.229744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.229772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.229802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.229832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.229862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.229889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.229924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.229954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.230011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.230041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.230072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.230100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.230132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.230161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.230187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 
[2024-10-01 15:24:12.230217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.230246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.230276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.230303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.230331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.230359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.230387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.230415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.230451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.230475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.230503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.230534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.230561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.230588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.230626] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.230664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.230699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.230727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.230754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.230785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.230813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.230841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.230872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.230909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.230936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.230963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.230991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.231019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.231050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:33.034 [2024-10-01 15:24:12.231079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.231108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.231135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.231169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.231197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.231232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.231261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.231292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.231320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.231349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.231377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.231408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.231437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.231811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [2024-10-01 15:24:12.231843] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.034 [identical *ERROR* line repeated continuously from 15:24:12.231868 through 15:24:12.242423; duplicate occurrences collapsed] 00:06:33.036 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:33.038 [2024-10-01 15:24:12.242447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.242485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.242522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.242554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.242578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.242611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.242645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.242676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.242705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.242735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.242766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.242797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.242832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.242860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.242890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 
15:24:12.242924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.242951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.242976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.243008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.243035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.243062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.243091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.243120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.243157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.243183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.243213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.243242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.243270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.243298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.243328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.243355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.243381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.243506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.243555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.243585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.243616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.244071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.244104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.244133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.244167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.244194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.244219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.244243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.244274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 
[2024-10-01 15:24:12.244301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.244332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.244360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.244386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.244413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.244442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.244473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.244500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.244531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.244557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.244588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.244611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.244643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.244671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.244700] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.244730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.244760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.244786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.244815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.244847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.244887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.244918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.244947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.244977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.245009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.245038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.245067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.245094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.245123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:33.038 [2024-10-01 15:24:12.245153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.245181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.245211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.245242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.245275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.245302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.245334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.245363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.245391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.245420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.245446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.245475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.245504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.245533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.245563] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.245592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.038 [2024-10-01 15:24:12.245629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.245658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.245689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.245725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.245755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.245788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.245818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.245845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.245875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.245908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.245942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.246070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.246103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.246133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.246161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.246191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.246219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.246250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.246279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.246310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.246341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.246368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.246398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.246689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.246717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.246751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.246779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 
15:24:12.246808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.246837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.246864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.246899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.246936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.246966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.246995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.247023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.247057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.247090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.247116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.247148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.247179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.247207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.247234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.247263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.247290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.247319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.247356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.247392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.247428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.247466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.247497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.247529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.247565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.247590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.247620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.247653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.247683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 
[2024-10-01 15:24:12.247713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.247744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.247774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.247804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.247833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.247863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.247897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.247928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.247957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.247988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.248017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.248046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.248076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.248104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.248135] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.248165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.248194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.248221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.248258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.248285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.248319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.248349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.248377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.248407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.248438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.248468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.248496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.248528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.248558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:33.039 [2024-10-01 15:24:12.248588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.248616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.248750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.248779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.248806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.248835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.248928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.248964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.039 [2024-10-01 15:24:12.248993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.040 [2024-10-01 15:24:12.249022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.040 [2024-10-01 15:24:12.249049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.040 [2024-10-01 15:24:12.249076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.040 [2024-10-01 15:24:12.249105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.040 [2024-10-01 15:24:12.249138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.040 [2024-10-01 15:24:12.249164] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.040 [2024-10-01 15:24:12.249192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:06:33.043 [2024-10-01 15:24:12.259317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.259360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.259399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.259437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.259478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.259508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.259536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.259566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.259594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.259618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.259649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.259677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.259704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.259732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.259760] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.259787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.259816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.259846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.259873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.259906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.259940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.260148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.260183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.260214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.260245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.260272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.260302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.260330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.260361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.260388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.260416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.260447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.260479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.260930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.260962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.260991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.261019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.261051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.261080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.261108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.261136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.261164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.261200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 
15:24:12.261229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.261260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.261287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.261332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.261364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.261395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.261423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.261455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.261484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.261514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.261547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.261578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.261612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.261641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.261669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.261702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.261733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.261758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.261782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.261810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.261843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.261872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.261906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.261937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.261971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.262000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.262030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.262058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.262085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 
[2024-10-01 15:24:12.262113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.262137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.262169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.262195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.262223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.262250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.262277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.262308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.043 [2024-10-01 15:24:12.262340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.262367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.262395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.262426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.262453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.262492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.262516] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.262549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.262579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.262613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.262642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.262671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.262703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.262733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.262762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.262791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.262820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.262970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.263002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.263031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.263062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:33.044 [2024-10-01 15:24:12.263152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.263183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.263214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.263250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.263276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.263305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.263332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.263360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.263388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.263417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.263445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.263477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.263504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.263534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.263566] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.263595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.263621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.263653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.263681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.263708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.263737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.263766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.263799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.263829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.263859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.263891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.263925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.263955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.263983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.264012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.264042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.264085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.264113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.264137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.264169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.264202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.264235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.264269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.264298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.264330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.264359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.264389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.264416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 
15:24:12.264444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.264473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.264503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.264531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.264560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.264588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.264616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.264644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.264673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.264704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.264742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.264776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.264809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.264835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.264862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.264891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.265326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.265354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.265390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.265419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.265468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.265498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.265526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.265555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.265585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.265614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.265645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.265678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.265708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 
[2024-10-01 15:24:12.265739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.265767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.044 [2024-10-01 15:24:12.265797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.045 [2024-10-01 15:24:12.265831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.045 [2024-10-01 15:24:12.265861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.045 [2024-10-01 15:24:12.265889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.045 [2024-10-01 15:24:12.265923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.045 [2024-10-01 15:24:12.265953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.045 [2024-10-01 15:24:12.265982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.045 [2024-10-01 15:24:12.266010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.045 [2024-10-01 15:24:12.266039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.045 [2024-10-01 15:24:12.266067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.045 [2024-10-01 15:24:12.266103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.045 [2024-10-01 15:24:12.266132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.045 [2024-10-01 15:24:12.266164] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.045 [2024-10-01 15:24:12.266194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.045 [2024-10-01 15:24:12.266226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.045 [2024-10-01 15:24:12.266257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.045 [2024-10-01 15:24:12.266287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.045 [2024-10-01 15:24:12.266320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.045 [2024-10-01 15:24:12.266350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.045 [2024-10-01 15:24:12.266394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.045 [2024-10-01 15:24:12.266424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.045 [2024-10-01 15:24:12.266455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.045 [2024-10-01 15:24:12.266485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.045 [2024-10-01 15:24:12.266514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.045 [2024-10-01 15:24:12.266546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.045 [2024-10-01 15:24:12.266575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.045 [2024-10-01 15:24:12.266612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:33.045 [2024-10-01 15:24:12.266642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.045 [the preceding *ERROR* line repeats with advancing timestamps through 2024-10-01 15:24:12.277263; duplicate occurrences elided] 00:06:33.047 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
> SGL length 1 00:06:33.048 [2024-10-01 15:24:12.277298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.048 [2024-10-01 15:24:12.277327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.048 [2024-10-01 15:24:12.277358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.048 [2024-10-01 15:24:12.277387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.048 [2024-10-01 15:24:12.277416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.048 [2024-10-01 15:24:12.277443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.048 [2024-10-01 15:24:12.277473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.048 [2024-10-01 15:24:12.277504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.048 [2024-10-01 15:24:12.277540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.048 [2024-10-01 15:24:12.277575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.048 [2024-10-01 15:24:12.277604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.048 [2024-10-01 15:24:12.277634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.048 [2024-10-01 15:24:12.277661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.048 [2024-10-01 15:24:12.277688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.048 [2024-10-01 15:24:12.277718] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.048 [2024-10-01 15:24:12.277747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.048 [2024-10-01 15:24:12.277777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.048 [2024-10-01 15:24:12.277805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.048 [2024-10-01 15:24:12.277834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.048 [2024-10-01 15:24:12.277865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.048 [2024-10-01 15:24:12.277902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.048 [2024-10-01 15:24:12.277932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.048 [2024-10-01 15:24:12.277965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.048 [2024-10-01 15:24:12.277993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.048 [2024-10-01 15:24:12.278026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.048 [2024-10-01 15:24:12.278052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.048 [2024-10-01 15:24:12.278082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.048 [2024-10-01 15:24:12.278106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.048 [2024-10-01 15:24:12.278136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:33.048 [2024-10-01 15:24:12.278167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.048 [2024-10-01 15:24:12.278193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.048 [2024-10-01 15:24:12.278226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.048 [2024-10-01 15:24:12.278253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.048 [2024-10-01 15:24:12.278281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.048 [2024-10-01 15:24:12.278310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.048 [2024-10-01 15:24:12.278341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.048 [2024-10-01 15:24:12.278373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.048 [2024-10-01 15:24:12.278403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.048 [2024-10-01 15:24:12.278433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.048 [2024-10-01 15:24:12.278463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.048 [2024-10-01 15:24:12.278492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.048 [2024-10-01 15:24:12.278533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.048 [2024-10-01 15:24:12.278560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.048 [2024-10-01 
15:24:12.278593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.048 [2024-10-01 15:24:12.278622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.048 [2024-10-01 15:24:12.278651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.048 [2024-10-01 15:24:12.278679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.048 [2024-10-01 15:24:12.278708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.048 [2024-10-01 15:24:12.278736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.048 [2024-10-01 15:24:12.278761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.048 [2024-10-01 15:24:12.278796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.048 [2024-10-01 15:24:12.279436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.048 [2024-10-01 15:24:12.279467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.048 [2024-10-01 15:24:12.279497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.048 [2024-10-01 15:24:12.279525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.048 [2024-10-01 15:24:12.279553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.048 [2024-10-01 15:24:12.279583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.048 [2024-10-01 15:24:12.279610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.279641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.279669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.279697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.279727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.279757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.279786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.279823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.279852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.279887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.279920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.279949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.279979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.280008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.280052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 
[2024-10-01 15:24:12.280083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.280114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.280144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.280175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.280204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.280231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.280257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.280285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.280312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.280341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.280371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.280397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.280427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.280458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.280488] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.280516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.280552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.280582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.280612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.280640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.280669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.280700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.280728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.280765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.280792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.280825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.280855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.280887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.280917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:33.049 [2024-10-01 15:24:12.280949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.280981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.281012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.281045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.281072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.281099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.281124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.281156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.281184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.281219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.281254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.281283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.281312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.281339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.281462] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.281488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.281521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.281549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.282042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.282073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.282103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.282129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.282158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.282195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.282225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.282251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.282278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.282313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.282342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.282378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.282410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.282438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.282468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.282498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.282530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.282560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.282594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.282623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.282664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.282695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.282723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.282754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.282782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 
15:24:12.282810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.282840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.282871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.282908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.282941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.282974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.283003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.283031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.283060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.049 [2024-10-01 15:24:12.283088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.050 [2024-10-01 15:24:12.283116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.050 [2024-10-01 15:24:12.283146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.050 [2024-10-01 15:24:12.283177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.050 [2024-10-01 15:24:12.283204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.050 [2024-10-01 15:24:12.283235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.050 [2024-10-01 15:24:12.283260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.050 [2024-10-01 15:24:12.283286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.050 [2024-10-01 15:24:12.283312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.050 [2024-10-01 15:24:12.283341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.050 [2024-10-01 15:24:12.283378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.050 [2024-10-01 15:24:12.283407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.050 [2024-10-01 15:24:12.283438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.050 [2024-10-01 15:24:12.283473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.050 [2024-10-01 15:24:12.283507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.050 [2024-10-01 15:24:12.283540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.050 [2024-10-01 15:24:12.283572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.050 [2024-10-01 15:24:12.283600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.050 [2024-10-01 15:24:12.283628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.050 [2024-10-01 15:24:12.283657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.050 
[2024-10-01 15:24:12.283681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.050 [2024-10-01 15:24:12.283710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.050 [2024-10-01 15:24:12.283740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.050 [2024-10-01 15:24:12.283771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.050 [2024-10-01 15:24:12.283806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.050 [2024-10-01 15:24:12.283983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.050 [2024-10-01 15:24:12.284014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.050 [2024-10-01 15:24:12.284044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.050 [2024-10-01 15:24:12.284075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.050 [2024-10-01 15:24:12.284104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.050 [2024-10-01 15:24:12.284133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.050 [2024-10-01 15:24:12.284163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.050 [2024-10-01 15:24:12.284193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.050 [2024-10-01 15:24:12.284223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.050 [2024-10-01 15:24:12.284260] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.050 [2024-10-01 15:24:12.284290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.050 [2024-10-01 15:24:12.284336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.050 [2024-10-01 15:24:12.284366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.050 [2024-10-01 15:24:12.284395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.050 [2024-10-01 15:24:12.284424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.050 [2024-10-01 15:24:12.284455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.050 [2024-10-01 15:24:12.284486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.050 [2024-10-01 15:24:12.284514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.050 [2024-10-01 15:24:12.284541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.050 [2024-10-01 15:24:12.284570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.050 [2024-10-01 15:24:12.284603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.050 [2024-10-01 15:24:12.284633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.050 [2024-10-01 15:24:12.284670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.050 [2024-10-01 15:24:12.284699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:33.050 [2024-10-01 15:24:12.284731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.050-00:06:33.053 [2024-10-01 15:24:12.284757 - 15:24:12.295617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:06:33.053 [2024-10-01 15:24:12.295645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.053 [2024-10-01 15:24:12.295680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.053 [2024-10-01 15:24:12.295710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.053 [2024-10-01 15:24:12.295740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.053 [2024-10-01 15:24:12.295769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.053 [2024-10-01 15:24:12.295798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.053 [2024-10-01 15:24:12.295825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.053 [2024-10-01 15:24:12.295851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.053 [2024-10-01 15:24:12.296007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.053 [2024-10-01 15:24:12.296036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.053 [2024-10-01 15:24:12.296065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.053 [2024-10-01 15:24:12.296092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.053 [2024-10-01 15:24:12.296119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.053 [2024-10-01 15:24:12.296148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.053 [2024-10-01 15:24:12.296176] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.053 [2024-10-01 15:24:12.296206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.053 [2024-10-01 15:24:12.296237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.053 [2024-10-01 15:24:12.296264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.053 [2024-10-01 15:24:12.296295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.053 [2024-10-01 15:24:12.296325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.053 [2024-10-01 15:24:12.296358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.053 [2024-10-01 15:24:12.296388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.053 [2024-10-01 15:24:12.296418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.296451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.296479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.296510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.296539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.296567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.296626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.296655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.296690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.296719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.296747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.296776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.296805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.296834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.296863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.296900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.296928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.296959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.296988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.297021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.297050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 
15:24:12.297080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.297108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.297158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.297189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.297219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.297247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.297278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.297307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.297337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.297367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.297396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.297424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.297453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.297483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.297507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.297540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.297915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.297948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.297986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.298015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.298044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.298070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.298099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.298131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.298163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.298191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.298221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.298251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.298281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 
[2024-10-01 15:24:12.298316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.298340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.298363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.298391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.298419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.298446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.298475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.298505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.298536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.298566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.298589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.298621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.298649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.298676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.298705] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.298733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.298763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.298794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.298824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.298859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.298888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.298922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.298952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.298981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.299007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.299035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.299065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.299097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.299135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:33.054 [2024-10-01 15:24:12.299165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.299196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.299220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.299246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.299283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.299311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.299340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.299367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.299396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.299424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.299460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.299490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.299517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.299547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.299576] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.299605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.299639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.054 [2024-10-01 15:24:12.299668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.055 [2024-10-01 15:24:12.299701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.055 [2024-10-01 15:24:12.299733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.055 [2024-10-01 15:24:12.299762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.055 [2024-10-01 15:24:12.299791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.055 [2024-10-01 15:24:12.299925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.055 [2024-10-01 15:24:12.299956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.055 [2024-10-01 15:24:12.299987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.055 [2024-10-01 15:24:12.300015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.055 [2024-10-01 15:24:12.300042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.055 [2024-10-01 15:24:12.300070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.055 [2024-10-01 15:24:12.300103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:33.055 [2024-10-01 15:24:12.300131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.055 [2024-10-01 15:24:12.300163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.055 [2024-10-01 15:24:12.300191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.055 [2024-10-01 15:24:12.300225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.055 [2024-10-01 15:24:12.300254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.055 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.055 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.055 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.055 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.055 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.353 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.353 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.353 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.353 [2024-10-01 15:24:12.477764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.353 [2024-10-01 15:24:12.477808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.353 [2024-10-01 15:24:12.477835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.353 [2024-10-01 15:24:12.477864] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.353 [2024-10-01 15:24:12.477899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.353 [2024-10-01 15:24:12.477930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.353 [2024-10-01 15:24:12.477959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.353 [2024-10-01 15:24:12.477989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.353 [2024-10-01 15:24:12.478019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.353 [2024-10-01 15:24:12.478049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.353 [2024-10-01 15:24:12.478078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.353 [2024-10-01 15:24:12.478107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.353 [2024-10-01 15:24:12.478137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.353 [2024-10-01 15:24:12.478168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.354 [2024-10-01 15:24:12.478195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.354 [2024-10-01 15:24:12.478228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.354 [2024-10-01 15:24:12.478255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.354 [2024-10-01 15:24:12.478285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:33.354 [2024-10-01 15:24:12.478314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.354 [2024-10-01 15:24:12.478341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.354 [2024-10-01 15:24:12.478370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.354 [2024-10-01 15:24:12.478400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.354 [2024-10-01 15:24:12.478435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.354 [2024-10-01 15:24:12.478465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.354 [2024-10-01 15:24:12.478492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.354 [2024-10-01 15:24:12.478520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.354 [2024-10-01 15:24:12.478566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.354 [2024-10-01 15:24:12.478599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.354 [2024-10-01 15:24:12.478635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.354 [2024-10-01 15:24:12.478663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.354 [2024-10-01 15:24:12.478695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.354 [2024-10-01 15:24:12.478725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.354 [2024-10-01 
15:24:12.478760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.354 [2024-10-01 15:24:12.478791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.354 [2024-10-01 15:24:12.478827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.354 [2024-10-01 15:24:12.478857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.354 [2024-10-01 15:24:12.478891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.354 [2024-10-01 15:24:12.478922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.354 [2024-10-01 15:24:12.478950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.354 [2024-10-01 15:24:12.478980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.354 [2024-10-01 15:24:12.479009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.354 [2024-10-01 15:24:12.479038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.354 [2024-10-01 15:24:12.479069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.354 [2024-10-01 15:24:12.479098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.354 [2024-10-01 15:24:12.479133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.354 [2024-10-01 15:24:12.479159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.354 [2024-10-01 15:24:12.479188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.354 [2024-10-01 15:24:12.479216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.354 [2024-10-01 15:24:12.489980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.357 [2024-10-01 15:24:12.490008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.357 [2024-10-01 15:24:12.490040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.357 [2024-10-01 15:24:12.490069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.357 [2024-10-01 15:24:12.490094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.357 [2024-10-01 15:24:12.490121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.357 [2024-10-01 15:24:12.490146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.357 [2024-10-01 15:24:12.490169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.357 [2024-10-01 15:24:12.490197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.357 [2024-10-01 15:24:12.490226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.357 [2024-10-01 15:24:12.490257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.357 [2024-10-01 15:24:12.490288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.357 [2024-10-01 15:24:12.490319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.357 [2024-10-01 15:24:12.490350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.357 [2024-10-01 15:24:12.490378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.357 
[2024-10-01 15:24:12.490408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.357 [2024-10-01 15:24:12.490438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.357 [2024-10-01 15:24:12.490465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.357 [2024-10-01 15:24:12.490493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.357 [2024-10-01 15:24:12.490525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.357 [2024-10-01 15:24:12.490553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.357 [2024-10-01 15:24:12.490638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.357 [2024-10-01 15:24:12.490667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.357 [2024-10-01 15:24:12.490696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.357 [2024-10-01 15:24:12.490727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.357 [2024-10-01 15:24:12.490755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.357 [2024-10-01 15:24:12.490826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.357 [2024-10-01 15:24:12.490857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.357 [2024-10-01 15:24:12.490887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.357 [2024-10-01 15:24:12.490919] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.357 [2024-10-01 15:24:12.490950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.357 [2024-10-01 15:24:12.490980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.357 [2024-10-01 15:24:12.491123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.357 [2024-10-01 15:24:12.491155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.357 [2024-10-01 15:24:12.491182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.357 [2024-10-01 15:24:12.491212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.357 [2024-10-01 15:24:12.491243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.491271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.491295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.491327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.491356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.491383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.491416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.491452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:33.358 [2024-10-01 15:24:12.491480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.491511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.491538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.491565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.491593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.491622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.491648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.491673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.491702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.491731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.491758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.491786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.491814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.491867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.491900] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.491931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.491961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.491989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.492016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.492046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.492074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.492102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.492129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.492163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.492190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.492230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.492257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.492292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.492323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.492351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.492375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.492408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.492441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.492466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.492494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.492522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.492547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.492573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.492601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.492627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.492653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.492678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.492707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 
15:24:12.492741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.492771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.492798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.492828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.492858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.492888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.492921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.492951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.492978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.493332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.493361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.493386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.493415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.493443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.493470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.493497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.493527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.493556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.493584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.493609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.493636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.493664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.493720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.493749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.493776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.493804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.358 [2024-10-01 15:24:12.493831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.493860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.493898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 
[2024-10-01 15:24:12.493930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.493956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.493983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.494010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.494038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.494065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.494089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.494122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.494149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.494186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.494222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.494256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.494292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.494324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.494352] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.494382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.494408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.494438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.494470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.494502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.494531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.494560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.494589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.494616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.494643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.494680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.494706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.494735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.494764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:33.359 [2024-10-01 15:24:12.494794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.494820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.494848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.494875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.494909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.494938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.494967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.494994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.495026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.495056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.495085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.495114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.495147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.495178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.495526] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.495553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.495580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.495608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.495638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.495666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.495694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.495728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.495756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.495783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.495811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.495839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.495867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.495903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.495928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.495956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.495985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.496018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.496046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.496076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.496108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.496140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.496169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.496200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.496226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.496255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.496281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.496308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 15:24:12.496335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 [2024-10-01 
15:24:12.496362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.359 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:33.361 [2024-10-01 15:24:12.507019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.507042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.507072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.507102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.507130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.507157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.507184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.507213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.507244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.507272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.507295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.507323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.507357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.507400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.507439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 
[2024-10-01 15:24:12.507477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.507510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.507536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.507566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.507596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.507627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.507657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.507686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.507717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.507747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.507776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.507806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.507834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.507866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.507899] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.507931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.507959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.507987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.508020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.508051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.508094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.508123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.508168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.508196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.508230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.508260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.508295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.508324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.508354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:33.363 [2024-10-01 15:24:12.508382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.508415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.508440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.508468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.508498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.508528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.508884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.508920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.508951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.508980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.509016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.509046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.509073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.509102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.509130] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.509157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.509186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.509215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.509243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.509274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.509311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.509345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.509374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.509403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.509431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.509455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.509483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.509515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.509549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.509578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.509606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.509634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.509666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.509692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.509720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.509749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.509774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.509802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.509825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.363 [2024-10-01 15:24:12.509857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.509888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.509921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.509951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 
15:24:12.509981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.510011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.510044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.510073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.510104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.510132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.510161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.510189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.510217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.510246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.510273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.510302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.510330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.510384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.510411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.510461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.510491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.510526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.510554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.510584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.510613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.510648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.510677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.510704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.510733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.510763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.510794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:33.364 [2024-10-01 15:24:12.511151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 
15:24:12.511183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.511210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.511238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.511265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.511294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.511323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.511348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.511382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.511409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:33.364 [2024-10-01 15:24:12.511437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.511479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.511509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.511538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 
15:24:12.511566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.511608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.511635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.511659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.511686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.511710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.511738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.511771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.511803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.511837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.511868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.511905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.511930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.511956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.511985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.512014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.512041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.512071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.512099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.512123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.512151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.512177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.512203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.512230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.512256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.512282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.512310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.512339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.512369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 
[2024-10-01 15:24:12.512394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.512422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.512448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.512475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.512505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.512538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.512565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.512593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.512623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.512651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.512679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.512706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.512736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.512764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.512794] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.512825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.512860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.512887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.364 [2024-10-01 15:24:12.512921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.365 [2024-10-01 15:24:12.512950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.365 [2024-10-01 15:24:12.513299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.365 [2024-10-01 15:24:12.513335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.365 [2024-10-01 15:24:12.513364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.365 [2024-10-01 15:24:12.513415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.365 [2024-10-01 15:24:12.513442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.365 [2024-10-01 15:24:12.513473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.365 [2024-10-01 15:24:12.513502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.365 [2024-10-01 15:24:12.513538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.365 [2024-10-01 15:24:12.513567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:33.365 [2024-10-01 15:24:12.513602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.365 [... identical *ERROR* line repeated continuously from 15:24:12.513632 through 15:24:12.523992 (log timestamps 00:06:33.365-00:06:33.368); duplicate entries omitted ...]
> SGL length 1 00:06:33.368 [2024-10-01 15:24:12.524021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.368 [2024-10-01 15:24:12.524052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.368 [2024-10-01 15:24:12.524081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.368 [2024-10-01 15:24:12.524117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.368 [2024-10-01 15:24:12.524145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.368 [2024-10-01 15:24:12.524187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.368 [2024-10-01 15:24:12.524537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.368 [2024-10-01 15:24:12.524570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.368 [2024-10-01 15:24:12.524600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.368 [2024-10-01 15:24:12.524627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.368 [2024-10-01 15:24:12.524668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.368 [2024-10-01 15:24:12.524710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.368 [2024-10-01 15:24:12.524740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.368 [2024-10-01 15:24:12.524773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.368 [2024-10-01 15:24:12.524803] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.368 [2024-10-01 15:24:12.524835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.368 [2024-10-01 15:24:12.524866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.368 [2024-10-01 15:24:12.524899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.368 [2024-10-01 15:24:12.524933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.368 [2024-10-01 15:24:12.524963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.368 [2024-10-01 15:24:12.525002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.368 [2024-10-01 15:24:12.525032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.368 [2024-10-01 15:24:12.525068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.368 [2024-10-01 15:24:12.525099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.368 [2024-10-01 15:24:12.525126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.368 [2024-10-01 15:24:12.525161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.368 [2024-10-01 15:24:12.525201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.368 [2024-10-01 15:24:12.525232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.368 [2024-10-01 15:24:12.525263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:33.368 [2024-10-01 15:24:12.525287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.368 [2024-10-01 15:24:12.525318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.368 [2024-10-01 15:24:12.525353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.368 [2024-10-01 15:24:12.525384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.368 [2024-10-01 15:24:12.525413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.368 [2024-10-01 15:24:12.525440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.368 [2024-10-01 15:24:12.525468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.368 [2024-10-01 15:24:12.525498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.368 [2024-10-01 15:24:12.525527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.368 [2024-10-01 15:24:12.525556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.368 [2024-10-01 15:24:12.525584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.368 [2024-10-01 15:24:12.525612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.368 [2024-10-01 15:24:12.525639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.368 [2024-10-01 15:24:12.525662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.368 [2024-10-01 
15:24:12.525692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.368 [2024-10-01 15:24:12.525723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.368 [2024-10-01 15:24:12.525757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.368 [2024-10-01 15:24:12.525792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.368 [2024-10-01 15:24:12.525822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.368 [2024-10-01 15:24:12.525850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.368 [2024-10-01 15:24:12.525879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.368 [2024-10-01 15:24:12.525915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.368 [2024-10-01 15:24:12.525943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.368 [2024-10-01 15:24:12.525973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.526003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.526031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.526063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.526096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.526122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.526155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.526184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.526220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.526250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.526285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.526314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.526350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.526378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.526413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.526441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.526470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.526832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.526871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.526902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 
[2024-10-01 15:24:12.526927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.526957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.526985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.527011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.527037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.527066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.527092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.527120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.527146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.527183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.527215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.527244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.527271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.527303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.527332] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.527362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.527395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.527421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.527452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.527482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.527506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.527536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.527565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.527594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.527624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.527655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.527687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.527714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.527744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:33.369 [2024-10-01 15:24:12.527774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.527803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.527835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.527865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.527899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.527929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.527958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.527985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.528015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.528043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.528069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.528096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.528126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.528156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.528183] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.528211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.528239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.528268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.528297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.528324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.528352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.528382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.528411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.528442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.528472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.528505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.528536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.528571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.528602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.528636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.528667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.528694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.529236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.529267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.529296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.529322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.529358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.529392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.529424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.529456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.369 [2024-10-01 15:24:12.529483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.370 [2024-10-01 15:24:12.529514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.370 [2024-10-01 15:24:12.529546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.370 [2024-10-01 
15:24:12.529574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.370 [2024-10-01 15:24:12.529604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.370 [2024-10-01 15:24:12.529632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.370 [2024-10-01 15:24:12.529666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.370 [2024-10-01 15:24:12.529693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.370 [2024-10-01 15:24:12.529724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.370 [2024-10-01 15:24:12.529754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.370 [2024-10-01 15:24:12.529782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.370 [2024-10-01 15:24:12.529810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.370 [2024-10-01 15:24:12.529843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.370 [2024-10-01 15:24:12.529872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.370 [2024-10-01 15:24:12.529910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.370 [2024-10-01 15:24:12.529940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.370 [2024-10-01 15:24:12.529972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.370 [2024-10-01 15:24:12.530001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.370 [2024-10-01 15:24:12.530031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.370 [2024-10-01 15:24:12.530059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.370 [2024-10-01 15:24:12.530087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.370 [2024-10-01 15:24:12.530115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.370 [2024-10-01 15:24:12.530144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.370 [2024-10-01 15:24:12.530174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.370 [2024-10-01 15:24:12.530202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.370 [2024-10-01 15:24:12.530232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.370 [2024-10-01 15:24:12.530260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.370 [2024-10-01 15:24:12.530319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.370 [2024-10-01 15:24:12.530350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.370 [2024-10-01 15:24:12.530383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.370 [2024-10-01 15:24:12.530414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.370 [2024-10-01 15:24:12.530442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.370 
[2024-10-01 15:24:12.530468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.370 [2024-10-01 15:24:12.530494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.370 [2024-10-01 15:24:12.530523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.370 [2024-10-01 15:24:12.530551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.370 [2024-10-01 15:24:12.530581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.370 [2024-10-01 15:24:12.530609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.370 [2024-10-01 15:24:12.530639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.370 [2024-10-01 15:24:12.530674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.370 [2024-10-01 15:24:12.530704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.370 [2024-10-01 15:24:12.530739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.370 [2024-10-01 15:24:12.530767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.370 [2024-10-01 15:24:12.530794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.370 [2024-10-01 15:24:12.530825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.370 [2024-10-01 15:24:12.530852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.370 [2024-10-01 15:24:12.530878] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.370 [2024-10-01 15:24:12.530911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.372 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:33.373 [2024-10-01 15:24:12.541362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:06:33.373 [2024-10-01 15:24:12.541392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.373 [2024-10-01 15:24:12.541423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.373 [2024-10-01 15:24:12.541451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.373 [2024-10-01 15:24:12.541478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.373 [2024-10-01 15:24:12.541513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.373 [2024-10-01 15:24:12.541543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.373 [2024-10-01 15:24:12.541575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.373 [2024-10-01 15:24:12.541605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.373 [2024-10-01 15:24:12.541633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.373 [2024-10-01 15:24:12.541664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.373 [2024-10-01 15:24:12.541691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.373 [2024-10-01 15:24:12.541722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.373 [2024-10-01 15:24:12.541753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.373 [2024-10-01 15:24:12.541783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.373 [2024-10-01 15:24:12.541818] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.373 [2024-10-01 15:24:12.541846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.373 [2024-10-01 15:24:12.541877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.373 [2024-10-01 15:24:12.541909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.541937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.541964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.541994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.542028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.542056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.542083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.542115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.542141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.542165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.542194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.542230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.542261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.542291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.542320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.542348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.542375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.542405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.542434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.542464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.542494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.542527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.542556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.542585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.542952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.542988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 
15:24:12.543018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.543052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.543082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.543116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.543145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.543175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.543202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.543233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.543262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.543294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.543322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.543350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.543378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.543410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.543438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.543472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.543501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.543531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.543560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.543590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.543620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.543648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.543676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.543701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.543730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.543756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.543780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.543811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.543841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 
[2024-10-01 15:24:12.543870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.543905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.543943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.543977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.544006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.544035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.544065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.544098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.544132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.544161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.544192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.544220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.544250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.544280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.544310] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.544339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.544369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.544399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.544431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.544461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.544489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.544518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.544548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.544576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.544606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.544637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.544666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.544695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.544727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:33.374 [2024-10-01 15:24:12.544757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.544786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.544818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.545211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.545242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.545269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.545298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.545327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.545356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.545384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.374 [2024-10-01 15:24:12.545416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.545444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.545471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.545500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.545532] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.545562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.545595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.545624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.545655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.545684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.545714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.545743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.545774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.545805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.545833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.545863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.545899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.545949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.545976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.546013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.546041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.546076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.546106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.546138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.546164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.546197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.546225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.546255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.546284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.546312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.546344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.546376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.546407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 
15:24:12.546440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.546471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.546499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.546526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.546550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.546583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.546613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.546640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.546668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.546694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.546721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.546755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.546787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.546814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.546843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.546872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.546906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.546932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.546959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.546987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.547014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.547043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.547073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.547102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.547441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.547473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.547505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.547536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.547565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 
[2024-10-01 15:24:12.547593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.547621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.547652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.547680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.547707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.547735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.547765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.547796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.547834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.547863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.547899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.547929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.547959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.547985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.375 [2024-10-01 15:24:12.548015] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.379 [2024-10-01 15:24:12.558876] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.379 [2024-10-01 15:24:12.558908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.379 [2024-10-01 15:24:12.558935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.379 [2024-10-01 15:24:12.558958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.379 [2024-10-01 15:24:12.558986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.379 [2024-10-01 15:24:12.559020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.379 [2024-10-01 15:24:12.559049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.379 [2024-10-01 15:24:12.559093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.379 [2024-10-01 15:24:12.559124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.379 [2024-10-01 15:24:12.559160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.379 [2024-10-01 15:24:12.559186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.379 [2024-10-01 15:24:12.559234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.379 [2024-10-01 15:24:12.559428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.379 [2024-10-01 15:24:12.559462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.379 [2024-10-01 15:24:12.559492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:33.379 [2024-10-01 15:24:12.559533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.379 [2024-10-01 15:24:12.559562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.379 [2024-10-01 15:24:12.559597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.379 [2024-10-01 15:24:12.559624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.379 [2024-10-01 15:24:12.559655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.379 [2024-10-01 15:24:12.559686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.379 [2024-10-01 15:24:12.559720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.379 [2024-10-01 15:24:12.559750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.379 [2024-10-01 15:24:12.559782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.379 [2024-10-01 15:24:12.559809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.379 [2024-10-01 15:24:12.559837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.379 [2024-10-01 15:24:12.559873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.379 [2024-10-01 15:24:12.559904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.379 [2024-10-01 15:24:12.559934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.379 [2024-10-01 15:24:12.559961] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.379 [2024-10-01 15:24:12.559994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.379 [2024-10-01 15:24:12.560024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.379 [2024-10-01 15:24:12.560052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.379 [2024-10-01 15:24:12.560084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.379 [2024-10-01 15:24:12.560109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.379 [2024-10-01 15:24:12.560138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.379 [2024-10-01 15:24:12.560166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.379 [2024-10-01 15:24:12.560191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.379 [2024-10-01 15:24:12.560219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.379 [2024-10-01 15:24:12.560248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.379 [2024-10-01 15:24:12.560274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.379 [2024-10-01 15:24:12.560299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.379 [2024-10-01 15:24:12.560329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.379 [2024-10-01 15:24:12.560358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:33.379 [2024-10-01 15:24:12.560385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.379 [2024-10-01 15:24:12.560412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.379 [2024-10-01 15:24:12.560439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.379 [2024-10-01 15:24:12.560468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.560493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.560520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.560548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.560578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.560609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.560641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.560670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.560703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.560732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.560762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 
15:24:12.560793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.560822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.560850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.560879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.560913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.560945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.560977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.561006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.561043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.561072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.561136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.561165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.561193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.561221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.561250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.561277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.561309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.561340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.561909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.561947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.561977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.562037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.562067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.562104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.562134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.562170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.562200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.562230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.562258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 
[2024-10-01 15:24:12.562288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.562315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.562346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.562374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.562406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.562432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.562463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.562492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.562523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.562554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.562583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.562609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.562636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.562664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.562690] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.562716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.562742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.562768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.562797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.562826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.562858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.562891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.562922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.562950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.562978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.563007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.563033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.563062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.563091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:33.380 [2024-10-01 15:24:12.563119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.563143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.563177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.563204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.563231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.563257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.563286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.563315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.563341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.563369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.563399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.563430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.563456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.563485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.563513] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.563543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.563571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.563600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.563630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.563664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.563696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.563721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.563748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.563778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.563925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.563956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.380 [2024-10-01 15:24:12.563984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 [2024-10-01 15:24:12.564012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 [2024-10-01 15:24:12.564066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:33.381 [2024-10-01 15:24:12.564095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 [2024-10-01 15:24:12.564130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 [2024-10-01 15:24:12.564157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 [2024-10-01 15:24:12.564185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 [2024-10-01 15:24:12.564212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 [2024-10-01 15:24:12.564253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 [2024-10-01 15:24:12.564282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 [2024-10-01 15:24:12.564309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 [2024-10-01 15:24:12.564338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 [2024-10-01 15:24:12.564369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 [2024-10-01 15:24:12.564397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 [2024-10-01 15:24:12.564674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 [2024-10-01 15:24:12.564704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 [2024-10-01 15:24:12.564732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 [2024-10-01 
15:24:12.564759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 [2024-10-01 15:24:12.564789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 [2024-10-01 15:24:12.564817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 [2024-10-01 15:24:12.564845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 [2024-10-01 15:24:12.564874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 [2024-10-01 15:24:12.564908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 [2024-10-01 15:24:12.564937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 [2024-10-01 15:24:12.564964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 [2024-10-01 15:24:12.564994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 [2024-10-01 15:24:12.565022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 [2024-10-01 15:24:12.565051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 [2024-10-01 15:24:12.565081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 [2024-10-01 15:24:12.565109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 [2024-10-01 15:24:12.565138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 [2024-10-01 15:24:12.565168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 [2024-10-01 15:24:12.565196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 [2024-10-01 15:24:12.565220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 [2024-10-01 15:24:12.565243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 [2024-10-01 15:24:12.565267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 [2024-10-01 15:24:12.565290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 [2024-10-01 15:24:12.565313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 [2024-10-01 15:24:12.565337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 [2024-10-01 15:24:12.565366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 [2024-10-01 15:24:12.565397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 [2024-10-01 15:24:12.565423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 [2024-10-01 15:24:12.565453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 [2024-10-01 15:24:12.565485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 [2024-10-01 15:24:12.565516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 [2024-10-01 15:24:12.565547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 
[2024-10-01 15:24:12.565578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 [2024-10-01 15:24:12.565608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 [2024-10-01 15:24:12.565635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 [2024-10-01 15:24:12.565664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 [2024-10-01 15:24:12.565694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 [2024-10-01 15:24:12.565723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 [2024-10-01 15:24:12.565755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 [2024-10-01 15:24:12.565785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 [2024-10-01 15:24:12.565814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 [2024-10-01 15:24:12.565843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 [2024-10-01 15:24:12.565870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 [2024-10-01 15:24:12.565901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 [2024-10-01 15:24:12.565931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 [2024-10-01 15:24:12.565959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 [2024-10-01 15:24:12.566011] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.381 [2024-10-01 15:24:12.566039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.384 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:33.384 [2024-10-01 15:24:12.575969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:06:33.384 [2024-10-01 15:24:12.575999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.384 [2024-10-01 15:24:12.576026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.384 [2024-10-01 15:24:12.576054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.384 [2024-10-01 15:24:12.576080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.384 [2024-10-01 15:24:12.576109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.384 [2024-10-01 15:24:12.576138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.384 [2024-10-01 15:24:12.576168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.384 [2024-10-01 15:24:12.576198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.384 [2024-10-01 15:24:12.576224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.384 [2024-10-01 15:24:12.576255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.384 [2024-10-01 15:24:12.576282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.384 [2024-10-01 15:24:12.576313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.384 [2024-10-01 15:24:12.576344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.384 [2024-10-01 15:24:12.576371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.384 [2024-10-01 15:24:12.576402] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.384 [2024-10-01 15:24:12.576436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.384 [2024-10-01 15:24:12.576470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.384 [2024-10-01 15:24:12.576503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.384 [2024-10-01 15:24:12.576534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.384 [2024-10-01 15:24:12.576564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.384 [2024-10-01 15:24:12.576591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.384 [2024-10-01 15:24:12.576619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.384 [2024-10-01 15:24:12.576646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.384 [2024-10-01 15:24:12.576678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.384 [2024-10-01 15:24:12.576705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.384 [2024-10-01 15:24:12.576731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.384 [2024-10-01 15:24:12.576761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.384 [2024-10-01 15:24:12.576792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.384 [2024-10-01 15:24:12.576822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:33.384 [2024-10-01 15:24:12.576888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.384 [2024-10-01 15:24:12.576923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.384 [2024-10-01 15:24:12.576952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.576979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.577007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.577038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.577072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.577100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.577129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.577156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.577183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.577217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.577244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.577270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 
15:24:12.577298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.577325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.577354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.577383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.577411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.577440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.577469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.577503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.577532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.577567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.577593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.577624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.577652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.577684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.577714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.577747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.577777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.577807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.578144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.578175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.578206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.578237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.578266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.578293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.578323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.578351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.578378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.578404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.578431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 
[2024-10-01 15:24:12.578455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.578479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.578509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.578538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.578566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.578596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.578630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.578659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.578690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.578722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.578752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.578782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.578812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.578842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.578870] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.578904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.578935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.578962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.578994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.579023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.579060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.579090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.579121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.579150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.579180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.579210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.579236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.579265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.579293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:33.385 [2024-10-01 15:24:12.579326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.579362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.579389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.579416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.579448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.579473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.579504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.579536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.579564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.579593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.579622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.579653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.579680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.579708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.579741] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.579773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.579801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.579829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.579858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.579889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.579920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.579949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.579979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.580008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.580585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.385 [2024-10-01 15:24:12.580616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.580646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.580676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.580704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.580735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.580767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.580794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.580824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.580852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.580882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.580916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.580957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.580985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.581008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.581041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.581069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.581098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.581127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 
15:24:12.581164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.581203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.581242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.581278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.581310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.581336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.581362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.581392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.581419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.581445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.581471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.581499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.581528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.581561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.581590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.581619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.581650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.581681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.581712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.581745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.581773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.581802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.581830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.581861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.581888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.581921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.581952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.581983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.582010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 
[2024-10-01 15:24:12.582040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.582070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.582098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.582131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.582158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.582188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.582218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.582274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.582303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.582362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.582395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.582442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.582472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.582501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.582531] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.582564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.582698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.582728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.582762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.582789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.582830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.582860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.582889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.582929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.582959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.582995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.583024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.583056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.386 [2024-10-01 15:24:12.583086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:33.386 [2024-10-01 15:24:12.583115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
> SGL length 1 00:06:33.390 [2024-10-01 15:24:12.593785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.593815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.593843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.593879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.593912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.593944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.593972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.594000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.594028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.594057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.594089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.594118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.594150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.594177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.594212] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.594242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.594273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.594302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.594437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.594465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.594494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.594521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.594550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.594579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.594607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.594635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.594662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.594696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.594725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.594752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.594780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.594807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.594836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.594859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.594890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.595323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.595351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.595380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.595418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.595445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.595474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.595505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.595533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 
15:24:12.595561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.595590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.595617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.595640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.595673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.595709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.595748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.595785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.595814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.595845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.595874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.595913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.595944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.595972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.595999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.596028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.596057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.596087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.596116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.596146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.596177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.596207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.596232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.596263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.596295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.596327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.596355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.596386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.596415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 
[2024-10-01 15:24:12.596444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.596475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.596506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.596535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.596566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.596595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.596626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.596655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.596686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.596715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.596745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.390 [2024-10-01 15:24:12.596775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.596803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.596834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.596863] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.596890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.596924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.596974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.597003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.597039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.597071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.597106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.597136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.597170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.597197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.597228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.597260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.597390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.597428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:33.391 [2024-10-01 15:24:12.597455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.597482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.597512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.597545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.597574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.597603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.597631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.597659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.597689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.597716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.597745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.597775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.597802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.597834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.598237] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.598268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.598296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.598327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.598355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.598383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.598416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.598446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.598480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.598508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.598536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.598564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.598593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.598621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.598651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.598683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.598710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.598737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.598765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.598791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.598818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.598848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.598878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.598910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.598937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.598965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.598993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.599024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.599053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 
15:24:12.599087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.599118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.599162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.599194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.599223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.599250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.599282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.599312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.599343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.599378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.599417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.599446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.599474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.599500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.599526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.599554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.599583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.599614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.599646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.599675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.599706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.599734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.599762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.599791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.391 [2024-10-01 15:24:12.599820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.392 [2024-10-01 15:24:12.599850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.392 [2024-10-01 15:24:12.599879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.392 [2024-10-01 15:24:12.599911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.392 [2024-10-01 15:24:12.599940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.392 
[2024-10-01 15:24:12.599970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.392 [2024-10-01 15:24:12.599994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.392 [2024-10-01 15:24:12.600021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.392 [2024-10-01 15:24:12.600049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.392 [2024-10-01 15:24:12.600078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.392 [2024-10-01 15:24:12.600106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.392 [2024-10-01 15:24:12.600414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.392 [2024-10-01 15:24:12.600446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.392 [2024-10-01 15:24:12.600478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.392 [2024-10-01 15:24:12.600504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.392 [2024-10-01 15:24:12.600536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.392 [2024-10-01 15:24:12.600564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.392 [2024-10-01 15:24:12.600593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.392 [2024-10-01 15:24:12.600627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.392 [2024-10-01 15:24:12.600657] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.392 [2024-10-01 15:24:12.600688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.394 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
> SGL length 1 00:06:33.395 [2024-10-01 15:24:12.611386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.395 [2024-10-01 15:24:12.611415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.395 [2024-10-01 15:24:12.611443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.395 [2024-10-01 15:24:12.611471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.395 [2024-10-01 15:24:12.611499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.395 [2024-10-01 15:24:12.611527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.395 [2024-10-01 15:24:12.611870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.395 [2024-10-01 15:24:12.611906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.395 [2024-10-01 15:24:12.611936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.395 [2024-10-01 15:24:12.611966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.395 [2024-10-01 15:24:12.611995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.395 [2024-10-01 15:24:12.612024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.395 [2024-10-01 15:24:12.612053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.395 [2024-10-01 15:24:12.612081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.395 [2024-10-01 15:24:12.612107] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.395 [2024-10-01 15:24:12.612137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.395 [2024-10-01 15:24:12.612172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.395 [2024-10-01 15:24:12.612203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.395 [2024-10-01 15:24:12.612233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.395 [2024-10-01 15:24:12.612264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.395 [2024-10-01 15:24:12.612293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.395 [2024-10-01 15:24:12.612323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.395 [2024-10-01 15:24:12.612352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.395 [2024-10-01 15:24:12.612389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.395 [2024-10-01 15:24:12.612417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.395 [2024-10-01 15:24:12.612454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.395 [2024-10-01 15:24:12.612483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.395 [2024-10-01 15:24:12.612509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.395 [2024-10-01 15:24:12.612538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:33.395 [2024-10-01 15:24:12.612566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.395 [2024-10-01 15:24:12.612593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.395 [2024-10-01 15:24:12.612624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.395 [2024-10-01 15:24:12.612656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.395 [2024-10-01 15:24:12.612685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.395 [2024-10-01 15:24:12.612719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.395 [2024-10-01 15:24:12.612750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.395 [2024-10-01 15:24:12.612779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.395 [2024-10-01 15:24:12.612807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.395 [2024-10-01 15:24:12.612834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.395 [2024-10-01 15:24:12.612863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.395 [2024-10-01 15:24:12.612892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.395 [2024-10-01 15:24:12.612926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.395 [2024-10-01 15:24:12.612955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.395 [2024-10-01 
15:24:12.612984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.395 [2024-10-01 15:24:12.613014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.395 [2024-10-01 15:24:12.613046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.395 [2024-10-01 15:24:12.613083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.395 [2024-10-01 15:24:12.613110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.395 [2024-10-01 15:24:12.613139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.395 [2024-10-01 15:24:12.613170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.395 [2024-10-01 15:24:12.613196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.395 [2024-10-01 15:24:12.613225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.395 [2024-10-01 15:24:12.613250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.395 [2024-10-01 15:24:12.613276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.395 [2024-10-01 15:24:12.613304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.395 [2024-10-01 15:24:12.613333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.395 [2024-10-01 15:24:12.613362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.613390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.613421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.613449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.613475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.613507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.613541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.613569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.613599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.613629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.613658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.613691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.613720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.613750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.614539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.614570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 
[2024-10-01 15:24:12.614604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.614636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.614666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.614697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.614724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.614755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.614784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.614816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.614844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.614880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.614912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.614941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.614968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.614996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.615029] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.615056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.615086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.615117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.615148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.615174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.615199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.615233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.615263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.615294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.615323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.615355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.615388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.615415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.615443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:33.396 [2024-10-01 15:24:12.615470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.615493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.615525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.615555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.615584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.615609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.615636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.615663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.615694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.615724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.615748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.615772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.615796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.615826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.615855] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.615885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.615919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.615951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.615982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.616015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.616044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.616074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.616104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.616133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.616165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.616194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.616223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.616253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.616283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.616309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.616337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.616367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.616510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.616544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.616571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.616616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.616644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.616674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.616702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.616729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.616766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.396 [2024-10-01 15:24:12.616796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.397 [2024-10-01 15:24:12.616839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.397 [2024-10-01 
15:24:12.616871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.397 [2024-10-01 15:24:12.616905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.397 [2024-10-01 15:24:12.616936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.397 [2024-10-01 15:24:12.616963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.397 [2024-10-01 15:24:12.616994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.397 [2024-10-01 15:24:12.617023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.397 [2024-10-01 15:24:12.617052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.397 [2024-10-01 15:24:12.617081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.397 [2024-10-01 15:24:12.617109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.397 [2024-10-01 15:24:12.617138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.397 [2024-10-01 15:24:12.617172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.397 [2024-10-01 15:24:12.617202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.397 [2024-10-01 15:24:12.617231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.397 [2024-10-01 15:24:12.617260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.397 [2024-10-01 15:24:12.617292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.397 [2024-10-01 15:24:12.617326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.397 [2024-10-01 15:24:12.617356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.397 [2024-10-01 15:24:12.617386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.397 [2024-10-01 15:24:12.617416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.397 [2024-10-01 15:24:12.617445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.397 [2024-10-01 15:24:12.617486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.397 [2024-10-01 15:24:12.617515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.397 [2024-10-01 15:24:12.617564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.397 [2024-10-01 15:24:12.617594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.397 [2024-10-01 15:24:12.617629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.397 [2024-10-01 15:24:12.617659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.397 [2024-10-01 15:24:12.617687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.397 [2024-10-01 15:24:12.617714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.397 [2024-10-01 15:24:12.617741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.397 
[2024-10-01 15:24:12.617769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.397 [2024-10-01 15:24:12.617799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.397 [2024-10-01 15:24:12.617838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.397 [2024-10-01 15:24:12.617867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.397 [2024-10-01 15:24:12.617900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.397 [2024-10-01 15:24:12.617931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.397 [2024-10-01 15:24:12.617965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.397 [2024-10-01 15:24:12.617998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.397 [2024-10-01 15:24:12.618021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.397 [2024-10-01 15:24:12.618050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.397 [2024-10-01 15:24:12.618083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.397 [2024-10-01 15:24:12.618123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.397 [2024-10-01 15:24:12.618160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.397 [2024-10-01 15:24:12.618201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.397 [2024-10-01 15:24:12.618236] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.397 [2024-10-01 15:24:12.618276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.400 [2024-10-01 15:24:12.629165] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.400 [2024-10-01 15:24:12.629204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.400 [2024-10-01 15:24:12.629234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.400 [2024-10-01 15:24:12.629263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.400 [2024-10-01 15:24:12.629297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.400 [2024-10-01 15:24:12.629325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.400 [2024-10-01 15:24:12.629358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.400 [2024-10-01 15:24:12.629388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.400 [2024-10-01 15:24:12.629417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.400 [2024-10-01 15:24:12.629446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.400 [2024-10-01 15:24:12.629473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.400 [2024-10-01 15:24:12.629502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.400 [2024-10-01 15:24:12.629529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.629557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.629585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:33.401 [2024-10-01 15:24:12.629621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.629650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.629680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.629711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.629743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.629773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.629802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.629829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.629858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.630215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.630248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.630281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.630310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.630343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.630369] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.630396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.630426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.630455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.630483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.630520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.630552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.630580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.630610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.630639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.630674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.630704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.630738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.630768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.630801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.630830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.630860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.630888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.630921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.630954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.630982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.631029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.631060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.631090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.631120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.631150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.631183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.631212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.631241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 
15:24:12.631273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.631308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.631336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.631370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.631400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.631444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.631473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.631502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.631532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.631562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.631595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.631623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.631657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.631685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.631727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.631756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.631787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.631817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.631846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.631878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.631909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.631957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.631986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.632022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.632053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.632085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.632114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.632148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.632177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 
[2024-10-01 15:24:12.632207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.632537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.632568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.632594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.632625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.401 [2024-10-01 15:24:12.632653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.632686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.632714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.632742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.632771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.632800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.632829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.632857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.632898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.632925] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.632949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.632977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.633007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.633036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.633066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.633097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.633122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.633148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.633175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.633204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.633234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.633260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.633290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.633317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:33.402 [2024-10-01 15:24:12.633346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.633375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.633407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.633438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.633467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.633497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.633527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.633557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.633586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.633617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.633642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.633672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.633701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.633726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.633757] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.633782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.633807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.633832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.633857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.633881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.633909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.633933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.633956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.633983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.634015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.634045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.634071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.634100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.634128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.634155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.634185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.634214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.634246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.634274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.634306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.634924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.634955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.634986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.635014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.635052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.635081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.635111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.635140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 
15:24:12.635168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.635197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.635224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.635261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.635289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.635326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.635357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.635385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.635412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.635439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.635474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.635504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.635544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.635574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.635604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.635636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.635667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.635697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.635727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.635758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.635787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.635819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.635846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.635880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.635915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.635943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.635979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.636007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.402 [2024-10-01 15:24:12.636037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.403 
[2024-10-01 15:24:12.636066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.403 [2024-10-01 15:24:12.636096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.403 [2024-10-01 15:24:12.636124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.403 [2024-10-01 15:24:12.636158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.403 [2024-10-01 15:24:12.636186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.403 [2024-10-01 15:24:12.636215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.403 [2024-10-01 15:24:12.636246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.403 [2024-10-01 15:24:12.636276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.403 [2024-10-01 15:24:12.636305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.403 [2024-10-01 15:24:12.636334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.403 [2024-10-01 15:24:12.636367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.403 [2024-10-01 15:24:12.636397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.403 [2024-10-01 15:24:12.636426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.403 [2024-10-01 15:24:12.636455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.403 [2024-10-01 15:24:12.636482] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.406 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.406 [2024-10-01 15:24:12.646888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.406 [2024-10-01 15:24:12.646922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.406 [2024-10-01 15:24:12.646950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.406 [2024-10-01 15:24:12.646978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.406 [2024-10-01 15:24:12.647006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.406 [2024-10-01 15:24:12.647032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.406 [2024-10-01 15:24:12.647060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.406 [2024-10-01 15:24:12.647088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.406 [2024-10-01 15:24:12.647117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.406 [2024-10-01 15:24:12.647152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.406 [2024-10-01 15:24:12.647185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.406 [2024-10-01 15:24:12.647213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.406 [2024-10-01 15:24:12.647242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.406 [2024-10-01 15:24:12.647267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:33.406 [2024-10-01 15:24:12.647300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.406 [2024-10-01 15:24:12.647329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.406 [2024-10-01 15:24:12.647359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.406 [2024-10-01 15:24:12.647388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.406 [2024-10-01 15:24:12.647415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.406 [2024-10-01 15:24:12.647442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.406 [2024-10-01 15:24:12.647477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.406 [2024-10-01 15:24:12.647506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.406 [2024-10-01 15:24:12.647541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.406 [2024-10-01 15:24:12.647570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.406 [2024-10-01 15:24:12.647600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.406 [2024-10-01 15:24:12.647628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.406 [2024-10-01 15:24:12.647657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.406 [2024-10-01 15:24:12.647685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.406 [2024-10-01 15:24:12.647713] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.406 [2024-10-01 15:24:12.647749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.406 [2024-10-01 15:24:12.647780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.406 [2024-10-01 15:24:12.647811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.406 [2024-10-01 15:24:12.647840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.406 [2024-10-01 15:24:12.647869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.406 [2024-10-01 15:24:12.647903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.406 [2024-10-01 15:24:12.647933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.406 [2024-10-01 15:24:12.647961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.406 [2024-10-01 15:24:12.647987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.406 [2024-10-01 15:24:12.648016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.406 [2024-10-01 15:24:12.648050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.406 [2024-10-01 15:24:12.648080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.406 [2024-10-01 15:24:12.648114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.406 [2024-10-01 15:24:12.648244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:33.406 [2024-10-01 15:24:12.648274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.406 [2024-10-01 15:24:12.648304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.406 [2024-10-01 15:24:12.648335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.406 [2024-10-01 15:24:12.648363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.406 [2024-10-01 15:24:12.648391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.406 [2024-10-01 15:24:12.648421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.406 [2024-10-01 15:24:12.648450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.406 [2024-10-01 15:24:12.648488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.406 [2024-10-01 15:24:12.648518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.406 [2024-10-01 15:24:12.648547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.406 [2024-10-01 15:24:12.648576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.406 [2024-10-01 15:24:12.648605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.648642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.648673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 
15:24:12.648702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.648983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.649014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.649043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.649070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.649098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.649127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.649154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.649182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.649217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.649242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.649276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.649312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.649348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.649388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.649420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.649451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.649479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.649508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.649538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.649567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.649593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.649628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.649658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.649689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.649717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.649747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.649777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.649810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 
[2024-10-01 15:24:12.649839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.649863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.649897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.649931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.649959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.649989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.650018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.650046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.650075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.650103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.650144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.650172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.650203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.650232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.650261] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.650289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.650317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.650352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.650379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.650414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.650441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.650470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.650500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.650551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.650581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.650613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.650641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.650668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.650696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:33.407 [2024-10-01 15:24:12.650726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.650761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.650796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.650825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.650848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.650876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.650912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.651130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.651164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.651192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.651223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.651253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.651281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.651312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.651342] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.651371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.651400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.651431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.651460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.651492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.651534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.651563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.651608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.651636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.651674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.651700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.651728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.651756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.651785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.651815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.651842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.651871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.651905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.651943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.651972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.407 [2024-10-01 15:24:12.652005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.408 [2024-10-01 15:24:12.652033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.408 [2024-10-01 15:24:12.652067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.408 [2024-10-01 15:24:12.652100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.408 [2024-10-01 15:24:12.652134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.408 [2024-10-01 15:24:12.652161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.408 [2024-10-01 15:24:12.652197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.408 [2024-10-01 15:24:12.652226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.408 [2024-10-01 
15:24:12.652256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.408 [2024-10-01 15:24:12.652290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.408 [2024-10-01 15:24:12.652319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.408 [2024-10-01 15:24:12.652363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.408 [2024-10-01 15:24:12.652391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.408 [2024-10-01 15:24:12.652437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.408 [2024-10-01 15:24:12.652468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.408 [2024-10-01 15:24:12.652499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.408 [2024-10-01 15:24:12.652527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.408 [2024-10-01 15:24:12.652564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.408 [2024-10-01 15:24:12.652597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.408 [2024-10-01 15:24:12.652625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.408 [2024-10-01 15:24:12.652651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.408 [2024-10-01 15:24:12.652680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.408 [2024-10-01 15:24:12.652708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.408 [2024-10-01 15:24:12.652735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.408 [2024-10-01 15:24:12.652763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.408 [2024-10-01 15:24:12.652790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.408 [2024-10-01 15:24:12.652819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.408 [2024-10-01 15:24:12.652849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.408 [2024-10-01 15:24:12.652878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.408 [2024-10-01 15:24:12.652909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.408 [2024-10-01 15:24:12.652937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.408 [2024-10-01 15:24:12.652963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.408 [2024-10-01 15:24:12.652987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.408 [2024-10-01 15:24:12.653017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.408 [2024-10-01 15:24:12.653047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.408 [2024-10-01 15:24:12.653425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.408 [2024-10-01 15:24:12.653457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.408 
[2024-10-01 15:24:12.653484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.408 [2024-10-01 15:24:12.653512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.408 [2024-10-01 15:24:12.653541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.408 [2024-10-01 15:24:12.653574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.408 [2024-10-01 15:24:12.653606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.408 [2024-10-01 15:24:12.653635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.408 [2024-10-01 15:24:12.653665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.408 [2024-10-01 15:24:12.653695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.408 [2024-10-01 15:24:12.653724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.408 [2024-10-01 15:24:12.653753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.408 [2024-10-01 15:24:12.653782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.408 [2024-10-01 15:24:12.653815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.408 [2024-10-01 15:24:12.653844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.408 [2024-10-01 15:24:12.653871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.408 [2024-10-01 15:24:12.653904] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.408 [2024-10-01 15:24:12.653936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:06:33.410 true
[2024-10-01 15:24:12.664494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.411 [2024-10-01 15:24:12.664522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.411 [2024-10-01 15:24:12.664550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.411 [2024-10-01 15:24:12.664586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.411 [2024-10-01 15:24:12.664617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.411 [2024-10-01 15:24:12.664647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.411 [2024-10-01 15:24:12.664677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.411 [2024-10-01 15:24:12.664706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.411 [2024-10-01 15:24:12.664734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.411 [2024-10-01 15:24:12.664763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.411 [2024-10-01 15:24:12.664790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.411 [2024-10-01 15:24:12.664818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.411 [2024-10-01 15:24:12.664850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.411 [2024-10-01 15:24:12.665203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.411 [2024-10-01 15:24:12.665235] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.411 [2024-10-01 15:24:12.665275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.411 [2024-10-01 15:24:12.665313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.411 [2024-10-01 15:24:12.665337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.411 [2024-10-01 15:24:12.665364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.411 [2024-10-01 15:24:12.665398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.411 [2024-10-01 15:24:12.665432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.411 [2024-10-01 15:24:12.665462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.411 [2024-10-01 15:24:12.665498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.411 [2024-10-01 15:24:12.665533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.411 [2024-10-01 15:24:12.665562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.411 [2024-10-01 15:24:12.665593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.411 [2024-10-01 15:24:12.665623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.411 [2024-10-01 15:24:12.665651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.411 [2024-10-01 15:24:12.665681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:33.412 [2024-10-01 15:24:12.665710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.665740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.665769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.665799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.665829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.665858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.665887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.665922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.665951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.665983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.666009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.666039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.666064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.666087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.666111] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.666134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.666158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.666181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.666205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.666228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.666252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.666274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.666298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.666321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.666345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.666369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.666393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.666417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.666441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.666464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.666494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.666524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.666553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.666579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.666607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.666637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.666665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.666693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.666722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.666755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.666789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.666817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.666847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 
15:24:12.666879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.666913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.666939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.666966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.667329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.667360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.667387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.667415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.667445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.667474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.667503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.667533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.667559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.667586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.667616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.667645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.667674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.667717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.667745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.667774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.667804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.667833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.667862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.667899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.667933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.667962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.667988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.668016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.668048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 
[2024-10-01 15:24:12.668080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.668109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.668138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.668168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.668197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.668227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.668254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.668282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.668308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.668340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.668373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.668404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.668434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.668461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.668488] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.668514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.668544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.668569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.668599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.668626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.668651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.668681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.412 [2024-10-01 15:24:12.668708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.668735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.668762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.668794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.668830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.668869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.668905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:33.413 [2024-10-01 15:24:12.668932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.668960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.668994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.669027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.669055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.669086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.669114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.669138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.669165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.669189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.669812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.669842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.669871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.669908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.669935] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.669964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.669995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.670046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.670077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.670106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.670136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.670168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.670198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.670227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.670256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.670287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.670315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.670342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.670373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.670397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.670430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.670457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.670486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.670513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.670539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.670567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.670594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.670631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.670664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.670697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.670731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.670761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.670787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 
15:24:12.670817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.670853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.670892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.670925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.670953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.670979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.671006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.671036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.671065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.671097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.671126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.671156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.671186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.671218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.671246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 [2024-10-01 15:24:12.671281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.413 
Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:33.416 [2024-10-01 15:24:12.681674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:06:33.416 [2024-10-01 15:24:12.681702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.416 [2024-10-01 15:24:12.681766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.416 [2024-10-01 15:24:12.681797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.416 [2024-10-01 15:24:12.681824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.416 [2024-10-01 15:24:12.681854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.681884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.681918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.681947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.681975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.682004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.682030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.682057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.682091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.682125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.682151] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.682185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.682211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.682238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.682265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.682296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.682325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.682353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.682383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.682417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.682446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.682474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.682505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.682531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.682556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:33.417 [2024-10-01 15:24:12.682582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.682607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.682636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.682662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.682693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.682721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.682750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.682781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.682810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.682842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.682872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.682916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.682947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.682978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.683007] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.683040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.683068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.683099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.683129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.683493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.683524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.683551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.683581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.683608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.683648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.683678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.683738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.683766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.683799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.683828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.683857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.683887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.683919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.683946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.683972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.684004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.684033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.684069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.684105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.684137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.684169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.684200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.684227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 
15:24:12.684253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.684280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.684310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.684338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.684362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.684393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.684421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.684449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.684478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.684506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.684534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.684562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.684589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.684621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.684656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.684683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.684711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.684735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.684770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.684802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.684833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.684864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.684897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.684927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.684958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.684989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.417 [2024-10-01 15:24:12.685019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.418 [2024-10-01 15:24:12.685053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.418 [2024-10-01 15:24:12.685085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.418 
[2024-10-01 15:24:12.685114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.418 [2024-10-01 15:24:12.685146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.418 [2024-10-01 15:24:12.685177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.418 [2024-10-01 15:24:12.685205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.418 [2024-10-01 15:24:12.685247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.418 [2024-10-01 15:24:12.685275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.418 [2024-10-01 15:24:12.685309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.418 [2024-10-01 15:24:12.685338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.418 [2024-10-01 15:24:12.685374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.418 [2024-10-01 15:24:12.685403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.418 [2024-10-01 15:24:12.685434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.418 [2024-10-01 15:24:12.685568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.418 [2024-10-01 15:24:12.685597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.418 [2024-10-01 15:24:12.685624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.418 [2024-10-01 15:24:12.685651] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.418 [2024-10-01 15:24:12.685684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.418 [2024-10-01 15:24:12.685714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.418 [2024-10-01 15:24:12.685750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.418 [2024-10-01 15:24:12.685781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.418 [2024-10-01 15:24:12.685818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.418 [2024-10-01 15:24:12.685849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.418 [2024-10-01 15:24:12.685878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.418 [2024-10-01 15:24:12.685918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.418 [2024-10-01 15:24:12.685949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.418 [2024-10-01 15:24:12.685979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.418 [2024-10-01 15:24:12.686008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.418 [2024-10-01 15:24:12.686036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.418 [2024-10-01 15:24:12.686529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.418 [2024-10-01 15:24:12.686560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:33.418 [2024-10-01 15:24:12.686590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.418 [2024-10-01 15:24:12.686619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.418 [2024-10-01 15:24:12.686648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.418 [2024-10-01 15:24:12.686675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.418 [2024-10-01 15:24:12.686703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.418 [2024-10-01 15:24:12.686738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.418 [2024-10-01 15:24:12.686767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.418 [2024-10-01 15:24:12.686798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.418 [2024-10-01 15:24:12.686827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.418 [2024-10-01 15:24:12.686853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.418 [2024-10-01 15:24:12.686882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.418 [2024-10-01 15:24:12.686910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.418 [2024-10-01 15:24:12.686942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.418 [2024-10-01 15:24:12.686967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.418 [2024-10-01 15:24:12.686998] 
00:06:33.418 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904938
00:06:33.418 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
> SGL length 1 00:06:33.419 [2024-10-01 15:24:12.690092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.419 [2024-10-01 15:24:12.690121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.419 [2024-10-01 15:24:12.690148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.419 [2024-10-01 15:24:12.690178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.419 [2024-10-01 15:24:12.690208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.419 [2024-10-01 15:24:12.690235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.419 [2024-10-01 15:24:12.690262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.419 [2024-10-01 15:24:12.690291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.419 [2024-10-01 15:24:12.690320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.419 [2024-10-01 15:24:12.690350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.419 [2024-10-01 15:24:12.690380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.419 [2024-10-01 15:24:12.690409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.419 [2024-10-01 15:24:12.690769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.419 [2024-10-01 15:24:12.690800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.419 [2024-10-01 15:24:12.690828] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.419 [2024-10-01 15:24:12.690857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.419 [2024-10-01 15:24:12.690886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.419 [2024-10-01 15:24:12.690918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.419 [2024-10-01 15:24:12.690951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.419 [2024-10-01 15:24:12.690978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.419 [2024-10-01 15:24:12.691009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.419 [2024-10-01 15:24:12.691038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.419 [2024-10-01 15:24:12.691067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.419 [2024-10-01 15:24:12.691096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.419 [2024-10-01 15:24:12.691131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.419 [2024-10-01 15:24:12.691158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.419 [2024-10-01 15:24:12.691203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.419 [2024-10-01 15:24:12.691233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.419 [2024-10-01 15:24:12.691264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:33.419 [2024-10-01 15:24:12.691295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.419 [2024-10-01 15:24:12.691323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.419 [2024-10-01 15:24:12.691359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.419 [2024-10-01 15:24:12.691387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.419 [2024-10-01 15:24:12.691422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.419 [2024-10-01 15:24:12.691450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.419 [2024-10-01 15:24:12.691481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.419 [2024-10-01 15:24:12.691510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.419 [2024-10-01 15:24:12.691534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.691562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.691592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.691620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.691651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.691677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 
15:24:12.691707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.691744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.691776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.691805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.691832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.691870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.691910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.691936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.691964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.691997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.692031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.692060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.692091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.692118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.692146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.692176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.692204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.692233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.692262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.692291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.692314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.692345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.692371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.692401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.692431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.692459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.692488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.692516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.692548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 
[2024-10-01 15:24:12.692580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.692612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.692641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.692673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.693239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.693281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.693310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.693372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.693401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.693442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.693475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.693512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.693543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.693579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.693611] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.693638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.693666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.693696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.693724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.693753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.693780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.693807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.693835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.693863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.693899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.693926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.693964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.693994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.694030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:33.420 [2024-10-01 15:24:12.694058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.694099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.694125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.694154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.694184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.694211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.694236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.694263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.694291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.694325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.694361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.694391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.694419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.694446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.694474] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.694505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.694529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.694561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.694592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.694620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.694655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.694695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.694735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.694769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.694798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.694826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.694853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.694881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.420 [2024-10-01 15:24:12.694910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:33.421 [2024-10-01 15:24:12.694943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.421 [2024-10-01 15:24:12.694971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.421 [2024-10-01 15:24:12.695000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.421 [2024-10-01 15:24:12.695031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.421 [2024-10-01 15:24:12.695063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.421 [2024-10-01 15:24:12.695092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.421 [2024-10-01 15:24:12.695120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.421 [2024-10-01 15:24:12.695151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.421 [2024-10-01 15:24:12.695181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.421 [2024-10-01 15:24:12.695208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.421 [2024-10-01 15:24:12.695347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.421 [2024-10-01 15:24:12.695376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.421 [2024-10-01 15:24:12.695429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.421 [2024-10-01 15:24:12.695457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.421 [2024-10-01 
15:24:12.695491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.421 [2024-10-01 15:24:12.695519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.421 [2024-10-01 15:24:12.695551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.421 [2024-10-01 15:24:12.695581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.421 [2024-10-01 15:24:12.695612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.421 [2024-10-01 15:24:12.695643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.421 [2024-10-01 15:24:12.695669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.421 [2024-10-01 15:24:12.695705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.421 [2024-10-01 15:24:12.695733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.421 [2024-10-01 15:24:12.695762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.421 [2024-10-01 15:24:12.695791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.421 [2024-10-01 15:24:12.695835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.421 [2024-10-01 15:24:12.696134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.421 [2024-10-01 15:24:12.696170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.421 [2024-10-01 15:24:12.696199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.421 [2024-10-01 15:24:12.696234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.421 [2024-10-01 15:24:12.696263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.421 [2024-10-01 15:24:12.696301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.421 [2024-10-01 15:24:12.696328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.421 [2024-10-01 15:24:12.696356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.421 [2024-10-01 15:24:12.696386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.421 [2024-10-01 15:24:12.696418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.421 [2024-10-01 15:24:12.696446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.421 [2024-10-01 15:24:12.696473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.421 [2024-10-01 15:24:12.696501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.421 [2024-10-01 15:24:12.696532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.421 [2024-10-01 15:24:12.696564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.421 [2024-10-01 15:24:12.696588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.421 [2024-10-01 15:24:12.696616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.421 
[2024-10-01 15:24:12.696647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.421 [2024-10-01 15:24:12.696682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.421 [2024-10-01 15:24:12.696718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.421 [2024-10-01 15:24:12.696748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.421 [2024-10-01 15:24:12.696780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.421 [2024-10-01 15:24:12.696810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.421 [2024-10-01 15:24:12.696840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.421 [2024-10-01 15:24:12.696868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.421 [2024-10-01 15:24:12.696900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.421 [2024-10-01 15:24:12.696929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.421 [2024-10-01 15:24:12.696956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.421 [2024-10-01 15:24:12.696997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.421 [2024-10-01 15:24:12.697031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.421 [2024-10-01 15:24:12.697060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.421 [2024-10-01 15:24:12.697086] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.421 [2024-10-01 15:24:12.697114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.424 [2024-10-01 15:24:12.707664] ctrlr_bdev.c:
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.424 [2024-10-01 15:24:12.707695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.424 [2024-10-01 15:24:12.707727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.424 [2024-10-01 15:24:12.707757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.424 [2024-10-01 15:24:12.707787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.424 [2024-10-01 15:24:12.707814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.424 [2024-10-01 15:24:12.707849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.424 [2024-10-01 15:24:12.707881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.424 [2024-10-01 15:24:12.707912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.424 [2024-10-01 15:24:12.707943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.424 [2024-10-01 15:24:12.707972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.424 [2024-10-01 15:24:12.708007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.424 [2024-10-01 15:24:12.708038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.424 [2024-10-01 15:24:12.708073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.708103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:33.425 [2024-10-01 15:24:12.708131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.708160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.708189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.708220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.708254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.708284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.708313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.708343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.708369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.708397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.708426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.708456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.708487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.708522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.708556] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.708584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.708608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.708774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.708803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.708840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.708875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.708914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.708944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.708972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.709000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.709027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.709052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.709083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.709110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.709137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.709166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.709195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.709226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.709902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.709941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.709969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.710013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.710042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.710075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.710105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.710139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.710172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.710201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 
15:24:12.710230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.710257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.710285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.710316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.710343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.710380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.710404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.710434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.710462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.710494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.710524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.710555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.710583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.710612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.710640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.710669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.710698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.710736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.710766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.710798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.710826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.710859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.710886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.710920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.710949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.710984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.711013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.711044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.711072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 
[2024-10-01 15:24:12.711102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.711134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.711163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.711194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.711223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.711251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.711277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.711306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.711335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.711367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.711395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.711423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.711452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.711479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.711506] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.711535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.711563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.711593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.711622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.711652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.711681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.711712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.425 [2024-10-01 15:24:12.711742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.711768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.711797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.711941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.711972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.712025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.712055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:33.426 [2024-10-01 15:24:12.712090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.712120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.712150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.712189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.712223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.712257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.712289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.712319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.712346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.712374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.712405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.712438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.712466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.712497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.712527] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.712555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.712583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.712610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.712639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.712666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.712693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.712722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.712756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.712785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.712822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.712850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.712900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.712929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.712955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.712983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.713010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.713048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.713076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.713104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.713131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.713166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.713194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.713226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.713257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.713287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.713312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.713338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.713366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 
15:24:12.713689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.713717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.713746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.713773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.713799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.713827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.713853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.713881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.713910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.713937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.713964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.713990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.714015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.714040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.714068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.714095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.714121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.714158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.714189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.714219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.714251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.714279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.714307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.714339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.714369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.714399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.714429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.714462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.714491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 
[2024-10-01 15:24:12.714521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.714553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.714576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.714608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.714640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.714670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.714704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.714736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.714767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.714797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.714828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.714856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.714887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.714924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.426 [2024-10-01 15:24:12.714954] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.427 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:33.430 [2024-10-01 15:24:12.724914] ctrlr_bdev.c:
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.724943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.724972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.725327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.725359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.725389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.725420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.725448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.725478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.725507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.725533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.725565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.725593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.725622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.725651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:33.430 [2024-10-01 15:24:12.725678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.725706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.725733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.725763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.725791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.725821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.725858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.725889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.725924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.725951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.725980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.726036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.726068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.726101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.726129] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.726158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.726189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.726217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.726249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.726273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.726303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.726332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.726362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.726390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.726418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.726444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.726473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.726500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.726538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.726573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.726605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.726632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.726660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.726689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.726723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.726757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.726785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.726816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.726844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.726873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.726905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.726934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.726963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 
15:24:12.726989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.727015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.727038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.727070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.727100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.727134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.727163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.727194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.727225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.727836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.727871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.727904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.727968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.727997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.728034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.728063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.728096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.728124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.728177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.728207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.728241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.728271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.728303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.728333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.728366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.728396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.430 [2024-10-01 15:24:12.728432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.728464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.728496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 
[2024-10-01 15:24:12.728525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.728556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.728583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.728612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.728640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.728668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.728696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.728724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.728752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.728783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.728815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.728844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.728874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.728904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.728932] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.728963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.728993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.729024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.729053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.729082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.729112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.729140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.729166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.729194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.729221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.729246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.729271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.729295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.729324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:33.431 [2024-10-01 15:24:12.729350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.729378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.729408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.729437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.729468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.729496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.729523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.729549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.729575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.729602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.729626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.729654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.729684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.729716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.729744] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.729877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.729910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.729939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.729967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.729995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.730023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.730050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.730079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.730109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.730143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.730171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.730202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.730229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.730265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.730295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.730330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.730616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.730656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.730686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.730720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.730750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.730782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.730812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.730844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.730873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.730906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.730934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.730962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 
15:24:12.730992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.731021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.731053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.731082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.731144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.731172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.731202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.731233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.731263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.731288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.731317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.731349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.731385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.731426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.731465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.731490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.731521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.731550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.431 [2024-10-01 15:24:12.731578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.432 [2024-10-01 15:24:12.731611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.432 [2024-10-01 15:24:12.731647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.432 [2024-10-01 15:24:12.731686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.432 [2024-10-01 15:24:12.731722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.432 [2024-10-01 15:24:12.731754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.432 [2024-10-01 15:24:12.731781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.432 [2024-10-01 15:24:12.731811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.432 [2024-10-01 15:24:12.731842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.432 [2024-10-01 15:24:12.731869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.432 [2024-10-01 15:24:12.731899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.432 
[2024-10-01 15:24:12.731928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.432 [2024-10-01 15:24:12.731957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.432 [2024-10-01 15:24:12.731988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.432 [2024-10-01 15:24:12.732019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.432 [2024-10-01 15:24:12.732056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.432 [2024-10-01 15:24:12.732090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.432 [2024-10-01 15:24:12.732298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.432 [2024-10-01 15:24:12.732330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.432 [2024-10-01 15:24:12.732361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.432 [2024-10-01 15:24:12.732393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.432 [2024-10-01 15:24:12.732420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.432 [2024-10-01 15:24:12.732448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.432 [2024-10-01 15:24:12.732480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.432 [2024-10-01 15:24:12.732511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.432 [2024-10-01 15:24:12.732538] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.432 [2024-10-01 15:24:12.732568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[2024-10-01 15:24:12.742955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.435 [2024-10-01 15:24:12.742985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.435 [2024-10-01 15:24:12.743015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.435 [2024-10-01 15:24:12.743044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.435 [2024-10-01 15:24:12.743072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.435 [2024-10-01 15:24:12.743103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.435 [2024-10-01 15:24:12.743131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.435 [2024-10-01 15:24:12.743167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.435 [2024-10-01 15:24:12.743195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.435 [2024-10-01 15:24:12.743225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.435 [2024-10-01 15:24:12.743255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.435 [2024-10-01 15:24:12.743285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.435 [2024-10-01 15:24:12.743313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.435 [2024-10-01 15:24:12.743343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.435 [2024-10-01 15:24:12.743371] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.435 [2024-10-01 15:24:12.743399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.435 [2024-10-01 15:24:12.743429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.435 [2024-10-01 15:24:12.743459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.435 [2024-10-01 15:24:12.743487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.435 [2024-10-01 15:24:12.743518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.435 [2024-10-01 15:24:12.743548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.435 [2024-10-01 15:24:12.743577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.435 [2024-10-01 15:24:12.743606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.435 [2024-10-01 15:24:12.743638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.435 [2024-10-01 15:24:12.743689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.435 [2024-10-01 15:24:12.743720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.435 [2024-10-01 15:24:12.743759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.435 [2024-10-01 15:24:12.743788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.435 [2024-10-01 15:24:12.743817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:33.435 [2024-10-01 15:24:12.743846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.435 [2024-10-01 15:24:12.743873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.435 [2024-10-01 15:24:12.743909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.435 [2024-10-01 15:24:12.743942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.435 [2024-10-01 15:24:12.743968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.435 [2024-10-01 15:24:12.743997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.435 [2024-10-01 15:24:12.744025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.435 [2024-10-01 15:24:12.744052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.435 [2024-10-01 15:24:12.744078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.744197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.744225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.744253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.744281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.744309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.744337] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.744369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.744401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.744431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.744461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.744494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.744527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.744557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.744587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.744616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.744648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.744679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.744706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.744736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.744765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.744788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.744820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.744849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.744880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.744911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.744935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.744959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.744983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.745007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.745029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.745053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.745078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.745109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.745140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 
15:24:12.745169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.745199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.745231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.745261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.745289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.745315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.745342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.745368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.745398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.745428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.745458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.745500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.745529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.745911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.745943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.745971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.746001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.746032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.746058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.746087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.746113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.746140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.746169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.746203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.746239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.746272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.746302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.746327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.746355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 
[2024-10-01 15:24:12.746385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.746411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.746446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.746475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.746513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.746543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.746574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.746602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.746628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.746659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.746686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.746713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.746743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.746773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.746802] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.746836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.746865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.746903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.746931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.746959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.746989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.747022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.747052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.747081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.747109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.747136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.747168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.747195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.747225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:33.436 [2024-10-01 15:24:12.747255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.436 [2024-10-01 15:24:12.747283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.437 [2024-10-01 15:24:12.747312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.437 [2024-10-01 15:24:12.747341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.437 [2024-10-01 15:24:12.747370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.437 [2024-10-01 15:24:12.747396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.437 [2024-10-01 15:24:12.747429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.437 [2024-10-01 15:24:12.747458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.437 [2024-10-01 15:24:12.747492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.437 [2024-10-01 15:24:12.747520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.437 [2024-10-01 15:24:12.747552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.437 [2024-10-01 15:24:12.747580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.437 [2024-10-01 15:24:12.747610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.437 [2024-10-01 15:24:12.747639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.437 [2024-10-01 15:24:12.747666] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.437 [2024-10-01 15:24:12.747692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.437 [2024-10-01 15:24:12.747717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.437 [2024-10-01 15:24:12.747743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.437 [2024-10-01 15:24:12.747772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.437 [2024-10-01 15:24:12.747902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.437 [2024-10-01 15:24:12.747931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.437 [2024-10-01 15:24:12.747960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.437 [2024-10-01 15:24:12.747988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.437 [2024-10-01 15:24:12.748017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.437 [2024-10-01 15:24:12.748060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.437 [2024-10-01 15:24:12.748088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.437 [2024-10-01 15:24:12.748113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.437 [2024-10-01 15:24:12.748144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.437 [2024-10-01 15:24:12.748175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:33.437 [2024-10-01 15:24:12.748210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.437 [2024-10-01 15:24:12.748247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.437 [2024-10-01 15:24:12.748275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.437 [2024-10-01 15:24:12.748302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.437 [2024-10-01 15:24:12.748339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.437 [2024-10-01 15:24:12.748370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.437 [2024-10-01 15:24:12.748899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.437 [2024-10-01 15:24:12.748929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.437 [2024-10-01 15:24:12.748965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.437 [2024-10-01 15:24:12.748994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.437 [2024-10-01 15:24:12.749024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.437 [2024-10-01 15:24:12.749053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.437 [2024-10-01 15:24:12.749106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.437 [2024-10-01 15:24:12.749134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.437 [2024-10-01 
15:24:12.749163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.437 [2024-10-01 15:24:12.749192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.437 [2024-10-01 15:24:12.749224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.437 [2024-10-01 15:24:12.749253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.437 [2024-10-01 15:24:12.749283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.437 [2024-10-01 15:24:12.749312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.437 [2024-10-01 15:24:12.749339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.437 [2024-10-01 15:24:12.749363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.437 [2024-10-01 15:24:12.749394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.437 [2024-10-01 15:24:12.749424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.437 [2024-10-01 15:24:12.749453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.437 [2024-10-01 15:24:12.749484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.437 [2024-10-01 15:24:12.749511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.437 [2024-10-01 15:24:12.749538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.437 [2024-10-01 15:24:12.749583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.437 [2024-10-01 15:24:12.749624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:33.437
Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:33.438
[2024-10-01 15:24:12.759503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:34.384 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.384
15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.384 Message suppressed 999 times: Read
completed with error (sct=0, sc=11) 00:06:34.646 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.646 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:34.646 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:34.907 true 00:06:34.907 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904938 00:06:34.907 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.850 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.850 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:35.850 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:36.111 true 00:06:36.111 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904938 00:06:36.111 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.111 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:36.373 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:36.373 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:36.635 true 00:06:36.635 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904938 00:06:36.635 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.575 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.575 15:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.575 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.836 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.836 15:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
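The cycle this log keeps repeating (ns_hotplug_stress.sh lines @44-@50: re-add the Delay0 namespace, bump null_size, resize NULL1 under I/O, confirm the target process is alive, then hot-remove the namespace) can be sketched as a minimal loop. The `rpc` function below is a hypothetical stub standing in for `scripts/rpc.py`, so the sketch runs without an SPDK target; the real script drives a live NVMe-oF subsystem:

```shell
# Minimal, self-contained sketch of the hotplug cycle seen in the log.
# rpc() is a hypothetical stub for scripts/rpc.py; the real test talks to a live target.
rpc() { echo "rpc.py $*"; }

NQN=nqn.2016-06.io.spdk:cnode1
null_size=1022
for _ in 1 2 3; do
  rpc nvmf_subsystem_add_ns "$NQN" Delay0     # @46: re-attach the delay bdev as namespace 1
  null_size=$((null_size + 1))                # @49: grow the resize target
  rpc bdev_null_resize NULL1 "$null_size"     # @50: resize NULL1 while I/O is in flight
  rpc nvmf_subsystem_remove_ns "$NQN" 1       # @45: hot-remove namespace 1 again
done
```

Each pass mirrors one add/resize/remove round in the log (null_size 1023, 1024, 1025, ...); the real script also runs `kill -0` on the target PID (@44) each round to confirm the target survived the churn.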
00:06:37.836 15:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:38.097 true 00:06:38.097 15:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904938 00:06:38.097 15:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.038 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.038 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:39.038 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:39.299 true 00:06:39.299 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904938 00:06:39.299 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.560 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.560 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:39.560 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:39.818 true 00:06:39.818 15:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904938 00:06:39.818 15:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.204 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:41.204 15:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.204 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:41.204 15:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:06:41.204 15:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:06:41.466 true 00:06:41.466 15:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904938 00:06:41.466 15:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.038
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:42.299 15:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.300 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:42.300 15:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:06:42.300 15:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:06:42.561 true 00:06:42.561 15:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904938 00:06:42.561 15:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.821 15:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.821 15:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:06:42.821 15:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:06:43.081 true 00:06:43.081 15:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904938 00:06:43.082 15:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:06:43.343 15:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.343 15:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:06:43.343 15:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:06:43.603 true 00:06:43.603 15:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904938 00:06:43.603 15:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.864 15:24:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.125 15:24:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:06:44.125 15:24:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:06:44.125 true 00:06:44.125 15:24:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904938 00:06:44.125 15:24:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.387 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.387 15:24:23 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.387 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.648 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.649 15:24:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:06:44.649 15:24:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:06:44.649 true 00:06:44.649 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904938 00:06:44.649 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.591 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:45.591 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.852 15:24:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:06:45.852 15:24:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:06:45.852 true 00:06:45.852 15:24:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904938 00:06:45.852 15:24:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.114 15:24:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.374 15:24:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:06:46.374 15:24:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:06:46.374 true 00:06:46.374 15:24:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904938 00:06:46.374 15:24:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.759 Initializing NVMe Controllers 00:06:47.759 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:47.759 Controller IO queue size 128, less than required. 00:06:47.759 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:47.759 Controller IO queue size 128, less than required. 00:06:47.759 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:06:47.759 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:47.759 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:06:47.759 Initialization complete. Launching workers. 00:06:47.759 ======================================================== 00:06:47.759 Latency(us) 00:06:47.759 Device Information : IOPS MiB/s Average min max 00:06:47.759 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3223.89 1.57 22718.55 1535.74 1023509.16 00:06:47.759 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 16542.34 8.08 7737.83 1126.39 399563.72 00:06:47.759 ======================================================== 00:06:47.759 Total : 19766.23 9.65 10181.20 1126.39 1023509.16 00:06:47.759 00:06:47.759 15:24:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.759 15:24:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:06:47.759 15:24:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:06:48.021 true 00:06:48.021 15:24:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904938 00:06:48.021 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2904938) - No such process 00:06:48.021 15:24:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2904938 00:06:48.021 15:24:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:06:48.282 15:24:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:48.282 15:24:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:06:48.282 15:24:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:06:48.282 15:24:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:48.282 15:24:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:48.282 15:24:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:48.544 null0 00:06:48.544 15:24:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:48.544 15:24:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:48.544 15:24:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:48.804 null1 00:06:48.804 15:24:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:48.804 15:24:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:48.804 15:24:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:48.804 null2 00:06:49.064 15:24:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:49.064 15:24:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:49.064 15:24:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:49.064 null3 00:06:49.064 15:24:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:49.064 15:24:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:49.064 15:24:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:49.324 null4 00:06:49.324 15:24:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:49.324 15:24:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:49.324 15:24:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:49.324 null5 00:06:49.585 15:24:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:49.585 15:24:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:49.585 15:24:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:49.585 null6 00:06:49.585 15:24:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:49.585 15:24:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:49.585 15:24:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:49.846 null7 00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:49.846 15:24:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2912179 2912181 2912184 2912187 2912190 2912192 2912194 2912196
00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:49.846 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:50.108 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:50.108 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:50.108 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:50.108 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:50.108 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:50.108 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:50.108 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:50.108 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:50.108 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:50.108 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:50.108 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:50.108 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:50.108 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:50.108 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:50.108 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:50.108 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:50.108 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:50.369 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:50.369 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:50.369 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:50.369 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:50.369 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:50.369 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:50.369 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:50.369 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:50.369 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:50.369 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:50.369 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:50.369 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:50.369 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:50.369 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:50.369 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:50.369 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:50.369 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:50.369 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:50.370 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:50.370 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:50.370 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:50.632 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:50.632 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:50.632 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:50.632 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:50.632 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:50.632 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:50.632 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:50.632 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:50.632 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:50.632 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:50.632 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:50.632 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:50.632 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:50.632 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:50.632 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:50.632 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:50.632 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:50.632 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:50.632 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:50.632 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:50.632 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:50.632 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:50.632 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:50.632 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:50.632 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:50.632 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:50.632 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:50.894 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:50.894 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:50.894 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:50.894 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:50.894 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:50.894 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:50.894 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:50.894 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:50.894 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:50.894 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:50.894 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:50.894 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:50.894 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:50.894 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:50.894 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:50.894 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:50.894 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:50.894 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:50.894 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:50.894 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:50.894 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:50.894 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:50.894 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:50.894 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:50.894 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:51.155 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:51.155 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:51.155 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:51.155 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:51.155 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:51.155 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:51.155 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:51.155 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:51.155 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:51.155 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:51.155 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:51.155 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:51.155 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:51.155 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:51.418 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:51.418 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:51.418 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:51.418 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:51.418 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:51.418 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:51.418 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:51.418 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:51.418 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:51.418 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:51.418 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:51.418 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:51.418 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:51.418 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:51.419 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:51.419 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:51.419 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:51.419 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:51.419 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:51.419 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:51.419 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:51.419 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:51.419 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:51.419 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:51.419 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:51.419 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:51.419 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:51.419 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:51.419 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:51.681 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:51.681 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:51.681 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:51.681 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:51.681 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:51.681 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:51.681 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:51.681 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:51.681 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:51.681 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:51.681 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:51.681 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:51.681 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:51.681 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:51.681 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:51.681 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:51.681 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:51.681 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:51.681 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:51.681 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:51.681 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:51.681 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:51.681 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:51.681 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:51.941 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:51.941 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:51.941 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:51.941 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:51.941 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:51.941 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:51.941 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:51.941 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:51.941 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:51.941 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:51.941 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:52.199 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:52.199 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:52.199 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:52.199 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:52.199 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:52.199 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:52.199 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:52.199 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:52.199 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:52.199 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:52.199 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:52.199 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:52.199 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:52.199 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:52.199 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:52.199 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:52.199 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:52.199 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:52.199 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:52.199 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:52.199 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:52.199 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:52.199 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:52.199 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:52.199 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:52.199 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:52.199 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:52.199 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:52.200 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:52.459 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:52.459 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:52.459 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:52.459 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:52.459 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:52.459 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:52.459 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i
)) 00:06:52.459 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.459 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:52.459 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.459 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.459 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:52.459 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.459 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.459 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:52.459 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.459 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.459 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:52.459 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.459 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.459 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:52.459 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.459 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.459 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:52.719 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.719 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.719 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:52.719 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:52.719 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:52.719 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:52.719 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.719 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:52.719 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:52.719 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:52.719 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:52.719 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.719 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.719 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:52.719 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.719 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.719 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:06:52.980 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.980 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.980 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.980 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:52.980 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.981 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:52.981 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.981 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.981 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:52.981 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.981 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.981 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:52.981 15:24:32 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.981 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.981 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:52.981 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.981 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.981 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:52.981 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:52.981 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.981 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:52.981 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:52.981 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:52.981 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:52.981 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:53.243 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.243 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.243 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:53.243 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:53.243 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.243 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.243 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:53.243 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.243 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 
10 )) 00:06:53.243 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:53.243 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.243 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.243 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:53.243 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.243 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.243 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:53.243 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.243 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.243 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:53.243 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.243 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.243 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:53.243 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:53.505 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.505 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.505 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:53.505 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:53.505 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:53.505 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:53.505 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.505 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:06:53.505 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.505 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.505 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:53.505 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:53.505 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.505 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.505 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.505 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.766 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.766 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.766 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.766 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.766 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.767 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.767 15:24:32 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.767 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.767 15:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.767 15:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.767 15:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:53.767 15:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:53.767 15:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # nvmfcleanup 00:06:53.767 15:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:53.767 15:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:53.767 15:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:53.767 15:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:53.767 15:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:53.767 rmmod nvme_tcp 00:06:53.767 rmmod nvme_fabrics 00:06:53.767 rmmod nvme_keyring 00:06:53.767 15:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:53.767 15:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:53.767 15:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:53.767 15:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@513 -- # '[' -n 2904451 ']' 00:06:53.767 15:24:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # killprocess 2904451 00:06:53.767 15:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 2904451 ']' 00:06:53.767 15:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 2904451 00:06:53.767 15:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:06:53.767 15:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:53.767 15:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2904451 00:06:54.027 15:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:54.027 15:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:54.027 15:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2904451' 00:06:54.027 killing process with pid 2904451 00:06:54.028 15:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 2904451 00:06:54.028 15:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 2904451 00:06:54.028 15:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:06:54.028 15:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:06:54.028 15:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:06:54.028 15:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:54.028 15:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # 
iptables-save 00:06:54.028 15:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:06:54.028 15:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-restore 00:06:54.028 15:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:54.028 15:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:54.028 15:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:54.028 15:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:54.028 15:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:56.578 00:06:56.578 real 0m49.885s 00:06:56.578 user 3m16.135s 00:06:56.578 sys 0m16.570s 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:56.578 ************************************ 00:06:56.578 END TEST nvmf_ns_hotplug_stress 00:06:56.578 ************************************ 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:06:56.578 ************************************ 00:06:56.578 START TEST nvmf_delete_subsystem 00:06:56.578 ************************************ 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:56.578 * Looking for test storage... 00:06:56.578 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:56.578 15:24:35 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:56.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.578 --rc genhtml_branch_coverage=1 00:06:56.578 --rc genhtml_function_coverage=1 00:06:56.578 --rc genhtml_legend=1 00:06:56.578 --rc geninfo_all_blocks=1 00:06:56.578 --rc geninfo_unexecuted_blocks=1 00:06:56.578 00:06:56.578 ' 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:56.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.578 --rc genhtml_branch_coverage=1 00:06:56.578 --rc genhtml_function_coverage=1 00:06:56.578 --rc genhtml_legend=1 00:06:56.578 --rc geninfo_all_blocks=1 00:06:56.578 --rc geninfo_unexecuted_blocks=1 00:06:56.578 00:06:56.578 ' 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:56.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.578 --rc genhtml_branch_coverage=1 00:06:56.578 --rc genhtml_function_coverage=1 00:06:56.578 --rc genhtml_legend=1 00:06:56.578 --rc geninfo_all_blocks=1 00:06:56.578 --rc geninfo_unexecuted_blocks=1 00:06:56.578 00:06:56.578 ' 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:56.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.578 --rc 
genhtml_branch_coverage=1 00:06:56.578 --rc genhtml_function_coverage=1 00:06:56.578 --rc genhtml_legend=1 00:06:56.578 --rc geninfo_all_blocks=1 00:06:56.578 --rc geninfo_unexecuted_blocks=1 00:06:56.578 00:06:56.578 ' 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:56.578 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:56.579 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:56.579 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:56.579 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:56.579 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:56.579 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:56.579 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.579 15:24:35 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.579 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.579 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:56.579 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.579 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:56.579 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:56.579 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:56.579 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:56.579 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:56.579 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:56.579 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:56.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:56.579 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:56.579 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:56.579 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:56.579 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:06:56.579 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:06:56.579 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:56.579 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # prepare_net_devs 00:06:56.579 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@434 -- # local -g is_hw=no 00:06:56.579 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # remove_spdk_ns 00:06:56.579 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:56.579 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:56.579 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:56.579 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:06:56.579 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:06:56.579 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:56.579 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:04.725 15:24:43 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:04.725 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@370 -- # [[ ice == unbound 
]] 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:04.725 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ 
tcp == tcp ]] 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:04.725 Found net devices under 0000:31:00.0: cvl_0_0 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:04.725 Found net devices under 0000:31:00.1: cvl_0_1 00:07:04.725 15:24:43 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # is_hw=yes 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:04.725 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:04.726 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:04.726 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:04.726 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:04.726 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:04.726 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:04.726 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:04.726 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:04.726 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:04.726 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:04.726 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.528 ms 00:07:04.726 00:07:04.726 --- 10.0.0.2 ping statistics --- 00:07:04.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:04.726 rtt min/avg/max/mdev = 0.528/0.528/0.528/0.000 ms 00:07:04.726 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:04.726 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:04.726 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:07:04.726 00:07:04.726 --- 10.0.0.1 ping statistics --- 00:07:04.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:04.726 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:07:04.726 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:04.726 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # return 0 00:07:04.726 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:07:04.726 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:04.726 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:07:04.726 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:07:04.726 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:04.726 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:07:04.726 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:07:04.726 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:04.726 15:24:43 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:07:04.726 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:04.726 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:04.726 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # nvmfpid=2917583 00:07:04.726 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # waitforlisten 2917583 00:07:04.726 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:04.726 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 2917583 ']' 00:07:04.726 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.726 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:04.726 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.726 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:04.726 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:04.726 [2024-10-01 15:24:43.536997] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
00:07:04.726 [2024-10-01 15:24:43.537062] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:04.726 [2024-10-01 15:24:43.578770] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:04.726 [2024-10-01 15:24:43.627031] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:04.726 [2024-10-01 15:24:43.672622] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:04.726 [2024-10-01 15:24:43.672679] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:04.726 [2024-10-01 15:24:43.672687] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:04.726 [2024-10-01 15:24:43.672695] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:04.726 [2024-10-01 15:24:43.672701] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:04.726 [2024-10-01 15:24:43.672859] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.726 [2024-10-01 15:24:43.672860] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.987 15:24:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:04.987 15:24:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:07:04.987 15:24:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:07:04.987 15:24:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:04.987 15:24:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:04.987 15:24:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:04.988 15:24:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:04.988 15:24:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.988 15:24:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:04.988 [2024-10-01 15:24:44.413403] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:04.988 15:24:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.988 15:24:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:04.988 15:24:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.988 15:24:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:07:04.988 15:24:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.988 15:24:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:04.988 15:24:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.988 15:24:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:04.988 [2024-10-01 15:24:44.437772] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:04.988 15:24:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.251 15:24:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:05.251 15:24:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.251 15:24:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:05.251 NULL1 00:07:05.252 15:24:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.252 15:24:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:05.252 15:24:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.252 15:24:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:05.252 Delay0 00:07:05.252 15:24:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.252 15:24:44 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.252 15:24:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.252 15:24:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:05.252 15:24:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.252 15:24:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2917633 00:07:05.252 15:24:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:05.252 15:24:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:05.252 [2024-10-01 15:24:44.554748] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:07:07.167 15:24:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:07:07.167 15:24:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:07.167 15:24:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:07.429 Read completed with error (sct=0, sc=8)
00:07:07.429 Write completed with error (sct=0, sc=8)
00:07:07.429 starting I/O failed: -6
00:07:07.429 [2024-10-01 15:24:46.699082] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fbce800d450 is same with the state(6) to be set
00:07:07.429 [2024-10-01 15:24:46.699681] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdfe0f0 is same with the state(6) to be set
00:07:08.371 [2024-10-01 15:24:47.653398] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe02e20 is same with the state(6) to be set
00:07:08.371 [2024-10-01 15:24:47.695175] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdfe2d0 is same with the state(6) to be set
00:07:08.371 [2024-10-01 15:24:47.698054] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdfdf10 is same with the state(6) to be set
00:07:08.371 [2024-10-01 15:24:47.700353] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fbce800cfe0 is same with the state(6) to be set
00:07:08.372 [2024-10-01 15:24:47.701911] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fbce800d780 is same with the state(6) to be set
00:07:08.372 Initializing NVMe Controllers
00:07:08.372 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:08.372 Controller IO queue size 128, less than required.
00:07:08.372 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:08.372 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:08.372 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:08.372 Initialization complete. Launching workers.
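The perf summary table that follows reports per-core IOPS and average latency plus a Total row; the Total average is the IOPS-weighted mean of the per-core averages. A quick cross-check of that arithmetic (an illustrative helper, not part of the test scripts; values copied from this run's table):

```shell
#!/bin/sh
# Cross-check the spdk_nvme_perf Total row: the combined average latency
# is the IOPS-weighted mean of the per-core average latencies.
# This helper is illustrative only, not part of delete_subsystem.sh.
weighted_avg() {
    # args: iops1 avg1 iops2 avg2 (latencies in microseconds)
    awk -v i1="$1" -v a1="$2" -v i2="$3" -v a2="$4" \
        'BEGIN { printf "%.2f\n", (i1 * a1 + i2 * a2) / (i1 + i2) }'
}

# core 2: 169.22 IOPS @ 896511.36 us; core 3: 157.80 IOPS @ 924081.96 us
weighted_avg 169.22 896511.36 157.80 924081.96
# prints a value close to the reported Total average of 909815.53
# (the small residue comes from rounding in the reported IOPS figures)
```

The same check applied to the second run's table ((1001944.96 + 1003265.51) / 2 with equal 128-IOPS cores) reproduces its reported Total average of 1002605.23.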
00:07:08.372 ========================================================
00:07:08.372 Latency(us)
00:07:08.372 Device Information : IOPS MiB/s Average min max
00:07:08.372 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 169.22 0.08 896511.36 338.95 1012469.57
00:07:08.372 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 157.80 0.08 924081.96 345.12 1012057.34
00:07:08.372 ========================================================
00:07:08.372 Total : 327.02 0.16 909815.53 338.95 1012469.57
00:07:08.372
00:07:08.372 [2024-10-01 15:24:47.702427] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe02e20 (9): Bad file descriptor
00:07:08.372 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:07:08.372 15:24:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:08.372 15:24:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:07:08.372 15:24:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2917633
00:07:08.372 15:24:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:07:08.943 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:07:08.943 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2917633
00:07:08.943 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2917633) - No such process
00:07:08.943 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2917633
00:07:08.943 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:07:08.943 15:24:48
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2917633 00:07:08.943 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:07:08.943 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:08.943 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:07:08.943 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:08.943 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 2917633 00:07:08.943 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:07:08.943 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:08.943 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:08.943 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:08.943 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:08.943 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.943 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:08.943 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.943 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:08.943 
15:24:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.943 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:08.943 [2024-10-01 15:24:48.231968] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:08.943 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.944 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.944 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.944 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:08.944 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.944 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2918467 00:07:08.944 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:08.944 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:08.944 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2918467 00:07:08.944 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:08.944 [2024-10-01 15:24:48.319738] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to 
the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:07:09.514 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:09.514 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2918467 00:07:09.514 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:10.087 15:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:10.087 15:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2918467 00:07:10.087 15:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:10.349 15:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:10.349 15:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2918467 00:07:10.349 15:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:10.923 15:24:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:10.923 15:24:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2918467 00:07:10.923 15:24:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:11.495 15:24:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:11.495 15:24:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2918467 00:07:11.495 15:24:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:12.068 15:24:51 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:12.068 15:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2918467
00:07:12.068 15:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:12.068 Initializing NVMe Controllers
00:07:12.068 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:12.068 Controller IO queue size 128, less than required.
00:07:12.068 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:12.068 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:12.068 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:12.068 Initialization complete. Launching workers.
00:07:12.068 ========================================================
00:07:12.068 Latency(us)
00:07:12.068 Device Information : IOPS MiB/s Average min max
00:07:12.068 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001944.96 1000111.48 1007174.18
00:07:12.068 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003265.51 1000336.23 1009558.98
00:07:12.068 ========================================================
00:07:12.068 Total : 256.00 0.12 1002605.23 1000111.48 1009558.98
00:07:12.068
00:07:12.331 15:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:12.331 15:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2918467
00:07:12.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2918467) - No such process
00:07:12.593 15:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- #
wait 2918467 00:07:12.593 15:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:12.593 15:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:12.593 15:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # nvmfcleanup 00:07:12.593 15:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:07:12.593 15:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:12.593 15:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:07:12.593 15:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:12.593 15:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:12.593 rmmod nvme_tcp 00:07:12.593 rmmod nvme_fabrics 00:07:12.593 rmmod nvme_keyring 00:07:12.593 15:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:12.593 15:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:12.593 15:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:12.593 15:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@513 -- # '[' -n 2917583 ']' 00:07:12.593 15:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # killprocess 2917583 00:07:12.593 15:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 2917583 ']' 00:07:12.593 15:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 2917583 00:07:12.593 15:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:07:12.593 15:24:51 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:12.593 15:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2917583 00:07:12.593 15:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:12.593 15:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:12.593 15:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2917583' 00:07:12.593 killing process with pid 2917583 00:07:12.593 15:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 2917583 00:07:12.593 15:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 2917583 00:07:12.593 15:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:07:12.593 15:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:07:12.593 15:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:07:12.593 15:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:12.593 15:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-save 00:07:12.593 15:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:07:12.593 15:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-restore 00:07:12.593 15:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:12.593 15:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:07:12.593 15:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:12.593 15:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:12.593 15:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:15.150 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:15.150 00:07:15.150 real 0m18.607s 00:07:15.150 user 0m30.813s 00:07:15.150 sys 0m6.951s 00:07:15.150 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:15.150 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:15.150 ************************************ 00:07:15.150 END TEST nvmf_delete_subsystem 00:07:15.150 ************************************ 00:07:15.150 15:24:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:15.150 15:24:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:15.150 15:24:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:15.150 15:24:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:15.150 ************************************ 00:07:15.150 START TEST nvmf_host_management 00:07:15.150 ************************************ 00:07:15.150 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:15.150 * Looking for test storage... 
00:07:15.150 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:15.150 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:15.150 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:07:15.150 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:15.150 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:15.150 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:15.150 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:15.150 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:15.150 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:15.150 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:15.150 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:15.150 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:15.150 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:15.150 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:15.150 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:15.150 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:15.150 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:15.150 15:24:54 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:15.150 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:15.150 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:15.150 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:15.150 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:15.150 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:15.150 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:15.150 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:15.150 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:15.150 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:15.150 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:15.150 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:15.150 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:15.150 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:15.150 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:15.150 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:15.150 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:15.150 15:24:54 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:15.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.150 --rc genhtml_branch_coverage=1 00:07:15.150 --rc genhtml_function_coverage=1 00:07:15.150 --rc genhtml_legend=1 00:07:15.150 --rc geninfo_all_blocks=1 00:07:15.150 --rc geninfo_unexecuted_blocks=1 00:07:15.150 00:07:15.150 ' 00:07:15.150 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:15.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.150 --rc genhtml_branch_coverage=1 00:07:15.150 --rc genhtml_function_coverage=1 00:07:15.150 --rc genhtml_legend=1 00:07:15.150 --rc geninfo_all_blocks=1 00:07:15.150 --rc geninfo_unexecuted_blocks=1 00:07:15.150 00:07:15.150 ' 00:07:15.150 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:15.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.150 --rc genhtml_branch_coverage=1 00:07:15.150 --rc genhtml_function_coverage=1 00:07:15.150 --rc genhtml_legend=1 00:07:15.150 --rc geninfo_all_blocks=1 00:07:15.150 --rc geninfo_unexecuted_blocks=1 00:07:15.150 00:07:15.150 ' 00:07:15.150 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:15.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.150 --rc genhtml_branch_coverage=1 00:07:15.150 --rc genhtml_function_coverage=1 00:07:15.150 --rc genhtml_legend=1 00:07:15.150 --rc geninfo_all_blocks=1 00:07:15.150 --rc geninfo_unexecuted_blocks=1 00:07:15.150 00:07:15.150 ' 00:07:15.150 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:15.151 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
00:07:15.151 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:15.151 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:15.151 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:15.151 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:15.151 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:15.151 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:15.151 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:15.151 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:15.151 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:15.151 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:15.151 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:15.151 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:15.151 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:15.151 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:15.151 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:15.151 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:15.151 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:15.151 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:15.151 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:15.151 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:15.151 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:15.151 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.151 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.151 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.151 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:15.151 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.151 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:15.151 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:15.151 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:15.151 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:15.151 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:15.151 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:15.151 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:15.151 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:15.151 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:15.151 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:15.151 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:15.151 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:07:15.151 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:15.151 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:15.151 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:07:15.151 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:15.151 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # prepare_net_devs 00:07:15.151 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@434 -- # local -g is_hw=no 00:07:15.151 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # remove_spdk_ns 00:07:15.151 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:15.151 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:15.151 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:15.151 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:07:15.151 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:07:15.151 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:15.151 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:23.300 15:25:01 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:23.300 15:25:01 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:23.300 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:23.300 15:25:01 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:23.300 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:23.300 15:25:01 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ up == up ]] 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:23.300 Found net devices under 0000:31:00.0: cvl_0_0 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ up == up ]] 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:23.300 15:25:01 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:23.300 Found net devices under 0000:31:00.1: cvl_0_1 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # is_hw=yes 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:23.300 
15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:23.300 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:23.300 15:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:23.300 15:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:23.301 15:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:23.301 15:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:23.301 15:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:23.301 15:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:23.301 15:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:07:23.301 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:23.301 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.637 ms 00:07:23.301 00:07:23.301 --- 10.0.0.2 ping statistics --- 00:07:23.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:23.301 rtt min/avg/max/mdev = 0.637/0.637/0.637/0.000 ms 00:07:23.301 15:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:23.301 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:23.301 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:07:23.301 00:07:23.301 --- 10.0.0.1 ping statistics --- 00:07:23.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:23.301 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:07:23.301 15:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:23.301 15:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # return 0 00:07:23.301 15:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:07:23.301 15:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:23.301 15:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:07:23.301 15:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:07:23.301 15:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:23.301 15:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:07:23.301 15:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:07:23.301 15:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # 
nvmf_host_management 00:07:23.301 15:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:23.301 15:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:23.301 15:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:07:23.301 15:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:23.301 15:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:23.301 15:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # nvmfpid=2923586 00:07:23.301 15:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # waitforlisten 2923586 00:07:23.301 15:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:23.301 15:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2923586 ']' 00:07:23.301 15:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.301 15:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:23.301 15:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:23.301 15:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:23.301 15:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:23.301 [2024-10-01 15:25:02.284351] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:07:23.301 [2024-10-01 15:25:02.284420] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:23.301 [2024-10-01 15:25:02.326508] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:23.301 [2024-10-01 15:25:02.374324] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:23.301 [2024-10-01 15:25:02.423756] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:23.301 [2024-10-01 15:25:02.423806] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:23.301 [2024-10-01 15:25:02.423816] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:23.301 [2024-10-01 15:25:02.423823] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:23.301 [2024-10-01 15:25:02.423829] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:23.301 [2024-10-01 15:25:02.423938] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:23.301 [2024-10-01 15:25:02.424177] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:23.301 [2024-10-01 15:25:02.424338] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:23.301 [2024-10-01 15:25:02.424338] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:07:23.874 15:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:23.874 15:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:23.874 15:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:07:23.874 15:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:23.874 15:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:23.874 15:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:23.874 15:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:23.874 15:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.874 15:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:23.874 [2024-10-01 15:25:03.158306] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:23.874 15:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.874 15:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:23.874 15:25:03 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:23.874 15:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:23.874 15:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:23.874 15:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:23.874 15:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:23.874 15:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.874 15:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:23.874 Malloc0 00:07:23.874 [2024-10-01 15:25:03.227649] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:23.874 15:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.874 15:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:23.874 15:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:23.874 15:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:23.874 15:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2923763 00:07:23.874 15:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2923763 /var/tmp/bdevperf.sock 00:07:23.874 15:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2923763 ']' 00:07:23.874 15:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:23.874 15:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:23.874 15:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:23.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:23.874 15:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:23.874 15:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:23.874 15:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:23.874 15:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:23.874 15:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:07:23.874 15:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:07:23.874 15:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:07:23.874 15:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:07:23.874 { 00:07:23.874 "params": { 00:07:23.874 "name": "Nvme$subsystem", 00:07:23.874 "trtype": "$TEST_TRANSPORT", 00:07:23.874 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:23.874 "adrfam": "ipv4", 00:07:23.874 "trsvcid": "$NVMF_PORT", 00:07:23.874 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:23.874 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:23.874 "hdgst": ${hdgst:-false}, 
00:07:23.874 "ddgst": ${ddgst:-false} 00:07:23.874 }, 00:07:23.874 "method": "bdev_nvme_attach_controller" 00:07:23.874 } 00:07:23.874 EOF 00:07:23.874 )") 00:07:23.874 15:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:07:23.874 15:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:07:23.875 15:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:07:23.875 15:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:07:23.875 "params": { 00:07:23.875 "name": "Nvme0", 00:07:23.875 "trtype": "tcp", 00:07:23.875 "traddr": "10.0.0.2", 00:07:23.875 "adrfam": "ipv4", 00:07:23.875 "trsvcid": "4420", 00:07:23.875 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:23.875 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:23.875 "hdgst": false, 00:07:23.875 "ddgst": false 00:07:23.875 }, 00:07:23.875 "method": "bdev_nvme_attach_controller" 00:07:23.875 }' 00:07:24.137 [2024-10-01 15:25:03.336305] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:07:24.137 [2024-10-01 15:25:03.336376] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2923763 ] 00:07:24.137 [2024-10-01 15:25:03.370936] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:24.137 [2024-10-01 15:25:03.420456] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.137 [2024-10-01 15:25:03.467491] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.399 Running I/O for 10 seconds... 
00:07:24.973 15:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:24.973 15:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:24.973 15:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:24.973 15:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.973 15:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:24.973 15:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.973 15:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:24.973 15:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:24.973 15:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:24.973 15:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:24.973 15:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:24.973 15:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:24.973 15:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:24.973 15:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:24.973 15:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:07:24.973 15:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:24.973 15:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.973 15:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:24.973 15:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.973 15:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=771 00:07:24.974 15:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 771 -ge 100 ']' 00:07:24.974 15:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:24.974 15:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:24.974 15:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:24.974 15:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:24.974 15:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.974 15:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:24.974 [2024-10-01 15:25:04.250398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:114560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.974 [2024-10-01 15:25:04.250461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.974 [2024-10-01 15:25:04.250484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:0 nsid:1 lba:106496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.974 [2024-10-01 15:25:04.250493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.974 [2024-10-01 15:25:04.250504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:106624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.974 [2024-10-01 15:25:04.250512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.974 [2024-10-01 15:25:04.250522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:106752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.974 [2024-10-01 15:25:04.250530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.974 [2024-10-01 15:25:04.250540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:106880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.974 [2024-10-01 15:25:04.250548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.974 [2024-10-01 15:25:04.250558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:107008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.974 [2024-10-01 15:25:04.250566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.974 [2024-10-01 15:25:04.250576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:107136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.974 [2024-10-01 15:25:04.250584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:07:24.974 [2024-10-01 15:25:04.250593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:107264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.974 [2024-10-01 15:25:04.250601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.974 [2024-10-01 15:25:04.250611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:107392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.974 [2024-10-01 15:25:04.250618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.974 [2024-10-01 15:25:04.250629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:107520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.974 [2024-10-01 15:25:04.250646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.974 [2024-10-01 15:25:04.250656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:107648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.974 [2024-10-01 15:25:04.250663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.974 [2024-10-01 15:25:04.250673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:107776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.974 [2024-10-01 15:25:04.250680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.974 [2024-10-01 15:25:04.250689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:107904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.974 [2024-10-01 
15:25:04.250697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.974 [2024-10-01 15:25:04.250707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:108032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.974 [2024-10-01 15:25:04.250715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.974 [2024-10-01 15:25:04.250725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:108160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.974 [2024-10-01 15:25:04.250732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.974 [2024-10-01 15:25:04.250741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:108288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.974 [2024-10-01 15:25:04.250748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.974 [2024-10-01 15:25:04.250758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:108416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.974 [2024-10-01 15:25:04.250765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.974 [2024-10-01 15:25:04.250774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:108544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.974 [2024-10-01 15:25:04.250781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.974 [2024-10-01 15:25:04.250791] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:108672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.974 [2024-10-01 15:25:04.250799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.974 [2024-10-01 15:25:04.250808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:108800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.974 [2024-10-01 15:25:04.250816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.974 [2024-10-01 15:25:04.250825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:108928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.974 [2024-10-01 15:25:04.250832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.974 [2024-10-01 15:25:04.250842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:109056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.974 [2024-10-01 15:25:04.250849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.974 [2024-10-01 15:25:04.250862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:109184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.974 [2024-10-01 15:25:04.250870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.974 [2024-10-01 15:25:04.250880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:109312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.974 [2024-10-01 15:25:04.250887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.974 [2024-10-01 15:25:04.250903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:109440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.974 [2024-10-01 15:25:04.250911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.974 [2024-10-01 15:25:04.250921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:109568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.974 [2024-10-01 15:25:04.250928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.974 [2024-10-01 15:25:04.250937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:109696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.974 [2024-10-01 15:25:04.250945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.974 [2024-10-01 15:25:04.250954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:109824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.975 [2024-10-01 15:25:04.250961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.975 [2024-10-01 15:25:04.250972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:109952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.975 [2024-10-01 15:25:04.250979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.975 [2024-10-01 15:25:04.250989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:110080 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.975 [2024-10-01 15:25:04.250997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.975 [2024-10-01 15:25:04.251007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:110208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.975 [2024-10-01 15:25:04.251014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.975 [2024-10-01 15:25:04.251024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:110336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.975 [2024-10-01 15:25:04.251032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.975 [2024-10-01 15:25:04.251042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:110464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.975 [2024-10-01 15:25:04.251049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.975 [2024-10-01 15:25:04.251059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:110592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.975 [2024-10-01 15:25:04.251066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.975 [2024-10-01 15:25:04.251076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:110720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.975 [2024-10-01 15:25:04.251087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.975 
[2024-10-01 15:25:04.251097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:110848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.975 [2024-10-01 15:25:04.251104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.975 [2024-10-01 15:25:04.251115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:110976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.975 [2024-10-01 15:25:04.251122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.975 [2024-10-01 15:25:04.251132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:111104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.975 [2024-10-01 15:25:04.251140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.975 [2024-10-01 15:25:04.251149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:111232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.975 [2024-10-01 15:25:04.251157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.975 [2024-10-01 15:25:04.251168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:111360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.975 [2024-10-01 15:25:04.251175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.975 [2024-10-01 15:25:04.251185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:111488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.975 [2024-10-01 15:25:04.251193] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.975 [2024-10-01 15:25:04.251202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:111616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.975 [2024-10-01 15:25:04.251210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.975 [2024-10-01 15:25:04.251220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:111744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.975 [2024-10-01 15:25:04.251227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.975 [2024-10-01 15:25:04.251237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:111872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.975 [2024-10-01 15:25:04.251245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.975 [2024-10-01 15:25:04.251255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:112000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.975 [2024-10-01 15:25:04.251262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.975 [2024-10-01 15:25:04.251271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:112128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.975 [2024-10-01 15:25:04.251279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.975 [2024-10-01 15:25:04.251289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:45 nsid:1 lba:112256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.975 [2024-10-01 15:25:04.251297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.975 [2024-10-01 15:25:04.251309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:112384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.975 [2024-10-01 15:25:04.251317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.975 [2024-10-01 15:25:04.251327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:112512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.975 [2024-10-01 15:25:04.251334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.975 [2024-10-01 15:25:04.251344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:112640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.975 [2024-10-01 15:25:04.251351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.975 [2024-10-01 15:25:04.251361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:112768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.975 [2024-10-01 15:25:04.251369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.975 [2024-10-01 15:25:04.251379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:112896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.975 [2024-10-01 15:25:04.251387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:07:24.975 [2024-10-01 15:25:04.251396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:113024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.975 [2024-10-01 15:25:04.251404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.975 [2024-10-01 15:25:04.251413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:113152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.975 [2024-10-01 15:25:04.251420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.975 [2024-10-01 15:25:04.251430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:113280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.975 [2024-10-01 15:25:04.251438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.975 [2024-10-01 15:25:04.251447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:113408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.975 [2024-10-01 15:25:04.251454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.975 [2024-10-01 15:25:04.251464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:113536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.975 [2024-10-01 15:25:04.251471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.975 [2024-10-01 15:25:04.251481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:113664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.975 [2024-10-01 
15:25:04.251489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.975 [2024-10-01 15:25:04.251499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:113792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.975 [2024-10-01 15:25:04.251507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.975 [2024-10-01 15:25:04.251516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:113920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.975 [2024-10-01 15:25:04.251525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.975 [2024-10-01 15:25:04.251535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:114048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.975 [2024-10-01 15:25:04.251542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.975 [2024-10-01 15:25:04.251552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:114176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.975 [2024-10-01 15:25:04.251560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.975 [2024-10-01 15:25:04.251570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:114304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.975 [2024-10-01 15:25:04.251578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.975 [2024-10-01 15:25:04.251587] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:114432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:24.976 [2024-10-01 15:25:04.251594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:07:24.976 [2024-10-01 15:25:04.251603] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b730d0 is same with the state(6) to be set
00:07:24.976 [2024-10-01 15:25:04.251675] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b730d0 was disconnected and freed. reset controller.
00:07:24.976 15:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:24.976 [2024-10-01 15:25:04.252929] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:07:24.976 15:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:07:24.976 15:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:24.976 15:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:07:24.976 task offset: 114560 on job bdev=Nvme0n1 fails
00:07:24.976
00:07:24.976 Latency(us)
00:07:24.976 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:24.976 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:07:24.976 Job: Nvme0n1 ended in about 0.59 seconds with error
00:07:24.976 Verification LBA range: start 0x0 length 0x400
00:07:24.976 Nvme0n1 : 0.59 1421.15 88.82 109.32 0.00 40825.57 5870.93 36263.25
00:07:24.976 ===================================================================================================================
00:07:24.976 Total : 1421.15 88.82 109.32 0.00 40825.57 5870.93 36263.25
00:07:24.976 [2024-10-01 15:25:04.255163] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:24.976 [2024-10-01 15:25:04.255203] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1959e50 (9): Bad file descriptor 00:07:24.976 15:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.976 15:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:24.976 [2024-10-01 15:25:04.389007] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:25.919 15:25:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2923763 00:07:25.919 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2923763) - No such process 00:07:25.919 15:25:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:25.919 15:25:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:25.919 15:25:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:25.919 15:25:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:25.919 15:25:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:07:25.919 15:25:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:07:25.919 15:25:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:07:25.919 15:25:05 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:07:25.919 { 00:07:25.919 "params": { 00:07:25.919 "name": "Nvme$subsystem", 00:07:25.919 "trtype": "$TEST_TRANSPORT", 00:07:25.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:25.919 "adrfam": "ipv4", 00:07:25.919 "trsvcid": "$NVMF_PORT", 00:07:25.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:25.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:25.919 "hdgst": ${hdgst:-false}, 00:07:25.919 "ddgst": ${ddgst:-false} 00:07:25.919 }, 00:07:25.919 "method": "bdev_nvme_attach_controller" 00:07:25.919 } 00:07:25.919 EOF 00:07:25.919 )") 00:07:25.919 15:25:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:07:25.919 15:25:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:07:25.919 15:25:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:07:25.919 15:25:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:07:25.919 "params": { 00:07:25.919 "name": "Nvme0", 00:07:25.919 "trtype": "tcp", 00:07:25.919 "traddr": "10.0.0.2", 00:07:25.919 "adrfam": "ipv4", 00:07:25.919 "trsvcid": "4420", 00:07:25.919 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:25.919 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:25.919 "hdgst": false, 00:07:25.919 "ddgst": false 00:07:25.919 }, 00:07:25.919 "method": "bdev_nvme_attach_controller" 00:07:25.919 }' 00:07:25.919 [2024-10-01 15:25:05.338735] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
00:07:25.919 [2024-10-01 15:25:05.338795] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2924133 ] 00:07:25.919 [2024-10-01 15:25:05.369553] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:26.181 [2024-10-01 15:25:05.415615] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.181 [2024-10-01 15:25:05.446427] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.442 Running I/O for 1 seconds... 00:07:27.383 1883.00 IOPS, 117.69 MiB/s 00:07:27.383 Latency(us) 00:07:27.383 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:27.383 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:27.383 Verification LBA range: start 0x0 length 0x400 00:07:27.383 Nvme0n1 : 1.01 1931.04 120.69 0.00 0.00 32433.04 2007.04 30365.01 00:07:27.383 =================================================================================================================== 00:07:27.383 Total : 1931.04 120.69 0.00 0.00 32433.04 2007.04 30365.01 00:07:27.644 15:25:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:27.644 15:25:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:27.644 15:25:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:27.644 15:25:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:27.644 15:25:06 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:27.644 15:25:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # nvmfcleanup 00:07:27.644 15:25:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:27.644 15:25:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:27.644 15:25:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:27.644 15:25:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:27.644 15:25:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:27.644 rmmod nvme_tcp 00:07:27.644 rmmod nvme_fabrics 00:07:27.644 rmmod nvme_keyring 00:07:27.644 15:25:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:27.644 15:25:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:27.644 15:25:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:27.644 15:25:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@513 -- # '[' -n 2923586 ']' 00:07:27.644 15:25:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # killprocess 2923586 00:07:27.644 15:25:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 2923586 ']' 00:07:27.644 15:25:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 2923586 00:07:27.644 15:25:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:07:27.644 15:25:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:27.644 15:25:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2923586 00:07:27.644 15:25:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:27.644 15:25:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:27.644 15:25:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2923586' 00:07:27.644 killing process with pid 2923586 00:07:27.644 15:25:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 2923586 00:07:27.644 15:25:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 2923586 00:07:27.644 [2024-10-01 15:25:07.085339] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:27.905 15:25:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:07:27.905 15:25:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:07:27.905 15:25:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:07:27.905 15:25:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:27.905 15:25:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-save 00:07:27.905 15:25:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:07:27.905 15:25:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-restore 00:07:27.905 15:25:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:27.905 15:25:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:27.905 15:25:07 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:27.905 15:25:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:27.905 15:25:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.821 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:29.821 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:29.821 00:07:29.821 real 0m14.995s 00:07:29.821 user 0m23.620s 00:07:29.821 sys 0m7.008s 00:07:29.821 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:29.821 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:29.821 ************************************ 00:07:29.821 END TEST nvmf_host_management 00:07:29.821 ************************************ 00:07:29.821 15:25:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:29.821 15:25:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:29.821 15:25:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:29.821 15:25:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:30.083 ************************************ 00:07:30.083 START TEST nvmf_lvol 00:07:30.083 ************************************ 00:07:30.083 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:30.083 * Looking for test storage... 
00:07:30.083 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:30.083 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:30.083 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:07:30.083 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:30.083 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:30.083 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:30.083 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:30.083 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:30.083 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:30.083 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:30.083 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:30.083 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:30.083 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:30.083 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:30.083 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:30.083 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:30.083 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:30.083 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:30.083 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:30.083 15:25:09 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:30.083 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:30.083 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:30.083 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:30.083 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:30.083 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:30.083 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:30.083 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:30.083 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:30.083 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:30.083 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:30.083 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:30.083 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:30.083 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:30.083 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:30.083 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:30.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.083 --rc genhtml_branch_coverage=1 00:07:30.083 --rc genhtml_function_coverage=1 00:07:30.083 --rc genhtml_legend=1 00:07:30.083 --rc geninfo_all_blocks=1 00:07:30.083 --rc geninfo_unexecuted_blocks=1 
00:07:30.083 00:07:30.083 ' 00:07:30.083 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:30.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.083 --rc genhtml_branch_coverage=1 00:07:30.083 --rc genhtml_function_coverage=1 00:07:30.083 --rc genhtml_legend=1 00:07:30.083 --rc geninfo_all_blocks=1 00:07:30.083 --rc geninfo_unexecuted_blocks=1 00:07:30.083 00:07:30.083 ' 00:07:30.083 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:30.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.083 --rc genhtml_branch_coverage=1 00:07:30.083 --rc genhtml_function_coverage=1 00:07:30.083 --rc genhtml_legend=1 00:07:30.083 --rc geninfo_all_blocks=1 00:07:30.083 --rc geninfo_unexecuted_blocks=1 00:07:30.083 00:07:30.083 ' 00:07:30.083 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:30.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.083 --rc genhtml_branch_coverage=1 00:07:30.083 --rc genhtml_function_coverage=1 00:07:30.083 --rc genhtml_legend=1 00:07:30.083 --rc geninfo_all_blocks=1 00:07:30.083 --rc geninfo_unexecuted_blocks=1 00:07:30.083 00:07:30.083 ' 00:07:30.084 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:30.084 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:30.084 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:30.084 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:30.084 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:30.084 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:30.084 15:25:09 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:30.084 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:30.084 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:30.084 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:30.084 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:30.084 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:30.084 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:30.084 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:30.084 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:30.084 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:30.084 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:30.084 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:30.084 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:30.084 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:30.084 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.084 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.084 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.084 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.084 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.084 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.084 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:30.084 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.084 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:30.084 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:30.084 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:30.084 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:30.084 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:30.084 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:30.084 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:30.084 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:30.084 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:30.084 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:30.084 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:30.084 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:30.084 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:30.084 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:30.084 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:30.084 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:30.084 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:30.084 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:07:30.084 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:30.084 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # prepare_net_devs 00:07:30.084 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@434 -- # local -g is_hw=no 00:07:30.084 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # remove_spdk_ns 00:07:30.084 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.084 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:30.084 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.084 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:07:30.084 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:07:30.084 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:30.084 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:38.218 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:38.218 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:38.218 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:38.218 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:38.218 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:38.218 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:07:38.219 15:25:16 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:38.219 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:38.219 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:07:38.219 15:25:16 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ up == up ]] 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:38.219 Found net devices under 0000:31:00.0: cvl_0_0 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ up == up ]] 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:38.219 15:25:16 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:38.219 Found net devices under 0000:31:00.1: cvl_0_1 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # is_hw=yes 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:38.219 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:38.219 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:38.219 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:38.219 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:38.219 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:38.219 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:38.219 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:38.219 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:38.219 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:38.219 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:38.219 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.611 ms 00:07:38.219 00:07:38.219 --- 10.0.0.2 ping statistics --- 00:07:38.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:38.219 rtt min/avg/max/mdev = 0.611/0.611/0.611/0.000 ms 00:07:38.219 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:38.219 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:38.219 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:07:38.219 00:07:38.219 --- 10.0.0.1 ping statistics --- 00:07:38.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:38.219 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:07:38.219 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:38.219 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # return 0 00:07:38.219 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:07:38.219 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:38.219 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:07:38.219 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:07:38.219 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:38.219 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:07:38.219 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:07:38.219 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:38.219 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:07:38.219 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:07:38.219 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:38.219 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # nvmfpid=2928871 00:07:38.219 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # waitforlisten 2928871 00:07:38.219 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:38.219 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 2928871 ']' 00:07:38.219 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.219 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:38.219 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.219 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:38.219 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:38.219 [2024-10-01 15:25:17.319793] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:07:38.219 [2024-10-01 15:25:17.319857] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:38.219 [2024-10-01 15:25:17.361917] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:07:38.219 [2024-10-01 15:25:17.411706] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:38.219 [2024-10-01 15:25:17.458066] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:38.219 [2024-10-01 15:25:17.458119] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:38.219 [2024-10-01 15:25:17.458127] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:38.219 [2024-10-01 15:25:17.458134] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:38.219 [2024-10-01 15:25:17.458140] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:38.219 [2024-10-01 15:25:17.458307] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:38.219 [2024-10-01 15:25:17.458461] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.219 [2024-10-01 15:25:17.458462] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:38.789 15:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:38.789 15:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:07:38.789 15:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:07:38.789 15:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:38.789 15:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:38.789 15:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:38.789 15:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_transport -t tcp -o -u 8192 00:07:39.049 [2024-10-01 15:25:18.361322] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:39.049 15:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:39.309 15:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:39.309 15:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:39.570 15:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:39.570 15:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:39.831 15:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:39.831 15:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=7ac633d3-0b49-4abb-bc18-e9ec87a91e97 00:07:39.831 15:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7ac633d3-0b49-4abb-bc18-e9ec87a91e97 lvol 20 00:07:40.092 15:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=1d81f946-cbe2-4a50-9329-a60be34350eb 00:07:40.092 15:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:40.353 15:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode0 1d81f946-cbe2-4a50-9329-a60be34350eb 00:07:40.614 15:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:40.614 [2024-10-01 15:25:19.974463] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:40.614 15:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:40.875 15:25:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2929569 00:07:40.875 15:25:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:40.875 15:25:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:41.816 15:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 1d81f946-cbe2-4a50-9329-a60be34350eb MY_SNAPSHOT 00:07:42.077 15:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=8c126c6c-e78e-4d07-92e9-2c386e2a6d38 00:07:42.077 15:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 1d81f946-cbe2-4a50-9329-a60be34350eb 30 00:07:42.338 15:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 8c126c6c-e78e-4d07-92e9-2c386e2a6d38 MY_CLONE 00:07:42.598 15:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- 
# clone=50de080a-6aea-4059-a945-7cfb4dcb0941 00:07:42.598 15:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 50de080a-6aea-4059-a945-7cfb4dcb0941 00:07:42.859 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2929569 00:07:50.993 Initializing NVMe Controllers 00:07:50.993 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:50.993 Controller IO queue size 128, less than required. 00:07:50.993 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:50.993 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:50.993 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:50.993 Initialization complete. Launching workers. 00:07:50.993 ======================================================== 00:07:50.993 Latency(us) 00:07:50.993 Device Information : IOPS MiB/s Average min max 00:07:50.993 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16672.10 65.13 7679.04 1569.93 44658.45 00:07:50.993 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17241.70 67.35 7423.87 2526.75 36425.10 00:07:50.993 ======================================================== 00:07:50.993 Total : 33913.80 132.48 7549.31 1569.93 44658.45 00:07:50.993 00:07:51.253 15:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:51.253 15:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1d81f946-cbe2-4a50-9329-a60be34350eb 00:07:51.514 15:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7ac633d3-0b49-4abb-bc18-e9ec87a91e97 00:07:51.774 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:51.774 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:51.774 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:51.774 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # nvmfcleanup 00:07:51.774 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:51.774 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:51.774 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:51.774 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:51.774 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:51.774 rmmod nvme_tcp 00:07:51.774 rmmod nvme_fabrics 00:07:51.774 rmmod nvme_keyring 00:07:51.774 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:51.774 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:51.774 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:51.774 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@513 -- # '[' -n 2928871 ']' 00:07:51.774 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # killprocess 2928871 00:07:51.774 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 2928871 ']' 00:07:51.774 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 2928871 00:07:51.774 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:07:51.774 15:25:31 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:51.774 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2928871 00:07:51.774 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:51.774 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:51.774 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2928871' 00:07:51.774 killing process with pid 2928871 00:07:51.774 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 2928871 00:07:51.774 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 2928871 00:07:52.033 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:07:52.034 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:07:52.034 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:07:52.034 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:52.034 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-save 00:07:52.034 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:07:52.034 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-restore 00:07:52.034 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:52.034 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:52.034 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:52.034 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:52.034 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.945 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:53.945 00:07:53.945 real 0m24.092s 00:07:53.945 user 1m4.627s 00:07:53.945 sys 0m8.803s 00:07:53.945 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:53.945 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:53.945 ************************************ 00:07:53.945 END TEST nvmf_lvol 00:07:53.945 ************************************ 00:07:54.205 15:25:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:54.205 15:25:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:54.205 15:25:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:54.205 15:25:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:54.205 ************************************ 00:07:54.205 START TEST nvmf_lvs_grow 00:07:54.205 ************************************ 00:07:54.206 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:54.206 * Looking for test storage... 
00:07:54.206 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:54.206 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:54.206 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:07:54.206 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:54.206 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:54.206 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:54.206 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:54.206 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:54.206 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:54.206 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:54.206 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:54.206 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:54.206 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:54.206 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:54.206 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:54.206 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:54.206 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:54.206 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:54.206 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:07:54.206 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:54.206 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:54.206 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:54.206 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:54.206 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:54.206 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:54.206 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:54.206 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:54.206 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:54.206 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:54.467 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:54.467 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:54.467 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:54.467 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:54.467 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:54.467 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:54.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.467 --rc genhtml_branch_coverage=1 00:07:54.467 --rc 
genhtml_function_coverage=1 00:07:54.467 --rc genhtml_legend=1 00:07:54.467 --rc geninfo_all_blocks=1 00:07:54.467 --rc geninfo_unexecuted_blocks=1 00:07:54.467 00:07:54.467 ' 00:07:54.467 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:54.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.467 --rc genhtml_branch_coverage=1 00:07:54.467 --rc genhtml_function_coverage=1 00:07:54.467 --rc genhtml_legend=1 00:07:54.467 --rc geninfo_all_blocks=1 00:07:54.467 --rc geninfo_unexecuted_blocks=1 00:07:54.467 00:07:54.467 ' 00:07:54.467 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:54.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.467 --rc genhtml_branch_coverage=1 00:07:54.467 --rc genhtml_function_coverage=1 00:07:54.467 --rc genhtml_legend=1 00:07:54.467 --rc geninfo_all_blocks=1 00:07:54.467 --rc geninfo_unexecuted_blocks=1 00:07:54.467 00:07:54.467 ' 00:07:54.467 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:54.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.467 --rc genhtml_branch_coverage=1 00:07:54.467 --rc genhtml_function_coverage=1 00:07:54.467 --rc genhtml_legend=1 00:07:54.467 --rc geninfo_all_blocks=1 00:07:54.467 --rc geninfo_unexecuted_blocks=1 00:07:54.467 00:07:54.467 ' 00:07:54.467 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:54.467 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:54.467 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:54.467 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:54.467 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:54.467 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:54.467 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:54.467 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:54.467 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:54.467 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:54.467 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:54.467 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:54.467 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:54.467 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:54.467 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:54.467 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:54.467 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:54.467 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:54.467 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:54.467 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:54.467 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:07:54.468 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:54.468 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:54.468 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.468 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.468 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.468 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:54.468 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.468 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:54.468 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:54.468 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:54.468 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:54.468 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:54.468 15:25:33 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:54.468 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:54.468 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:54.468 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:54.468 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:54.468 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:54.468 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:54.468 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:54.468 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:54.468 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:07:54.468 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:54.468 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # prepare_net_devs 00:07:54.468 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@434 -- # local -g is_hw=no 00:07:54.468 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # remove_spdk_ns 00:07:54.468 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:54.468 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:54.468 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:54.468 
15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:07:54.468 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:07:54.468 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:54.468 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:02.608 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:02.608 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:02.608 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:02.608 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:02.608 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:02.608 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:02.608 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:02.608 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:02.608 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:02.608 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:02.608 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:02.608 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:02.608 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:02.608 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:02.608 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:08:02.608 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:02.608 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:02.608 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:02.608 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:02.608 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:02.608 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:02.608 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:02.608 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:02.608 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:02.608 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:02.608 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:02.608 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:08:02.608 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:08:02.608 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:08:02.608 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@354 -- # 
pci_devs=("${e810[@]}") 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:02.609 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:02.609 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:08:02.609 15:25:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:02.609 Found net devices under 0000:31:00.0: cvl_0_0 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:02.609 Found net devices under 0000:31:00.1: cvl_0_1 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # is_hw=yes 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:02.609 15:25:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 
10.0.0.2 00:08:02.609 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:02.609 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms 00:08:02.609 00:08:02.609 --- 10.0.0.2 ping statistics --- 00:08:02.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.609 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:02.609 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:02.609 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:08:02.609 00:08:02.609 --- 10.0.0.1 ping statistics --- 00:08:02.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.609 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # return 0 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # nvmfpid=2936008 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # waitforlisten 2936008 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 2936008 ']' 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.609 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:02.610 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:02.610 [2024-10-01 15:25:41.463110] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
00:08:02.610 [2024-10-01 15:25:41.463170] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:02.610 [2024-10-01 15:25:41.504297] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:02.610 [2024-10-01 15:25:41.552714] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.610 [2024-10-01 15:25:41.598044] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:02.610 [2024-10-01 15:25:41.598099] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:02.610 [2024-10-01 15:25:41.598107] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:02.610 [2024-10-01 15:25:41.598114] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:02.610 [2024-10-01 15:25:41.598120] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:02.610 [2024-10-01 15:25:41.598146] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.870 15:25:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:02.870 15:25:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:08:02.870 15:25:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:02.870 15:25:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:02.870 15:25:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:03.131 15:25:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:03.131 15:25:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:03.131 [2024-10-01 15:25:42.502048] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:03.131 15:25:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:03.131 15:25:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:03.131 15:25:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:03.131 15:25:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:03.131 ************************************ 00:08:03.131 START TEST lvs_grow_clean 00:08:03.131 ************************************ 00:08:03.131 15:25:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:08:03.131 15:25:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:08:03.131 15:25:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:03.131 15:25:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:03.131 15:25:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:03.131 15:25:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:03.131 15:25:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:03.131 15:25:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:03.131 15:25:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:03.391 15:25:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:03.391 15:25:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:03.391 15:25:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:03.653 15:25:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=12a18a53-3dc8-468d-b1fa-181538ce1470 00:08:03.653 15:25:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 12a18a53-3dc8-468d-b1fa-181538ce1470 00:08:03.653 15:25:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:03.913 15:25:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:03.913 15:25:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:03.913 15:25:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 12a18a53-3dc8-468d-b1fa-181538ce1470 lvol 150 00:08:03.913 15:25:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=67254445-1eff-4874-a763-ff6d70c43314 00:08:03.913 15:25:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:03.913 15:25:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:04.174 [2024-10-01 15:25:43.515817] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:04.174 [2024-10-01 15:25:43.515892] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:04.174 true 00:08:04.174 15:25:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 12a18a53-3dc8-468d-b1fa-181538ce1470 00:08:04.174 15:25:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:04.435 15:25:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:04.435 15:25:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:04.435 15:25:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 67254445-1eff-4874-a763-ff6d70c43314 00:08:04.696 15:25:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:04.957 [2024-10-01 15:25:44.230137] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:04.957 15:25:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:05.217 15:25:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2936720 00:08:05.217 15:25:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:05.217 15:25:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:05.217 15:25:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2936720 /var/tmp/bdevperf.sock 00:08:05.217 15:25:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 2936720 ']' 00:08:05.217 15:25:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:05.217 15:25:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:05.217 15:25:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:05.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:05.217 15:25:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:05.217 15:25:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:05.217 [2024-10-01 15:25:44.500296] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:08:05.217 [2024-10-01 15:25:44.500367] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2936720 ] 00:08:05.217 [2024-10-01 15:25:44.534934] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:05.217 [2024-10-01 15:25:44.583302] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.217 [2024-10-01 15:25:44.629921] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:06.159 15:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:06.159 15:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:08:06.159 15:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:06.159 Nvme0n1 00:08:06.159 15:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:06.419 [ 00:08:06.419 { 00:08:06.419 "name": "Nvme0n1", 00:08:06.419 "aliases": [ 00:08:06.419 "67254445-1eff-4874-a763-ff6d70c43314" 00:08:06.419 ], 00:08:06.419 "product_name": "NVMe disk", 00:08:06.419 "block_size": 4096, 00:08:06.419 "num_blocks": 38912, 00:08:06.419 "uuid": "67254445-1eff-4874-a763-ff6d70c43314", 00:08:06.419 "numa_id": 0, 00:08:06.419 "assigned_rate_limits": { 00:08:06.419 "rw_ios_per_sec": 0, 00:08:06.419 "rw_mbytes_per_sec": 0, 00:08:06.419 "r_mbytes_per_sec": 0, 00:08:06.419 "w_mbytes_per_sec": 0 00:08:06.419 }, 00:08:06.419 "claimed": false, 00:08:06.419 "zoned": false, 00:08:06.419 "supported_io_types": { 00:08:06.419 "read": true, 00:08:06.419 "write": true, 00:08:06.419 "unmap": true, 00:08:06.419 "flush": true, 00:08:06.419 "reset": true, 00:08:06.419 "nvme_admin": true, 00:08:06.419 "nvme_io": true, 00:08:06.419 "nvme_io_md": false, 00:08:06.419 "write_zeroes": true, 00:08:06.419 "zcopy": false, 00:08:06.419 
"get_zone_info": false, 00:08:06.419 "zone_management": false, 00:08:06.419 "zone_append": false, 00:08:06.419 "compare": true, 00:08:06.419 "compare_and_write": true, 00:08:06.419 "abort": true, 00:08:06.419 "seek_hole": false, 00:08:06.419 "seek_data": false, 00:08:06.419 "copy": true, 00:08:06.419 "nvme_iov_md": false 00:08:06.419 }, 00:08:06.419 "memory_domains": [ 00:08:06.419 { 00:08:06.419 "dma_device_id": "system", 00:08:06.419 "dma_device_type": 1 00:08:06.419 } 00:08:06.419 ], 00:08:06.419 "driver_specific": { 00:08:06.419 "nvme": [ 00:08:06.419 { 00:08:06.419 "trid": { 00:08:06.419 "trtype": "TCP", 00:08:06.419 "adrfam": "IPv4", 00:08:06.419 "traddr": "10.0.0.2", 00:08:06.419 "trsvcid": "4420", 00:08:06.419 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:06.419 }, 00:08:06.419 "ctrlr_data": { 00:08:06.419 "cntlid": 1, 00:08:06.419 "vendor_id": "0x8086", 00:08:06.419 "model_number": "SPDK bdev Controller", 00:08:06.419 "serial_number": "SPDK0", 00:08:06.419 "firmware_revision": "25.01", 00:08:06.419 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:06.419 "oacs": { 00:08:06.419 "security": 0, 00:08:06.419 "format": 0, 00:08:06.419 "firmware": 0, 00:08:06.419 "ns_manage": 0 00:08:06.420 }, 00:08:06.420 "multi_ctrlr": true, 00:08:06.420 "ana_reporting": false 00:08:06.420 }, 00:08:06.420 "vs": { 00:08:06.420 "nvme_version": "1.3" 00:08:06.420 }, 00:08:06.420 "ns_data": { 00:08:06.420 "id": 1, 00:08:06.420 "can_share": true 00:08:06.420 } 00:08:06.420 } 00:08:06.420 ], 00:08:06.420 "mp_policy": "active_passive" 00:08:06.420 } 00:08:06.420 } 00:08:06.420 ] 00:08:06.420 15:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2936970 00:08:06.420 15:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:06.420 15:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:06.420 Running I/O for 10 seconds... 00:08:07.803 Latency(us) 00:08:07.803 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:07.803 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:07.803 Nvme0n1 : 1.00 24504.00 95.72 0.00 0.00 0.00 0.00 0.00 00:08:07.803 =================================================================================================================== 00:08:07.803 Total : 24504.00 95.72 0.00 0.00 0.00 0.00 0.00 00:08:07.803 00:08:08.375 15:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 12a18a53-3dc8-468d-b1fa-181538ce1470 00:08:08.636 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:08.636 Nvme0n1 : 2.00 24891.00 97.23 0.00 0.00 0.00 0.00 0.00 00:08:08.636 =================================================================================================================== 00:08:08.636 Total : 24891.00 97.23 0.00 0.00 0.00 0.00 0.00 00:08:08.636 00:08:08.636 true 00:08:08.636 15:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 12a18a53-3dc8-468d-b1fa-181538ce1470 00:08:08.636 15:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:08.896 15:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:08.896 15:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:08.896 15:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@65 -- # wait 2936970 00:08:09.465 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:09.465 Nvme0n1 : 3.00 25037.67 97.80 0.00 0.00 0.00 0.00 0.00 00:08:09.465 =================================================================================================================== 00:08:09.465 Total : 25037.67 97.80 0.00 0.00 0.00 0.00 0.00 00:08:09.465 00:08:10.448 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:10.448 Nvme0n1 : 4.00 25128.25 98.16 0.00 0.00 0.00 0.00 0.00 00:08:10.448 =================================================================================================================== 00:08:10.448 Total : 25128.25 98.16 0.00 0.00 0.00 0.00 0.00 00:08:10.448 00:08:11.828 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:11.828 Nvme0n1 : 5.00 25182.40 98.37 0.00 0.00 0.00 0.00 0.00 00:08:11.828 =================================================================================================================== 00:08:11.828 Total : 25182.40 98.37 0.00 0.00 0.00 0.00 0.00 00:08:11.828 00:08:12.768 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:12.768 Nvme0n1 : 6.00 25230.67 98.56 0.00 0.00 0.00 0.00 0.00 00:08:12.768 =================================================================================================================== 00:08:12.768 Total : 25230.67 98.56 0.00 0.00 0.00 0.00 0.00 00:08:12.768 00:08:13.709 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:13.709 Nvme0n1 : 7.00 25257.71 98.66 0.00 0.00 0.00 0.00 0.00 00:08:13.709 =================================================================================================================== 00:08:13.709 Total : 25257.71 98.66 0.00 0.00 0.00 0.00 0.00 00:08:13.709 00:08:14.647 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:14.647 Nvme0n1 : 8.00 25290.00 98.79 0.00 0.00 0.00 
0.00 0.00 00:08:14.647 =================================================================================================================== 00:08:14.647 Total : 25290.00 98.79 0.00 0.00 0.00 0.00 0.00 00:08:14.647 00:08:15.586 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:15.586 Nvme0n1 : 9.00 25310.11 98.87 0.00 0.00 0.00 0.00 0.00 00:08:15.586 =================================================================================================================== 00:08:15.586 Total : 25310.11 98.87 0.00 0.00 0.00 0.00 0.00 00:08:15.586 00:08:16.526 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:16.526 Nvme0n1 : 10.00 25326.40 98.93 0.00 0.00 0.00 0.00 0.00 00:08:16.526 =================================================================================================================== 00:08:16.526 Total : 25326.40 98.93 0.00 0.00 0.00 0.00 0.00 00:08:16.526 00:08:16.526 00:08:16.526 Latency(us) 00:08:16.526 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:16.526 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:16.526 Nvme0n1 : 10.00 25325.01 98.93 0.00 0.00 5050.77 2484.91 18240.85 00:08:16.526 =================================================================================================================== 00:08:16.526 Total : 25325.01 98.93 0.00 0.00 5050.77 2484.91 18240.85 00:08:16.526 { 00:08:16.526 "results": [ 00:08:16.526 { 00:08:16.526 "job": "Nvme0n1", 00:08:16.526 "core_mask": "0x2", 00:08:16.526 "workload": "randwrite", 00:08:16.526 "status": "finished", 00:08:16.526 "queue_depth": 128, 00:08:16.526 "io_size": 4096, 00:08:16.526 "runtime": 10.003037, 00:08:16.526 "iops": 25325.00879482901, 00:08:16.526 "mibps": 98.92581560480082, 00:08:16.526 "io_failed": 0, 00:08:16.526 "io_timeout": 0, 00:08:16.526 "avg_latency_us": 5050.774845371135, 00:08:16.526 "min_latency_us": 2484.9066666666668, 00:08:16.526 "max_latency_us": 
18240.853333333333 00:08:16.526 } 00:08:16.526 ], 00:08:16.526 "core_count": 1 00:08:16.526 } 00:08:16.526 15:25:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2936720 00:08:16.526 15:25:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 2936720 ']' 00:08:16.526 15:25:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 2936720 00:08:16.526 15:25:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:08:16.526 15:25:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:16.526 15:25:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2936720 00:08:16.526 15:25:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:16.526 15:25:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:16.526 15:25:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2936720' 00:08:16.526 killing process with pid 2936720 00:08:16.526 15:25:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 2936720 00:08:16.526 Received shutdown signal, test time was about 10.000000 seconds 00:08:16.526 00:08:16.526 Latency(us) 00:08:16.526 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:16.526 =================================================================================================================== 00:08:16.526 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:16.526 15:25:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 
-- # wait 2936720 00:08:16.786 15:25:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:17.046 15:25:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:17.046 15:25:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 12a18a53-3dc8-468d-b1fa-181538ce1470 00:08:17.046 15:25:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:17.305 15:25:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:17.305 15:25:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:17.305 15:25:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:17.306 [2024-10-01 15:25:56.744040] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:17.565 15:25:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 12a18a53-3dc8-468d-b1fa-181538ce1470 00:08:17.566 15:25:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:08:17.566 15:25:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 12a18a53-3dc8-468d-b1fa-181538ce1470 00:08:17.566 15:25:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:17.566 15:25:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:17.566 15:25:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:17.566 15:25:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:17.566 15:25:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:17.566 15:25:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:17.566 15:25:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:17.566 15:25:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:17.566 15:25:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 12a18a53-3dc8-468d-b1fa-181538ce1470 00:08:17.566 request: 00:08:17.566 { 00:08:17.566 "uuid": "12a18a53-3dc8-468d-b1fa-181538ce1470", 00:08:17.566 "method": "bdev_lvol_get_lvstores", 00:08:17.566 "req_id": 1 00:08:17.566 } 00:08:17.566 Got JSON-RPC error response 00:08:17.566 response: 
00:08:17.566 { 00:08:17.566 "code": -19, 00:08:17.566 "message": "No such device" 00:08:17.566 } 00:08:17.566 15:25:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:08:17.566 15:25:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:17.566 15:25:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:17.566 15:25:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:17.566 15:25:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:17.826 aio_bdev 00:08:17.826 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 67254445-1eff-4874-a763-ff6d70c43314 00:08:17.826 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=67254445-1eff-4874-a763-ff6d70c43314 00:08:17.826 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:17.826 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:08:17.826 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:17.826 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:17.826 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:17.826 15:25:57 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 67254445-1eff-4874-a763-ff6d70c43314 -t 2000 00:08:18.086 [ 00:08:18.086 { 00:08:18.086 "name": "67254445-1eff-4874-a763-ff6d70c43314", 00:08:18.086 "aliases": [ 00:08:18.086 "lvs/lvol" 00:08:18.086 ], 00:08:18.086 "product_name": "Logical Volume", 00:08:18.086 "block_size": 4096, 00:08:18.086 "num_blocks": 38912, 00:08:18.086 "uuid": "67254445-1eff-4874-a763-ff6d70c43314", 00:08:18.086 "assigned_rate_limits": { 00:08:18.086 "rw_ios_per_sec": 0, 00:08:18.086 "rw_mbytes_per_sec": 0, 00:08:18.086 "r_mbytes_per_sec": 0, 00:08:18.086 "w_mbytes_per_sec": 0 00:08:18.086 }, 00:08:18.086 "claimed": false, 00:08:18.086 "zoned": false, 00:08:18.086 "supported_io_types": { 00:08:18.086 "read": true, 00:08:18.086 "write": true, 00:08:18.086 "unmap": true, 00:08:18.086 "flush": false, 00:08:18.086 "reset": true, 00:08:18.086 "nvme_admin": false, 00:08:18.086 "nvme_io": false, 00:08:18.086 "nvme_io_md": false, 00:08:18.086 "write_zeroes": true, 00:08:18.086 "zcopy": false, 00:08:18.086 "get_zone_info": false, 00:08:18.086 "zone_management": false, 00:08:18.086 "zone_append": false, 00:08:18.086 "compare": false, 00:08:18.086 "compare_and_write": false, 00:08:18.086 "abort": false, 00:08:18.086 "seek_hole": true, 00:08:18.086 "seek_data": true, 00:08:18.086 "copy": false, 00:08:18.086 "nvme_iov_md": false 00:08:18.086 }, 00:08:18.086 "driver_specific": { 00:08:18.086 "lvol": { 00:08:18.086 "lvol_store_uuid": "12a18a53-3dc8-468d-b1fa-181538ce1470", 00:08:18.086 "base_bdev": "aio_bdev", 00:08:18.086 "thin_provision": false, 00:08:18.086 "num_allocated_clusters": 38, 00:08:18.086 "snapshot": false, 00:08:18.086 "clone": false, 00:08:18.086 "esnap_clone": false 00:08:18.086 } 00:08:18.086 } 00:08:18.086 } 00:08:18.086 ] 00:08:18.086 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@907 -- # return 0 00:08:18.086 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 12a18a53-3dc8-468d-b1fa-181538ce1470 00:08:18.086 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:18.347 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:18.347 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 12a18a53-3dc8-468d-b1fa-181538ce1470 00:08:18.347 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:18.347 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:18.347 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 67254445-1eff-4874-a763-ff6d70c43314 00:08:18.607 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 12a18a53-3dc8-468d-b1fa-181538ce1470 00:08:18.867 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:18.867 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:18.867 00:08:18.867 real 0m15.723s 
00:08:18.867 user 0m15.490s 00:08:18.867 sys 0m1.345s 00:08:18.867 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:18.867 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:18.867 ************************************ 00:08:18.867 END TEST lvs_grow_clean 00:08:18.867 ************************************ 00:08:19.128 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:19.128 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:19.128 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:19.128 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:19.128 ************************************ 00:08:19.128 START TEST lvs_grow_dirty 00:08:19.128 ************************************ 00:08:19.128 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:08:19.128 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:19.128 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:19.128 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:19.128 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:19.128 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:19.128 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:19.128 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:19.128 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:19.128 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:19.128 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:19.128 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:19.388 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=e0c54376-dca6-40ea-b067-50eb9d1bb1b8 00:08:19.388 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:19.388 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0c54376-dca6-40ea-b067-50eb9d1bb1b8 00:08:19.648 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:19.648 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:19.648 15:25:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e0c54376-dca6-40ea-b067-50eb9d1bb1b8 lvol 150 00:08:19.648 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=7c60705e-e9c7-424d-9fc7-89e1fc2390a1 00:08:19.648 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:19.909 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:19.909 [2024-10-01 15:25:59.246558] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:19.909 [2024-10-01 15:25:59.246602] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:19.909 true 00:08:19.909 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0c54376-dca6-40ea-b067-50eb9d1bb1b8 00:08:19.909 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:20.169 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:20.169 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:20.169 15:25:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7c60705e-e9c7-424d-9fc7-89e1fc2390a1 00:08:20.429 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:20.689 [2024-10-01 15:25:59.916497] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:20.689 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:20.690 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2939819 00:08:20.690 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:20.690 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:20.690 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2939819 /var/tmp/bdevperf.sock 00:08:20.690 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2939819 ']' 00:08:20.690 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:20.690 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:08:20.690 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:20.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:20.690 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:20.690 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:20.950 [2024-10-01 15:26:00.144752] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:08:20.950 [2024-10-01 15:26:00.144824] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2939819 ] 00:08:20.950 [2024-10-01 15:26:00.176626] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:20.950 [2024-10-01 15:26:00.223153] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.950 [2024-10-01 15:26:00.251599] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:21.521 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:21.521 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:21.521 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:21.782 Nvme0n1 00:08:21.782 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:22.042 [ 00:08:22.042 { 00:08:22.042 "name": "Nvme0n1", 00:08:22.043 "aliases": [ 00:08:22.043 "7c60705e-e9c7-424d-9fc7-89e1fc2390a1" 00:08:22.043 ], 00:08:22.043 "product_name": "NVMe disk", 00:08:22.043 "block_size": 4096, 00:08:22.043 "num_blocks": 38912, 00:08:22.043 "uuid": "7c60705e-e9c7-424d-9fc7-89e1fc2390a1", 00:08:22.043 "numa_id": 0, 00:08:22.043 "assigned_rate_limits": { 00:08:22.043 "rw_ios_per_sec": 0, 00:08:22.043 "rw_mbytes_per_sec": 0, 00:08:22.043 "r_mbytes_per_sec": 0, 00:08:22.043 "w_mbytes_per_sec": 0 00:08:22.043 }, 00:08:22.043 "claimed": false, 00:08:22.043 "zoned": false, 00:08:22.043 "supported_io_types": { 00:08:22.043 "read": true, 00:08:22.043 "write": true, 00:08:22.043 "unmap": true, 00:08:22.043 "flush": true, 00:08:22.043 "reset": true, 00:08:22.043 "nvme_admin": true, 00:08:22.043 "nvme_io": true, 00:08:22.043 "nvme_io_md": false, 00:08:22.043 "write_zeroes": true, 00:08:22.043 "zcopy": false, 00:08:22.043 
"get_zone_info": false, 00:08:22.043 "zone_management": false, 00:08:22.043 "zone_append": false, 00:08:22.043 "compare": true, 00:08:22.043 "compare_and_write": true, 00:08:22.043 "abort": true, 00:08:22.043 "seek_hole": false, 00:08:22.043 "seek_data": false, 00:08:22.043 "copy": true, 00:08:22.043 "nvme_iov_md": false 00:08:22.043 }, 00:08:22.043 "memory_domains": [ 00:08:22.043 { 00:08:22.043 "dma_device_id": "system", 00:08:22.043 "dma_device_type": 1 00:08:22.043 } 00:08:22.043 ], 00:08:22.043 "driver_specific": { 00:08:22.043 "nvme": [ 00:08:22.043 { 00:08:22.043 "trid": { 00:08:22.043 "trtype": "TCP", 00:08:22.043 "adrfam": "IPv4", 00:08:22.043 "traddr": "10.0.0.2", 00:08:22.043 "trsvcid": "4420", 00:08:22.043 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:22.043 }, 00:08:22.043 "ctrlr_data": { 00:08:22.043 "cntlid": 1, 00:08:22.043 "vendor_id": "0x8086", 00:08:22.043 "model_number": "SPDK bdev Controller", 00:08:22.043 "serial_number": "SPDK0", 00:08:22.043 "firmware_revision": "25.01", 00:08:22.043 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:22.043 "oacs": { 00:08:22.043 "security": 0, 00:08:22.043 "format": 0, 00:08:22.043 "firmware": 0, 00:08:22.043 "ns_manage": 0 00:08:22.043 }, 00:08:22.043 "multi_ctrlr": true, 00:08:22.043 "ana_reporting": false 00:08:22.043 }, 00:08:22.043 "vs": { 00:08:22.043 "nvme_version": "1.3" 00:08:22.043 }, 00:08:22.043 "ns_data": { 00:08:22.043 "id": 1, 00:08:22.043 "can_share": true 00:08:22.043 } 00:08:22.043 } 00:08:22.043 ], 00:08:22.043 "mp_policy": "active_passive" 00:08:22.043 } 00:08:22.043 } 00:08:22.043 ] 00:08:22.043 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2940153 00:08:22.043 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:22.043 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:22.043 Running I/O for 10 seconds... 00:08:23.423 Latency(us) 00:08:23.423 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:23.423 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:23.423 Nvme0n1 : 1.00 24888.00 97.22 0.00 0.00 0.00 0.00 0.00 00:08:23.423 =================================================================================================================== 00:08:23.423 Total : 24888.00 97.22 0.00 0.00 0.00 0.00 0.00 00:08:23.423 00:08:23.994 15:26:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e0c54376-dca6-40ea-b067-50eb9d1bb1b8 00:08:24.255 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:24.255 Nvme0n1 : 2.00 25092.00 98.02 0.00 0.00 0.00 0.00 0.00 00:08:24.255 =================================================================================================================== 00:08:24.255 Total : 25092.00 98.02 0.00 0.00 0.00 0.00 0.00 00:08:24.255 00:08:24.255 true 00:08:24.255 15:26:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0c54376-dca6-40ea-b067-50eb9d1bb1b8 00:08:24.255 15:26:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:24.516 15:26:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:24.516 15:26:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:24.516 15:26:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@65 -- # wait 2940153 00:08:25.086 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:25.086 Nvme0n1 : 3.00 25170.33 98.32 0.00 0.00 0.00 0.00 0.00 00:08:25.086 =================================================================================================================== 00:08:25.086 Total : 25170.33 98.32 0.00 0.00 0.00 0.00 0.00 00:08:25.086 00:08:26.027 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:26.027 Nvme0n1 : 4.00 25229.50 98.55 0.00 0.00 0.00 0.00 0.00 00:08:26.027 =================================================================================================================== 00:08:26.027 Total : 25229.50 98.55 0.00 0.00 0.00 0.00 0.00 00:08:26.027 00:08:27.410 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:27.410 Nvme0n1 : 5.00 25265.20 98.69 0.00 0.00 0.00 0.00 0.00 00:08:27.410 =================================================================================================================== 00:08:27.410 Total : 25265.20 98.69 0.00 0.00 0.00 0.00 0.00 00:08:27.410 00:08:28.349 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:28.349 Nvme0n1 : 6.00 25299.67 98.83 0.00 0.00 0.00 0.00 0.00 00:08:28.349 =================================================================================================================== 00:08:28.349 Total : 25299.67 98.83 0.00 0.00 0.00 0.00 0.00 00:08:28.349 00:08:29.288 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:29.288 Nvme0n1 : 7.00 25323.86 98.92 0.00 0.00 0.00 0.00 0.00 00:08:29.288 =================================================================================================================== 00:08:29.288 Total : 25323.86 98.92 0.00 0.00 0.00 0.00 0.00 00:08:29.288 00:08:30.228 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:30.228 Nvme0n1 : 8.00 25341.88 98.99 0.00 0.00 0.00 
0.00 0.00 00:08:30.228 =================================================================================================================== 00:08:30.228 Total : 25341.88 98.99 0.00 0.00 0.00 0.00 0.00 00:08:30.228 00:08:31.169 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:31.169 Nvme0n1 : 9.00 25350.89 99.03 0.00 0.00 0.00 0.00 0.00 00:08:31.169 =================================================================================================================== 00:08:31.169 Total : 25350.89 99.03 0.00 0.00 0.00 0.00 0.00 00:08:31.169 00:08:32.111 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.111 Nvme0n1 : 10.00 25362.90 99.07 0.00 0.00 0.00 0.00 0.00 00:08:32.111 =================================================================================================================== 00:08:32.111 Total : 25362.90 99.07 0.00 0.00 0.00 0.00 0.00 00:08:32.111 00:08:32.111 00:08:32.111 Latency(us) 00:08:32.111 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:32.111 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.111 Nvme0n1 : 10.00 25365.47 99.08 0.00 0.00 5043.29 3031.04 15291.73 00:08:32.111 =================================================================================================================== 00:08:32.111 Total : 25365.47 99.08 0.00 0.00 5043.29 3031.04 15291.73 00:08:32.111 { 00:08:32.111 "results": [ 00:08:32.111 { 00:08:32.111 "job": "Nvme0n1", 00:08:32.111 "core_mask": "0x2", 00:08:32.111 "workload": "randwrite", 00:08:32.111 "status": "finished", 00:08:32.111 "queue_depth": 128, 00:08:32.111 "io_size": 4096, 00:08:32.111 "runtime": 10.003361, 00:08:32.111 "iops": 25365.47466396544, 00:08:32.111 "mibps": 99.083885406115, 00:08:32.111 "io_failed": 0, 00:08:32.111 "io_timeout": 0, 00:08:32.111 "avg_latency_us": 5043.289564015659, 00:08:32.111 "min_latency_us": 3031.04, 00:08:32.111 "max_latency_us": 15291.733333333334 
00:08:32.111 } 00:08:32.111 ], 00:08:32.111 "core_count": 1 00:08:32.111 } 00:08:32.111 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2939819 00:08:32.111 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 2939819 ']' 00:08:32.111 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 2939819 00:08:32.111 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:08:32.111 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:32.111 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2939819 00:08:32.372 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:32.372 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:32.372 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2939819' 00:08:32.372 killing process with pid 2939819 00:08:32.372 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 2939819 00:08:32.372 Received shutdown signal, test time was about 10.000000 seconds 00:08:32.372 00:08:32.372 Latency(us) 00:08:32.372 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:32.372 =================================================================================================================== 00:08:32.373 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:32.373 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 2939819 
00:08:32.373 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:32.634 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:32.634 15:26:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0c54376-dca6-40ea-b067-50eb9d1bb1b8 00:08:32.634 15:26:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:32.895 15:26:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:32.895 15:26:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:32.895 15:26:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2936008 00:08:32.895 15:26:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2936008 00:08:32.895 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2936008 Killed "${NVMF_APP[@]}" "$@" 00:08:32.895 15:26:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:32.895 15:26:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:32.895 15:26:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:32.895 15:26:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:08:32.895 15:26:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:32.895 15:26:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # nvmfpid=2942186 00:08:32.895 15:26:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # waitforlisten 2942186 00:08:32.895 15:26:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2942186 ']' 00:08:32.895 15:26:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:32.895 15:26:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.895 15:26:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:32.895 15:26:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.895 15:26:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:32.895 15:26:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:32.895 [2024-10-01 15:26:12.287804] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
00:08:32.895 [2024-10-01 15:26:12.287859] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:32.895 [2024-10-01 15:26:12.324717] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:33.155 [2024-10-01 15:26:12.371133] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.155 [2024-10-01 15:26:12.399218] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:33.155 [2024-10-01 15:26:12.399250] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:33.155 [2024-10-01 15:26:12.399256] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:33.155 [2024-10-01 15:26:12.399261] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:33.155 [2024-10-01 15:26:12.399266] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:33.155 [2024-10-01 15:26:12.399280] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.726 15:26:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:33.726 15:26:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:33.726 15:26:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:33.726 15:26:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:33.726 15:26:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:33.726 15:26:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:33.726 15:26:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:33.987 [2024-10-01 15:26:13.259349] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:33.987 [2024-10-01 15:26:13.259416] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:33.987 [2024-10-01 15:26:13.259438] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:33.987 15:26:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:33.987 15:26:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 7c60705e-e9c7-424d-9fc7-89e1fc2390a1 00:08:33.987 15:26:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=7c60705e-e9c7-424d-9fc7-89e1fc2390a1 
00:08:33.987 15:26:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:33.987 15:26:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:33.987 15:26:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:33.987 15:26:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:33.987 15:26:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:33.987 15:26:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7c60705e-e9c7-424d-9fc7-89e1fc2390a1 -t 2000 00:08:34.247 [ 00:08:34.247 { 00:08:34.247 "name": "7c60705e-e9c7-424d-9fc7-89e1fc2390a1", 00:08:34.247 "aliases": [ 00:08:34.247 "lvs/lvol" 00:08:34.247 ], 00:08:34.247 "product_name": "Logical Volume", 00:08:34.247 "block_size": 4096, 00:08:34.247 "num_blocks": 38912, 00:08:34.247 "uuid": "7c60705e-e9c7-424d-9fc7-89e1fc2390a1", 00:08:34.247 "assigned_rate_limits": { 00:08:34.247 "rw_ios_per_sec": 0, 00:08:34.247 "rw_mbytes_per_sec": 0, 00:08:34.247 "r_mbytes_per_sec": 0, 00:08:34.247 "w_mbytes_per_sec": 0 00:08:34.247 }, 00:08:34.247 "claimed": false, 00:08:34.247 "zoned": false, 00:08:34.247 "supported_io_types": { 00:08:34.247 "read": true, 00:08:34.247 "write": true, 00:08:34.247 "unmap": true, 00:08:34.247 "flush": false, 00:08:34.248 "reset": true, 00:08:34.248 "nvme_admin": false, 00:08:34.248 "nvme_io": false, 00:08:34.248 "nvme_io_md": false, 00:08:34.248 "write_zeroes": true, 00:08:34.248 "zcopy": false, 00:08:34.248 "get_zone_info": false, 00:08:34.248 "zone_management": false, 00:08:34.248 "zone_append": 
false, 00:08:34.248 "compare": false, 00:08:34.248 "compare_and_write": false, 00:08:34.248 "abort": false, 00:08:34.248 "seek_hole": true, 00:08:34.248 "seek_data": true, 00:08:34.248 "copy": false, 00:08:34.248 "nvme_iov_md": false 00:08:34.248 }, 00:08:34.248 "driver_specific": { 00:08:34.248 "lvol": { 00:08:34.248 "lvol_store_uuid": "e0c54376-dca6-40ea-b067-50eb9d1bb1b8", 00:08:34.248 "base_bdev": "aio_bdev", 00:08:34.248 "thin_provision": false, 00:08:34.248 "num_allocated_clusters": 38, 00:08:34.248 "snapshot": false, 00:08:34.248 "clone": false, 00:08:34.248 "esnap_clone": false 00:08:34.248 } 00:08:34.248 } 00:08:34.248 } 00:08:34.248 ] 00:08:34.248 15:26:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:34.248 15:26:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0c54376-dca6-40ea-b067-50eb9d1bb1b8 00:08:34.248 15:26:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:34.508 15:26:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:34.508 15:26:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0c54376-dca6-40ea-b067-50eb9d1bb1b8 00:08:34.509 15:26:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:34.509 15:26:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:34.509 15:26:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:08:34.770 [2024-10-01 15:26:14.088074] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:34.770 15:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0c54376-dca6-40ea-b067-50eb9d1bb1b8 00:08:34.770 15:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:08:34.770 15:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0c54376-dca6-40ea-b067-50eb9d1bb1b8 00:08:34.770 15:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:34.770 15:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:34.770 15:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:34.770 15:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:34.770 15:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:34.770 15:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:34.770 15:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:34.770 15:26:14 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:34.770 15:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0c54376-dca6-40ea-b067-50eb9d1bb1b8 00:08:35.031 request: 00:08:35.031 { 00:08:35.031 "uuid": "e0c54376-dca6-40ea-b067-50eb9d1bb1b8", 00:08:35.031 "method": "bdev_lvol_get_lvstores", 00:08:35.031 "req_id": 1 00:08:35.031 } 00:08:35.031 Got JSON-RPC error response 00:08:35.031 response: 00:08:35.031 { 00:08:35.031 "code": -19, 00:08:35.031 "message": "No such device" 00:08:35.031 } 00:08:35.031 15:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:08:35.031 15:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:35.031 15:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:35.031 15:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:35.031 15:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:35.031 aio_bdev 00:08:35.031 15:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 7c60705e-e9c7-424d-9fc7-89e1fc2390a1 00:08:35.031 15:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=7c60705e-e9c7-424d-9fc7-89e1fc2390a1 00:08:35.031 15:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:35.031 15:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:35.031 15:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:35.031 15:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:35.031 15:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:35.291 15:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7c60705e-e9c7-424d-9fc7-89e1fc2390a1 -t 2000 00:08:35.552 [ 00:08:35.552 { 00:08:35.552 "name": "7c60705e-e9c7-424d-9fc7-89e1fc2390a1", 00:08:35.552 "aliases": [ 00:08:35.552 "lvs/lvol" 00:08:35.552 ], 00:08:35.552 "product_name": "Logical Volume", 00:08:35.552 "block_size": 4096, 00:08:35.552 "num_blocks": 38912, 00:08:35.552 "uuid": "7c60705e-e9c7-424d-9fc7-89e1fc2390a1", 00:08:35.552 "assigned_rate_limits": { 00:08:35.552 "rw_ios_per_sec": 0, 00:08:35.552 "rw_mbytes_per_sec": 0, 00:08:35.552 "r_mbytes_per_sec": 0, 00:08:35.552 "w_mbytes_per_sec": 0 00:08:35.552 }, 00:08:35.552 "claimed": false, 00:08:35.552 "zoned": false, 00:08:35.552 "supported_io_types": { 00:08:35.552 "read": true, 00:08:35.552 "write": true, 00:08:35.552 "unmap": true, 00:08:35.552 "flush": false, 00:08:35.552 "reset": true, 00:08:35.552 "nvme_admin": false, 00:08:35.552 "nvme_io": false, 00:08:35.552 "nvme_io_md": false, 00:08:35.552 "write_zeroes": true, 00:08:35.552 "zcopy": false, 00:08:35.552 "get_zone_info": false, 00:08:35.552 "zone_management": false, 00:08:35.552 "zone_append": false, 00:08:35.552 "compare": false, 00:08:35.552 "compare_and_write": false, 
00:08:35.552 "abort": false, 00:08:35.552 "seek_hole": true, 00:08:35.552 "seek_data": true, 00:08:35.552 "copy": false, 00:08:35.552 "nvme_iov_md": false 00:08:35.552 }, 00:08:35.552 "driver_specific": { 00:08:35.552 "lvol": { 00:08:35.552 "lvol_store_uuid": "e0c54376-dca6-40ea-b067-50eb9d1bb1b8", 00:08:35.552 "base_bdev": "aio_bdev", 00:08:35.552 "thin_provision": false, 00:08:35.552 "num_allocated_clusters": 38, 00:08:35.552 "snapshot": false, 00:08:35.552 "clone": false, 00:08:35.552 "esnap_clone": false 00:08:35.552 } 00:08:35.552 } 00:08:35.552 } 00:08:35.552 ] 00:08:35.552 15:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:35.552 15:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0c54376-dca6-40ea-b067-50eb9d1bb1b8 00:08:35.552 15:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:35.552 15:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:35.552 15:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0c54376-dca6-40ea-b067-50eb9d1bb1b8 00:08:35.552 15:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:35.813 15:26:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:35.813 15:26:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7c60705e-e9c7-424d-9fc7-89e1fc2390a1 00:08:36.073 15:26:15 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e0c54376-dca6-40ea-b067-50eb9d1bb1b8 00:08:36.073 15:26:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:36.334 15:26:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:36.334 00:08:36.334 real 0m17.326s 00:08:36.334 user 0m45.451s 00:08:36.334 sys 0m2.980s 00:08:36.334 15:26:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:36.334 15:26:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:36.334 ************************************ 00:08:36.334 END TEST lvs_grow_dirty 00:08:36.334 ************************************ 00:08:36.334 15:26:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:36.334 15:26:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:08:36.334 15:26:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:08:36.334 15:26:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:08:36.334 15:26:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:36.334 15:26:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:08:36.334 15:26:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:08:36.334 15:26:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@820 -- # for n in $shm_files 00:08:36.334 15:26:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:36.334 nvmf_trace.0 00:08:36.595 15:26:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:08:36.595 15:26:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:36.595 15:26:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # nvmfcleanup 00:08:36.595 15:26:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:36.595 15:26:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:36.595 15:26:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:36.595 15:26:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:36.595 15:26:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:36.595 rmmod nvme_tcp 00:08:36.595 rmmod nvme_fabrics 00:08:36.595 rmmod nvme_keyring 00:08:36.595 15:26:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:36.595 15:26:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:36.595 15:26:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:36.595 15:26:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@513 -- # '[' -n 2942186 ']' 00:08:36.595 15:26:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # killprocess 2942186 00:08:36.595 15:26:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 2942186 ']' 00:08:36.595 15:26:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 2942186 
00:08:36.595 15:26:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:08:36.595 15:26:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:36.595 15:26:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2942186 00:08:36.595 15:26:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:36.596 15:26:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:36.596 15:26:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2942186' 00:08:36.596 killing process with pid 2942186 00:08:36.596 15:26:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 2942186 00:08:36.596 15:26:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 2942186 00:08:36.856 15:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:36.856 15:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:08:36.856 15:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:08:36.856 15:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:36.856 15:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-save 00:08:36.857 15:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:08:36.857 15:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-restore 00:08:36.857 15:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:36.857 15:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:08:36.857 15:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:36.857 15:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:36.857 15:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.770 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:38.770 00:08:38.770 real 0m44.684s 00:08:38.770 user 1m7.386s 00:08:38.770 sys 0m10.590s 00:08:38.770 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:38.770 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:38.770 ************************************ 00:08:38.770 END TEST nvmf_lvs_grow 00:08:38.770 ************************************ 00:08:38.770 15:26:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:38.770 15:26:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:38.770 15:26:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:38.770 15:26:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:38.770 ************************************ 00:08:38.770 START TEST nvmf_bdev_io_wait 00:08:38.770 ************************************ 00:08:38.770 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:39.032 * Looking for test storage... 
00:08:39.032 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:39.032 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:39.032 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:08:39.032 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:39.032 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:39.032 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:39.032 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:39.032 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:39.032 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:39.032 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:39.032 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:39.032 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:39.032 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:39.032 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:39.032 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:39.032 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:39.032 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:39.032 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:08:39.032 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:39.032 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:39.032 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:39.032 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:39.032 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:39.033 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.033 --rc genhtml_branch_coverage=1 00:08:39.033 --rc genhtml_function_coverage=1 00:08:39.033 --rc genhtml_legend=1 00:08:39.033 --rc geninfo_all_blocks=1 00:08:39.033 --rc geninfo_unexecuted_blocks=1 00:08:39.033 00:08:39.033 ' 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:39.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.033 --rc genhtml_branch_coverage=1 00:08:39.033 --rc genhtml_function_coverage=1 00:08:39.033 --rc genhtml_legend=1 00:08:39.033 --rc geninfo_all_blocks=1 00:08:39.033 --rc geninfo_unexecuted_blocks=1 00:08:39.033 00:08:39.033 ' 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:39.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.033 --rc genhtml_branch_coverage=1 00:08:39.033 --rc genhtml_function_coverage=1 00:08:39.033 --rc genhtml_legend=1 00:08:39.033 --rc geninfo_all_blocks=1 00:08:39.033 --rc geninfo_unexecuted_blocks=1 00:08:39.033 00:08:39.033 ' 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:39.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.033 --rc genhtml_branch_coverage=1 00:08:39.033 --rc genhtml_function_coverage=1 00:08:39.033 --rc genhtml_legend=1 00:08:39.033 --rc geninfo_all_blocks=1 00:08:39.033 --rc geninfo_unexecuted_blocks=1 00:08:39.033 00:08:39.033 ' 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:39.033 15:26:18 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:39.033 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:39.033 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:47.177 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:47.177 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:47.177 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:08:47.178 15:26:25 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:47.178 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:47.178 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:47.178 
15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:47.178 Found net devices under 0000:31:00.0: cvl_0_0 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:47.178 
15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:47.178 Found net devices under 0000:31:00.1: cvl_0_1 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # is_hw=yes 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:47.178 15:26:25 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:47.178 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:47.179 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:47.179 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:47.179 15:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:47.179 15:26:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:47.179 15:26:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:47.179 15:26:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:47.179 15:26:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:47.179 15:26:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:47.179 15:26:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:47.179 15:26:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:47.179 15:26:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:47.179 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:47.179 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms 00:08:47.179 00:08:47.179 --- 10.0.0.2 ping statistics --- 00:08:47.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:47.179 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:08:47.179 15:26:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:47.179 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:47.179 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:08:47.179 00:08:47.179 --- 10.0.0.1 ping statistics --- 00:08:47.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:47.179 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:08:47.179 15:26:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:47.179 15:26:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # return 0 00:08:47.179 15:26:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:47.179 15:26:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:47.179 15:26:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:47.179 15:26:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:47.179 15:26:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:47.179 15:26:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:47.179 15:26:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:47.179 15:26:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:47.179 15:26:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:47.179 15:26:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:47.179 15:26:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:47.179 15:26:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # nvmfpid=2947324 00:08:47.179 15:26:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # waitforlisten 2947324 00:08:47.179 15:26:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:47.179 15:26:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 2947324 ']' 00:08:47.179 15:26:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.179 15:26:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:47.179 15:26:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:47.179 15:26:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:47.179 15:26:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:47.179 [2024-10-01 15:26:26.274827] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:08:47.179 [2024-10-01 15:26:26.274890] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:47.179 [2024-10-01 15:26:26.316931] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:47.179 [2024-10-01 15:26:26.364935] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:47.179 [2024-10-01 15:26:26.413530] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:47.179 [2024-10-01 15:26:26.413585] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:47.179 [2024-10-01 15:26:26.413594] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:47.179 [2024-10-01 15:26:26.413601] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:47.179 [2024-10-01 15:26:26.413607] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:47.179 [2024-10-01 15:26:26.413768] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:47.179 [2024-10-01 15:26:26.413942] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:47.179 [2024-10-01 15:26:26.414039] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.179 [2024-10-01 15:26:26.414040] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:08:47.751 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:47.751 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:08:47.751 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:47.751 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:47.751 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:47.751 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:47.751 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:47.751 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.751 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:47.751 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.751 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:47.751 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.751 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:08:48.013 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.013 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:48.013 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.013 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:48.013 [2024-10-01 15:26:27.225215] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:48.013 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.013 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:48.013 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.013 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:48.013 Malloc0 00:08:48.013 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.013 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:48.013 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.013 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:48.013 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.013 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:48.013 15:26:27 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.013 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:48.013 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.013 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:48.013 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.013 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:48.013 [2024-10-01 15:26:27.312122] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:48.013 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.013 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2947654 00:08:48.013 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2947656 00:08:48.013 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:48.013 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:48.013 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:08:48.013 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:08:48.013 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:08:48.013 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:08:48.013 { 00:08:48.013 "params": { 00:08:48.013 "name": "Nvme$subsystem", 00:08:48.013 "trtype": "$TEST_TRANSPORT", 00:08:48.013 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:48.013 "adrfam": "ipv4", 00:08:48.013 "trsvcid": "$NVMF_PORT", 00:08:48.013 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:48.013 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:48.013 "hdgst": ${hdgst:-false}, 00:08:48.013 "ddgst": ${ddgst:-false} 00:08:48.013 }, 00:08:48.013 "method": "bdev_nvme_attach_controller" 00:08:48.013 } 00:08:48.013 EOF 00:08:48.013 )") 00:08:48.013 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2947659 00:08:48.013 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:48.013 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:48.013 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:08:48.013 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:08:48.013 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:08:48.013 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:08:48.013 { 00:08:48.013 "params": { 00:08:48.013 "name": "Nvme$subsystem", 00:08:48.013 "trtype": "$TEST_TRANSPORT", 00:08:48.013 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:48.013 "adrfam": "ipv4", 00:08:48.013 "trsvcid": "$NVMF_PORT", 00:08:48.013 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:48.013 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:48.013 "hdgst": ${hdgst:-false}, 00:08:48.013 "ddgst": ${ddgst:-false} 00:08:48.013 }, 
00:08:48.013 "method": "bdev_nvme_attach_controller" 00:08:48.013 } 00:08:48.013 EOF 00:08:48.013 )") 00:08:48.013 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2947663 00:08:48.014 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:48.014 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:48.014 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:48.014 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:08:48.014 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:08:48.014 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:08:48.014 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:08:48.014 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:08:48.014 { 00:08:48.014 "params": { 00:08:48.014 "name": "Nvme$subsystem", 00:08:48.014 "trtype": "$TEST_TRANSPORT", 00:08:48.014 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:48.014 "adrfam": "ipv4", 00:08:48.014 "trsvcid": "$NVMF_PORT", 00:08:48.014 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:48.014 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:48.014 "hdgst": ${hdgst:-false}, 00:08:48.014 "ddgst": ${ddgst:-false} 00:08:48.014 }, 00:08:48.014 "method": "bdev_nvme_attach_controller" 00:08:48.014 } 00:08:48.014 EOF 00:08:48.014 )") 00:08:48.014 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json 
/dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:48.014 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:48.014 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:08:48.014 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:08:48.014 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:08:48.014 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:08:48.014 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:08:48.014 { 00:08:48.014 "params": { 00:08:48.014 "name": "Nvme$subsystem", 00:08:48.014 "trtype": "$TEST_TRANSPORT", 00:08:48.014 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:48.014 "adrfam": "ipv4", 00:08:48.014 "trsvcid": "$NVMF_PORT", 00:08:48.014 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:48.014 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:48.014 "hdgst": ${hdgst:-false}, 00:08:48.014 "ddgst": ${ddgst:-false} 00:08:48.014 }, 00:08:48.014 "method": "bdev_nvme_attach_controller" 00:08:48.014 } 00:08:48.014 EOF 00:08:48.014 )") 00:08:48.014 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:08:48.014 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2947654 00:08:48.014 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:08:48.014 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:08:48.014 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:08:48.014 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:08:48.014 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 
00:08:48.014 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:08:48.014 "params": { 00:08:48.014 "name": "Nvme1", 00:08:48.014 "trtype": "tcp", 00:08:48.014 "traddr": "10.0.0.2", 00:08:48.014 "adrfam": "ipv4", 00:08:48.014 "trsvcid": "4420", 00:08:48.014 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:48.014 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:48.014 "hdgst": false, 00:08:48.014 "ddgst": false 00:08:48.014 }, 00:08:48.014 "method": "bdev_nvme_attach_controller" 00:08:48.014 }' 00:08:48.014 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:08:48.014 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:08:48.014 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:08:48.014 "params": { 00:08:48.014 "name": "Nvme1", 00:08:48.014 "trtype": "tcp", 00:08:48.014 "traddr": "10.0.0.2", 00:08:48.014 "adrfam": "ipv4", 00:08:48.014 "trsvcid": "4420", 00:08:48.014 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:48.014 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:48.014 "hdgst": false, 00:08:48.014 "ddgst": false 00:08:48.014 }, 00:08:48.014 "method": "bdev_nvme_attach_controller" 00:08:48.014 }' 00:08:48.014 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:08:48.014 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:08:48.014 "params": { 00:08:48.014 "name": "Nvme1", 00:08:48.014 "trtype": "tcp", 00:08:48.014 "traddr": "10.0.0.2", 00:08:48.014 "adrfam": "ipv4", 00:08:48.014 "trsvcid": "4420", 00:08:48.014 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:48.014 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:48.014 "hdgst": false, 00:08:48.014 "ddgst": false 00:08:48.014 }, 00:08:48.014 "method": "bdev_nvme_attach_controller" 00:08:48.014 }' 00:08:48.014 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@581 -- # IFS=, 00:08:48.014 15:26:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:08:48.014 "params": { 00:08:48.014 "name": "Nvme1", 00:08:48.014 "trtype": "tcp", 00:08:48.014 "traddr": "10.0.0.2", 00:08:48.014 "adrfam": "ipv4", 00:08:48.014 "trsvcid": "4420", 00:08:48.014 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:48.014 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:48.014 "hdgst": false, 00:08:48.014 "ddgst": false 00:08:48.014 }, 00:08:48.014 "method": "bdev_nvme_attach_controller" 00:08:48.014 }' 00:08:48.014 [2024-10-01 15:26:27.369386] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:08:48.014 [2024-10-01 15:26:27.369458] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:48.014 [2024-10-01 15:26:27.373459] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:08:48.014 [2024-10-01 15:26:27.373523] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:48.014 [2024-10-01 15:26:27.374210] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:08:48.014 [2024-10-01 15:26:27.374269] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:48.014 [2024-10-01 15:26:27.376262] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
00:08:48.014 [2024-10-01 15:26:27.376328] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:48.275 [2024-10-01 15:26:27.535925] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:48.275 [2024-10-01 15:26:27.585216] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.275 [2024-10-01 15:26:27.604219] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:48.275 [2024-10-01 15:26:27.618405] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:08:48.275 [2024-10-01 15:26:27.655298] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.275 [2024-10-01 15:26:27.670886] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:48.275 [2024-10-01 15:26:27.682519] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:08:48.275 [2024-10-01 15:26:27.721672] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.536 [2024-10-01 15:26:27.748400] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:08:48.536 [2024-10-01 15:26:27.764576] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:48.536 [2024-10-01 15:26:27.816415] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.536 [2024-10-01 15:26:27.849620] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:08:48.796 Running I/O for 1 seconds... 00:08:48.796 Running I/O for 1 seconds... 
00:08:48.796 Running I/O for 1 seconds... 00:08:49.055 Running I/O for 1 seconds... 00:08:49.994 13999.00 IOPS, 54.68 MiB/s 00:08:49.994 Latency(us) 00:08:49.994 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:49.994 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:49.994 Nvme1n1 : 1.00 14068.37 54.95 0.00 0.00 9079.07 2785.28 18131.63 00:08:49.995 =================================================================================================================== 00:08:49.995 Total : 14068.37 54.95 0.00 0.00 9079.07 2785.28 18131.63 00:08:49.995 7937.00 IOPS, 31.00 MiB/s 00:08:49.995 Latency(us) 00:08:49.995 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:49.995 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:49.995 Nvme1n1 : 1.02 7946.72 31.04 0.00 0.00 16001.27 6444.37 25995.95 00:08:49.995 =================================================================================================================== 00:08:49.995 Total : 7946.72 31.04 0.00 0.00 16001.27 6444.37 25995.95 00:08:49.995 189240.00 IOPS, 739.22 MiB/s 00:08:49.995 Latency(us) 00:08:49.995 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:49.995 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:49.995 Nvme1n1 : 1.00 188860.84 737.74 0.00 0.00 674.07 307.20 1979.73 00:08:49.995 =================================================================================================================== 00:08:49.995 Total : 188860.84 737.74 0.00 0.00 674.07 307.20 1979.73 00:08:49.995 15:26:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2947656 00:08:49.995 15:26:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2947659 00:08:49.995 15:26:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2947663 00:08:49.995 8578.00 IOPS, 33.51 
MiB/s 00:08:49.995 Latency(us) 00:08:49.995 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:49.995 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:49.995 Nvme1n1 : 1.01 8687.00 33.93 0.00 0.00 14693.36 3986.77 39103.15 00:08:49.995 =================================================================================================================== 00:08:49.995 Total : 8687.00 33.93 0.00 0.00 14693.36 3986.77 39103.15 00:08:50.253 15:26:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:50.253 15:26:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.253 15:26:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:50.253 15:26:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.253 15:26:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:50.254 15:26:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:50.254 15:26:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # nvmfcleanup 00:08:50.254 15:26:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:50.254 15:26:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:50.254 15:26:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:50.254 15:26:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:50.254 15:26:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:50.254 rmmod nvme_tcp 00:08:50.254 rmmod nvme_fabrics 00:08:50.254 rmmod nvme_keyring 00:08:50.254 15:26:29 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:50.254 15:26:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:50.254 15:26:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:50.254 15:26:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@513 -- # '[' -n 2947324 ']' 00:08:50.254 15:26:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # killprocess 2947324 00:08:50.254 15:26:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 2947324 ']' 00:08:50.254 15:26:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 2947324 00:08:50.254 15:26:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:08:50.254 15:26:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:50.254 15:26:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2947324 00:08:50.254 15:26:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:50.254 15:26:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:50.254 15:26:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2947324' 00:08:50.254 killing process with pid 2947324 00:08:50.254 15:26:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 2947324 00:08:50.254 15:26:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 2947324 00:08:50.514 15:26:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:50.514 15:26:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:08:50.514 15:26:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:08:50.514 15:26:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:50.514 15:26:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-restore 00:08:50.514 15:26:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-save 00:08:50.514 15:26:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:08:50.514 15:26:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:50.514 15:26:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:50.514 15:26:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.514 15:26:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:50.514 15:26:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.426 15:26:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:52.426 00:08:52.426 real 0m13.657s 00:08:52.426 user 0m20.930s 00:08:52.426 sys 0m7.828s 00:08:52.426 15:26:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:52.426 15:26:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:52.426 ************************************ 00:08:52.426 END TEST nvmf_bdev_io_wait 00:08:52.426 ************************************ 00:08:52.687 15:26:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:52.687 15:26:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:52.687 15:26:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:52.687 15:26:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:52.687 ************************************ 00:08:52.687 START TEST nvmf_queue_depth 00:08:52.687 ************************************ 00:08:52.687 15:26:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:52.687 * Looking for test storage... 00:08:52.687 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:52.687 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:52.687 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:08:52.687 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:52.950 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:52.950 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:52.950 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:52.950 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:52.950 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:52.950 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:52.950 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
scripts/common.sh@337 -- # IFS=.-: 00:08:52.950 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:52.950 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:52.950 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:52.950 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:52.950 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:52.950 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:52.950 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:52.950 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:52.950 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:52.950 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:52.950 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:52.950 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:52.950 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:52.950 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:52.950 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:52.950 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:52.950 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:52.950 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:52.950 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:52.950 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:52.950 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:52.950 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:52.950 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:52.950 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:52.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.950 --rc genhtml_branch_coverage=1 00:08:52.950 --rc genhtml_function_coverage=1 00:08:52.950 --rc genhtml_legend=1 00:08:52.950 --rc geninfo_all_blocks=1 00:08:52.950 --rc 
geninfo_unexecuted_blocks=1 00:08:52.950 00:08:52.950 ' 00:08:52.950 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:52.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.950 --rc genhtml_branch_coverage=1 00:08:52.950 --rc genhtml_function_coverage=1 00:08:52.950 --rc genhtml_legend=1 00:08:52.950 --rc geninfo_all_blocks=1 00:08:52.950 --rc geninfo_unexecuted_blocks=1 00:08:52.950 00:08:52.950 ' 00:08:52.950 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:52.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.950 --rc genhtml_branch_coverage=1 00:08:52.950 --rc genhtml_function_coverage=1 00:08:52.950 --rc genhtml_legend=1 00:08:52.950 --rc geninfo_all_blocks=1 00:08:52.950 --rc geninfo_unexecuted_blocks=1 00:08:52.950 00:08:52.950 ' 00:08:52.950 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:52.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.950 --rc genhtml_branch_coverage=1 00:08:52.950 --rc genhtml_function_coverage=1 00:08:52.950 --rc genhtml_legend=1 00:08:52.950 --rc geninfo_all_blocks=1 00:08:52.950 --rc geninfo_unexecuted_blocks=1 00:08:52.950 00:08:52.950 ' 00:08:52.950 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:52.950 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:52.950 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:52.950 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:52.950 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:52.950 15:26:32 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:52.950 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:52.950 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:52.950 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:52.950 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:52.950 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:52.950 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:52.950 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:52.951 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:52.951 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:52.951 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:52.951 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:52.951 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:52.951 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:52.951 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:52.951 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:08:52.951 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:52.951 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:52.951 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.951 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.951 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.951 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:52.951 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.951 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:52.951 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:52.951 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:52.951 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:52.951 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:52.951 15:26:32 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:52.951 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:52.951 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:52.951 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:52.951 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:52.951 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:52.951 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:52.951 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:52.951 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:52.951 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:52.951 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:52.951 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:52.951 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:52.951 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:52.951 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:52.951 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:52.951 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:52.951 15:26:32 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.951 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:08:52.951 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:08:52.951 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:52.951 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:01.092 15:26:39 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ e810 
== mlx5 ]] 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:01.092 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:01.092 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 
]] 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:01.092 Found net devices under 0000:31:00.0: cvl_0_0 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:01.092 
15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:01.092 Found net devices under 0000:31:00.1: cvl_0_1 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # is_hw=yes 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:01.092 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:01.093 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:01.093 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:01.093 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp 
--dport 4420 -j ACCEPT 00:09:01.093 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:01.093 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:01.093 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:01.093 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.596 ms 00:09:01.093 00:09:01.093 --- 10.0.0.2 ping statistics --- 00:09:01.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.093 rtt min/avg/max/mdev = 0.596/0.596/0.596/0.000 ms 00:09:01.093 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:01.093 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:01.093 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:09:01.093 00:09:01.093 --- 10.0.0.1 ping statistics --- 00:09:01.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.093 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:09:01.093 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:01.093 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # return 0 00:09:01.093 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:01.093 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:01.093 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:01.093 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:01.093 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:09:01.093 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:01.093 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:01.093 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:01.093 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:01.093 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:01.093 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:01.093 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # nvmfpid=2952449 00:09:01.093 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # waitforlisten 2952449 00:09:01.093 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:01.093 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2952449 ']' 00:09:01.093 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.093 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:01.093 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:01.093 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:01.093 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:01.093 [2024-10-01 15:26:39.985258] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:09:01.093 [2024-10-01 15:26:39.985332] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:01.093 [2024-10-01 15:26:40.031655] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:01.093 [2024-10-01 15:26:40.080572] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.093 [2024-10-01 15:26:40.127858] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:01.093 [2024-10-01 15:26:40.127919] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:01.093 [2024-10-01 15:26:40.127928] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:01.093 [2024-10-01 15:26:40.127935] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:01.093 [2024-10-01 15:26:40.127941] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:01.093 [2024-10-01 15:26:40.127969] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:01.354 15:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:01.354 15:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:01.354 15:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:01.354 15:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:01.354 15:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:01.615 15:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:01.615 15:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:01.615 15:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.615 15:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:01.615 [2024-10-01 15:26:40.833804] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:01.615 15:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.615 15:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:01.615 15:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.615 15:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:01.615 Malloc0 00:09:01.615 15:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.615 15:26:40 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:01.615 15:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.615 15:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:01.615 15:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.615 15:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:01.615 15:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.615 15:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:01.615 15:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.615 15:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:01.615 15:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.615 15:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:01.615 [2024-10-01 15:26:40.910824] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:01.615 15:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.615 15:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2952580 00:09:01.615 15:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM 
EXIT 00:09:01.615 15:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:01.615 15:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2952580 /var/tmp/bdevperf.sock 00:09:01.615 15:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2952580 ']' 00:09:01.615 15:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:01.615 15:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:01.615 15:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:01.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:01.615 15:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:01.615 15:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:01.615 [2024-10-01 15:26:40.968953] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:09:01.615 [2024-10-01 15:26:40.969020] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2952580 ] 00:09:01.615 [2024-10-01 15:26:41.003303] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:09:01.615 [2024-10-01 15:26:41.050554] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.876 [2024-10-01 15:26:41.097764] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.446 15:26:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:02.447 15:26:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:02.447 15:26:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:02.447 15:26:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.447 15:26:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:02.447 NVMe0n1 00:09:02.447 15:26:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.447 15:26:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:02.707 Running I/O for 10 seconds... 
00:09:12.818 8202.00 IOPS, 32.04 MiB/s 9640.50 IOPS, 37.66 MiB/s 10242.33 IOPS, 40.01 MiB/s 10753.75 IOPS, 42.01 MiB/s 11265.40 IOPS, 44.01 MiB/s 11563.17 IOPS, 45.17 MiB/s 11846.00 IOPS, 46.27 MiB/s 12032.38 IOPS, 47.00 MiB/s 12182.00 IOPS, 47.59 MiB/s 12348.40 IOPS, 48.24 MiB/s 00:09:12.818 Latency(us) 00:09:12.818 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:12.818 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:12.818 Verification LBA range: start 0x0 length 0x4000 00:09:12.818 NVMe0n1 : 10.06 12379.55 48.36 0.00 0.00 82403.56 19442.35 75584.85 00:09:12.818 =================================================================================================================== 00:09:12.818 Total : 12379.55 48.36 0.00 0.00 82403.56 19442.35 75584.85 00:09:12.818 { 00:09:12.818 "results": [ 00:09:12.818 { 00:09:12.818 "job": "NVMe0n1", 00:09:12.818 "core_mask": "0x1", 00:09:12.818 "workload": "verify", 00:09:12.818 "status": "finished", 00:09:12.818 "verify_range": { 00:09:12.818 "start": 0, 00:09:12.818 "length": 16384 00:09:12.818 }, 00:09:12.818 "queue_depth": 1024, 00:09:12.818 "io_size": 4096, 00:09:12.818 "runtime": 10.057553, 00:09:12.818 "iops": 12379.551964578262, 00:09:12.818 "mibps": 48.35762486163384, 00:09:12.818 "io_failed": 0, 00:09:12.818 "io_timeout": 0, 00:09:12.818 "avg_latency_us": 82403.55880875124, 00:09:12.818 "min_latency_us": 19442.346666666668, 00:09:12.818 "max_latency_us": 75584.85333333333 00:09:12.818 } 00:09:12.818 ], 00:09:12.818 "core_count": 1 00:09:12.818 } 00:09:12.818 15:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2952580 00:09:12.818 15:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2952580 ']' 00:09:12.818 15:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2952580 00:09:12.818 15:26:52 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:12.818 15:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:12.818 15:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2952580 00:09:12.818 15:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:12.818 15:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:12.818 15:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2952580' 00:09:12.818 killing process with pid 2952580 00:09:12.818 15:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2952580 00:09:12.818 Received shutdown signal, test time was about 10.000000 seconds 00:09:12.818 00:09:12.818 Latency(us) 00:09:12.818 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:12.818 =================================================================================================================== 00:09:12.818 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:12.818 15:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2952580 00:09:12.818 15:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:12.818 15:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:12.818 15:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:12.818 15:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:12.818 15:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:12.818 15:26:52 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:12.818 15:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:12.818 15:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:12.818 rmmod nvme_tcp 00:09:13.078 rmmod nvme_fabrics 00:09:13.078 rmmod nvme_keyring 00:09:13.078 15:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:13.078 15:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:13.078 15:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:13.078 15:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@513 -- # '[' -n 2952449 ']' 00:09:13.078 15:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # killprocess 2952449 00:09:13.078 15:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2952449 ']' 00:09:13.078 15:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2952449 00:09:13.078 15:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:13.078 15:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:13.078 15:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2952449 00:09:13.078 15:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:13.078 15:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:13.078 15:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2952449' 00:09:13.078 killing process with pid 2952449 
00:09:13.078 15:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2952449 00:09:13.078 15:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2952449 00:09:13.078 15:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:13.078 15:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:13.078 15:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:13.078 15:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:13.078 15:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-save 00:09:13.078 15:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:13.078 15:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-restore 00:09:13.078 15:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:13.078 15:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:13.078 15:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:13.078 15:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:13.078 15:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:15.623 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:15.623 00:09:15.623 real 0m22.626s 00:09:15.623 user 0m25.596s 00:09:15.623 sys 0m7.263s 00:09:15.623 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:15.623 15:26:54 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:15.623 ************************************ 00:09:15.623 END TEST nvmf_queue_depth 00:09:15.623 ************************************ 00:09:15.623 15:26:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:15.623 15:26:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:15.623 15:26:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:15.623 15:26:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:15.623 ************************************ 00:09:15.623 START TEST nvmf_target_multipath 00:09:15.623 ************************************ 00:09:15.623 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:15.623 * Looking for test storage... 
00:09:15.623 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:15.623 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:15.623 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:09:15.623 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:15.623 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:15.623 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:15.623 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:15.623 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:15.623 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:15.623 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:15.623 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:15.623 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:15.623 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:15.623 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:15.623 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:15.623 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:15.623 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:15.624 15:26:54 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:15.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.624 --rc genhtml_branch_coverage=1 00:09:15.624 --rc genhtml_function_coverage=1 00:09:15.624 --rc genhtml_legend=1 00:09:15.624 --rc geninfo_all_blocks=1 00:09:15.624 --rc geninfo_unexecuted_blocks=1 00:09:15.624 00:09:15.624 ' 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:15.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.624 --rc genhtml_branch_coverage=1 00:09:15.624 --rc genhtml_function_coverage=1 00:09:15.624 --rc genhtml_legend=1 00:09:15.624 --rc geninfo_all_blocks=1 00:09:15.624 --rc geninfo_unexecuted_blocks=1 00:09:15.624 00:09:15.624 ' 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:15.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.624 --rc genhtml_branch_coverage=1 00:09:15.624 --rc genhtml_function_coverage=1 00:09:15.624 --rc genhtml_legend=1 00:09:15.624 --rc geninfo_all_blocks=1 00:09:15.624 --rc geninfo_unexecuted_blocks=1 00:09:15.624 00:09:15.624 ' 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:15.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.624 --rc genhtml_branch_coverage=1 00:09:15.624 --rc genhtml_function_coverage=1 00:09:15.624 --rc genhtml_legend=1 00:09:15.624 --rc geninfo_all_blocks=1 00:09:15.624 --rc geninfo_unexecuted_blocks=1 00:09:15.624 00:09:15.624 ' 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:15.624 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:09:15.624 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:15.625 15:26:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:09:23.793 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:23.793 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:23.793 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:23.793 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:23.793 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:23.793 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:23.793 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:23.793 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:09:23.793 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:23.793 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:23.793 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:23.793 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:23.793 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:23.793 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:23.793 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:23.793 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:23.793 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:23.793 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:23.793 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:23.793 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:23.793 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:23.793 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:23.793 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:23.793 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:23.793 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:23.793 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:23.793 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:09:23.793 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:09:23.793 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:09:23.793 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:09:23.793 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:09:23.793 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@359 -- # (( 
2 == 0 )) 00:09:23.793 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:23.793 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:23.793 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:23.793 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:23.793 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:23.793 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:23.794 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:09:23.794 15:27:02 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:23.794 Found net devices under 0000:31:00.0: cvl_0_0 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:23.794 15:27:02 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:23.794 Found net devices under 0000:31:00.1: cvl_0_1 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # is_hw=yes 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:23.794 
15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p 
tcp --dport 4420 -j ACCEPT 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:23.794 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:23.794 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:09:23.794 00:09:23.794 --- 10.0.0.2 ping statistics --- 00:09:23.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:23.794 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:23.794 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:23.794 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:09:23.794 00:09:23.794 --- 10.0.0.1 ping statistics --- 00:09:23.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:23.794 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # return 0 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:23.794 only one NIC for nvmf test 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:23.794 rmmod nvme_tcp 00:09:23.794 rmmod nvme_fabrics 00:09:23.794 rmmod nvme_keyring 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == 
iso ']' 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:23.794 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:09:23.795 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:23.795 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:23.795 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:23.795 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:23.795 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.709 15:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:25.709 15:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:25.709 15:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:25.709 15:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:25.709 15:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:25.709 15:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 00:09:25.709 15:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:25.709 15:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:25.709 15:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:25.709 15:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:25.709 15:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:25.709 15:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:25.709 15:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:09:25.709 15:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:25.709 15:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:25.709 15:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:25.709 15:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:25.709 15:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:09:25.709 15:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:25.709 15:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:09:25.709 15:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:25.709 15:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:25.709 15:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 
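The `iptr` teardown traced above relies on a tagging convention: every firewall rule the harness adds via `ipts` carries an `SPDK_NVMF` comment, so cleanup is simply `iptables-save | grep -v SPDK_NVMF | iptables-restore`, removing exactly the harness's rules and nothing else. A minimal sketch of that filtering step, run on an illustrative saved ruleset rather than a live host (no root needed; the rule text below is made up for the example, not captured from this run):

```shell
# Two saved rules: one tagged by the SPDK harness, one unrelated.
# In the real script the filtered output is piped into iptables-restore;
# here we only demonstrate the grep stage.
saved='-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
-A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT'

# Drop every rule carrying the SPDK_NVMF tag, keep the rest untouched.
cleaned=$(printf '%s\n' "$saved" | grep -v SPDK_NVMF)
printf '%s\n' "$cleaned"
```

Because the tag lives in the rule itself, this cleanup is idempotent and safe to run even if setup only partially completed, which is why the fini path can call it unconditionally.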
00:09:25.709 15:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:25.709 15:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.709 15:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:25.709 00:09:25.709 real 0m10.132s 00:09:25.709 user 0m2.244s 00:09:25.709 sys 0m5.847s 00:09:25.709 15:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:25.709 15:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:25.709 ************************************ 00:09:25.709 END TEST nvmf_target_multipath 00:09:25.709 ************************************ 00:09:25.709 15:27:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:25.709 15:27:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:25.709 15:27:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:25.709 15:27:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:25.709 ************************************ 00:09:25.709 START TEST nvmf_zcopy 00:09:25.709 ************************************ 00:09:25.709 15:27:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:25.709 * Looking for test storage... 
00:09:25.709 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:25.709 15:27:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:25.709 15:27:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:09:25.709 15:27:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:25.709 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:25.709 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:25.709 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:25.709 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:25.709 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:25.709 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:25.709 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:25.709 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:25.709 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:25.709 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:25.709 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:25.709 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:25.709 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:25.709 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:25.709 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:25.709 
15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:25.709 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:25.709 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:25.709 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:25.709 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:25.709 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:25.709 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:25.709 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:25.709 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:25.709 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:25.709 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:25.709 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:25.709 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:25.709 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:25.709 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:25.709 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:25.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.709 --rc genhtml_branch_coverage=1 00:09:25.709 --rc genhtml_function_coverage=1 00:09:25.709 --rc genhtml_legend=1 00:09:25.709 --rc geninfo_all_blocks=1 00:09:25.709 --rc 
geninfo_unexecuted_blocks=1 00:09:25.709 00:09:25.709 ' 00:09:25.709 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:25.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.709 --rc genhtml_branch_coverage=1 00:09:25.709 --rc genhtml_function_coverage=1 00:09:25.709 --rc genhtml_legend=1 00:09:25.709 --rc geninfo_all_blocks=1 00:09:25.709 --rc geninfo_unexecuted_blocks=1 00:09:25.709 00:09:25.709 ' 00:09:25.709 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:25.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.709 --rc genhtml_branch_coverage=1 00:09:25.709 --rc genhtml_function_coverage=1 00:09:25.709 --rc genhtml_legend=1 00:09:25.709 --rc geninfo_all_blocks=1 00:09:25.709 --rc geninfo_unexecuted_blocks=1 00:09:25.709 00:09:25.709 ' 00:09:25.709 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:25.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.709 --rc genhtml_branch_coverage=1 00:09:25.709 --rc genhtml_function_coverage=1 00:09:25.709 --rc genhtml_legend=1 00:09:25.709 --rc geninfo_all_blocks=1 00:09:25.709 --rc geninfo_unexecuted_blocks=1 00:09:25.709 00:09:25.709 ' 00:09:25.709 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:25.709 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:25.709 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:25.709 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:25.709 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:25.709 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
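The `cmp_versions`/`lt` trace above (scripts/common.sh) splits both version strings on `.` into arrays and compares them field by field, which is how `lt 1.15 2` comes out true (1 < 2 in the first field) even though a plain string compare would say otherwise. A self-contained sketch of that logic, assuming the same field-wise semantics as the traced script but not copying its exact code:

```shell
# lt A B: return 0 (true) if version A is strictly less than version B,
# comparing dot-separated numeric fields left to right.
lt() {
    local -a ver1 ver2
    IFS=. read -ra ver1 <<< "$1"
    IFS=. read -ra ver2 <<< "$2"
    local max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    local v a b
    for (( v = 0; v < max; v++ )); do
        # A missing field compares as 0, so "2" behaves like "2.0".
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1  # equal versions are not "less than"
}

lt 1.15 2 && echo "1.15 < 2"
lt 2.0 1.15 || echo "2.0 >= 1.15"
```

The harness uses this to pick lcov flags: versions below 2 get the old-style `--rc lcov_branch_coverage=1` options traced in the following lines.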
00:09:25.709 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:25.709 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:25.709 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:25.709 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:25.709 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:25.709 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:25.709 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:25.709 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:25.709 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:25.709 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:25.710 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:25.710 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:25.710 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:25.710 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:25.710 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:25.710 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:25.710 15:27:05 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:25.710 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.710 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.710 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.710 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:25.710 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.710 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:25.710 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:25.710 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:25.710 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:25.710 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:25.710 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:25.710 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:25.710 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:25.710 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:25.710 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:25.710 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:25.710 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:25.710 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:25.710 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:25.710 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:25.710 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:25.710 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:25.710 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.710 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:25.710 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.710 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:09:25.710 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:09:25.710 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:25.710 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # 
set +x 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:33.856 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # [[ ice 
== unknown ]] 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:33.856 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:33.856 15:27:12 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:33.856 Found net devices under 0000:31:00.0: cvl_0_0 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:33.856 Found net devices under 0000:31:00.1: cvl_0_1 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:09:33.856 15:27:12 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # is_hw=yes 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:33.856 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:33.857 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:33.857 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:33.857 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:33.857 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:09:33.857 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:33.857 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:33.857 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:33.857 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:33.857 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:33.857 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:33.857 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:33.857 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:33.857 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:33.857 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:33.857 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.633 ms 00:09:33.857 00:09:33.857 --- 10.0.0.2 ping statistics --- 00:09:33.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.857 rtt min/avg/max/mdev = 0.633/0.633/0.633/0.000 ms 00:09:33.857 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:33.857 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:33.857 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:09:33.857 00:09:33.857 --- 10.0.0.1 ping statistics --- 00:09:33.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.857 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:09:33.857 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:33.857 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # return 0 00:09:33.857 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:33.857 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:33.857 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:33.857 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:33.857 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:33.857 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:33.857 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:33.857 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:33.857 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:33.857 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:33.857 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:33.857 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # nvmfpid=2964196 00:09:33.857 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # waitforlisten 2964196 00:09:33.857 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@504 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:33.857 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 2964196 ']' 00:09:33.857 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.857 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:33.857 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.857 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:33.857 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:33.857 [2024-10-01 15:27:12.964013] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:09:33.857 [2024-10-01 15:27:12.964080] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:33.857 [2024-10-01 15:27:13.004215] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:33.857 [2024-10-01 15:27:13.051996] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.857 [2024-10-01 15:27:13.098071] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:33.857 [2024-10-01 15:27:13.098127] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
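The nvmf_tcp_init steps traced above (nvmf/common.sh@250-291) wire one NIC port (cvl_0_0) into a dedicated network namespace for the target, leave its peer (cvl_0_1) in the root namespace for the initiator, open TCP/4420 in the firewall, and verify connectivity both ways with ping before launching nvmf_tgt inside the namespace. A dry-run sketch of that wiring, assuming this run's cvl_0_0/cvl_0_1 interface names and 10.0.0.x addressing (each command is only echoed; redefine run() to execute, as root, to apply it for real):

```shell
# Dry-run recap of the namespace wiring from the trace above.
# Swap the body of run() for `"$@"` (and run as root) to apply it.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
run ip -4 addr flush cvl_0_0
run ip -4 addr flush cvl_0_1
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                      # target port moves into the netns
run ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator IP, root netns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP, inside netns
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                   # root netns -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1               # target netns -> initiator
```

With the wiring in place, the target is then started inside the namespace via `ip netns exec "$NS" nvmf_tgt ...`, exactly as the trace shows.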
00:09:33.857 [2024-10-01 15:27:13.098135] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:33.857 [2024-10-01 15:27:13.098142] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:33.857 [2024-10-01 15:27:13.098148] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:33.857 [2024-10-01 15:27:13.098170] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:34.429 15:27:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:34.429 15:27:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:09:34.430 15:27:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:34.430 15:27:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:34.430 15:27:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:34.430 15:27:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:34.430 15:27:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:34.430 15:27:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:34.430 15:27:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.430 15:27:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:34.430 [2024-10-01 15:27:13.829835] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:34.430 15:27:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.430 15:27:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:34.430 15:27:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.430 15:27:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:34.430 15:27:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.430 15:27:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:34.430 15:27:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.430 15:27:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:34.430 [2024-10-01 15:27:13.854122] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:34.430 15:27:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.430 15:27:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:34.430 15:27:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.430 15:27:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:34.430 15:27:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.430 15:27:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:34.430 15:27:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.430 15:27:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:34.691 malloc0 00:09:34.691 15:27:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:09:34.691 15:27:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:34.691 15:27:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.691 15:27:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:34.691 15:27:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.691 15:27:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:34.691 15:27:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:34.691 15:27:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:09:34.691 15:27:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:09:34.691 15:27:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:34.691 15:27:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:34.691 { 00:09:34.691 "params": { 00:09:34.692 "name": "Nvme$subsystem", 00:09:34.692 "trtype": "$TEST_TRANSPORT", 00:09:34.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:34.692 "adrfam": "ipv4", 00:09:34.692 "trsvcid": "$NVMF_PORT", 00:09:34.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:34.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:34.692 "hdgst": ${hdgst:-false}, 00:09:34.692 "ddgst": ${ddgst:-false} 00:09:34.692 }, 00:09:34.692 "method": "bdev_nvme_attach_controller" 00:09:34.692 } 00:09:34.692 EOF 00:09:34.692 )") 00:09:34.692 15:27:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:09:34.692 15:27:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 
00:09:34.692 15:27:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:09:34.692 15:27:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:34.692 "params": { 00:09:34.692 "name": "Nvme1", 00:09:34.692 "trtype": "tcp", 00:09:34.692 "traddr": "10.0.0.2", 00:09:34.692 "adrfam": "ipv4", 00:09:34.692 "trsvcid": "4420", 00:09:34.692 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:34.692 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:34.692 "hdgst": false, 00:09:34.692 "ddgst": false 00:09:34.692 }, 00:09:34.692 "method": "bdev_nvme_attach_controller" 00:09:34.692 }' 00:09:34.692 [2024-10-01 15:27:13.971772] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:09:34.692 [2024-10-01 15:27:13.971836] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2964247 ] 00:09:34.692 [2024-10-01 15:27:14.006437] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:34.692 [2024-10-01 15:27:14.055044] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.692 [2024-10-01 15:27:14.101461] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.953 Running I/O for 10 seconds... 
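The gen_nvmf_target_json expansion traced just above builds one bdev_nvme_attach_controller entry per subsystem from a heredoc and hands it to bdevperf over /dev/fd. A standalone sketch of that expansion with this run's values (tcp, 10.0.0.2:4420, subsystem 1) filled in; it reproduces only the per-subsystem fragment, not the full jq-assembled config:

```shell
# Rebuild the per-subsystem JSON fragment shown expanded in the trace.
# hdgst/ddgst fall back to false via ${var:-false}, as in the heredoc above.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
subsystem=1
config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
echo "$config"
```

Feeding this through a process substitution (`--json /dev/fd/62`) is what lets bdevperf attach to the target namespace's listener without a temporary config file.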
00:09:45.252 6465.00 IOPS, 50.51 MiB/s 6520.00 IOPS, 50.94 MiB/s 6542.33 IOPS, 51.11 MiB/s 6785.75 IOPS, 53.01 MiB/s 7381.20 IOPS, 57.67 MiB/s 7770.67 IOPS, 60.71 MiB/s 8049.71 IOPS, 62.89 MiB/s 8259.38 IOPS, 64.53 MiB/s 8427.22 IOPS, 65.84 MiB/s 8558.80 IOPS, 66.87 MiB/s 00:09:45.252 Latency(us) 00:09:45.252 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:45.252 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:45.252 Verification LBA range: start 0x0 length 0x1000 00:09:45.252 Nvme1n1 : 10.01 8563.76 66.90 0.00 0.00 14902.03 665.60 28617.39 00:09:45.252 =================================================================================================================== 00:09:45.252 Total : 8563.76 66.90 0.00 0.00 14902.03 665.60 28617.39 00:09:45.252 15:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2966309 00:09:45.252 15:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:45.252 15:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:45.252 15:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:45.252 15:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:45.252 15:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:09:45.252 15:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:09:45.252 15:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:45.252 15:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:45.252 { 00:09:45.252 "params": { 00:09:45.252 "name": "Nvme$subsystem", 00:09:45.252 "trtype": "$TEST_TRANSPORT", 00:09:45.252 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:09:45.252 "adrfam": "ipv4", 00:09:45.252 "trsvcid": "$NVMF_PORT", 00:09:45.252 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:45.252 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:45.252 "hdgst": ${hdgst:-false}, 00:09:45.252 "ddgst": ${ddgst:-false} 00:09:45.252 }, 00:09:45.252 "method": "bdev_nvme_attach_controller" 00:09:45.252 } 00:09:45.252 EOF 00:09:45.252 )") 00:09:45.252 15:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:09:45.252 [2024-10-01 15:27:24.449343] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.252 [2024-10-01 15:27:24.449373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.252 15:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 00:09:45.252 15:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:09:45.252 15:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:45.252 "params": { 00:09:45.252 "name": "Nvme1", 00:09:45.252 "trtype": "tcp", 00:09:45.252 "traddr": "10.0.0.2", 00:09:45.252 "adrfam": "ipv4", 00:09:45.252 "trsvcid": "4420", 00:09:45.252 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:45.252 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:45.252 "hdgst": false, 00:09:45.252 "ddgst": false 00:09:45.252 }, 00:09:45.252 "method": "bdev_nvme_attach_controller" 00:09:45.252 }' 00:09:45.252 [2024-10-01 15:27:24.461344] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.252 [2024-10-01 15:27:24.461355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.252 [2024-10-01 15:27:24.473373] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.252 [2024-10-01 15:27:24.473382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.252 [2024-10-01 
15:27:24.485404] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.252 [2024-10-01 15:27:24.485412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.252 [2024-10-01 15:27:24.494749] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:09:45.252 [2024-10-01 15:27:24.494799] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2966309 ] 00:09:45.252 [2024-10-01 15:27:24.497436] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.252 [2024-10-01 15:27:24.497445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.252 [2024-10-01 15:27:24.509467] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.252 [2024-10-01 15:27:24.509476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.252 [2024-10-01 15:27:24.521500] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.252 [2024-10-01 15:27:24.521508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.252 [2024-10-01 15:27:24.525040] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:09:45.252 [2024-10-01 15:27:24.533531] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.252 [2024-10-01 15:27:24.533539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.252 [2024-10-01 15:27:24.545562] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.252 [2024-10-01 15:27:24.545570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.252 [2024-10-01 15:27:24.557592] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.252 [2024-10-01 15:27:24.557604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.252 [2024-10-01 15:27:24.569623] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.252 [2024-10-01 15:27:24.569631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.252 [2024-10-01 15:27:24.571011] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.252 [2024-10-01 15:27:24.581654] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.252 [2024-10-01 15:27:24.581664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.252 [2024-10-01 15:27:24.593684] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.252 [2024-10-01 15:27:24.593699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.252 [2024-10-01 15:27:24.599278] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.252 [2024-10-01 15:27:24.605713] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.252 [2024-10-01 15:27:24.605722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.252 [2024-10-01 15:27:24.617750] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.252 [2024-10-01 15:27:24.617766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.252 [2024-10-01 15:27:24.629777] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.252 [2024-10-01 15:27:24.629788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.252 [2024-10-01 15:27:24.641806] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.252 [2024-10-01 15:27:24.641816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.252 [2024-10-01 15:27:24.653836] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.252 [2024-10-01 15:27:24.653845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.252 [2024-10-01 15:27:24.665879] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.252 [2024-10-01 15:27:24.665900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.252 [2024-10-01 15:27:24.677907] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.252 [2024-10-01 15:27:24.677918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.252 [2024-10-01 15:27:24.689938] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.252 [2024-10-01 15:27:24.689949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.252 [2024-10-01 15:27:24.701966] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.252 [2024-10-01 15:27:24.701974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.513 [2024-10-01 15:27:24.713997] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:45.513 [2024-10-01 15:27:24.714005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.513 [2024-10-01 15:27:24.726029] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.513 [2024-10-01 15:27:24.726037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.513 [2024-10-01 15:27:24.738063] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.513 [2024-10-01 15:27:24.738074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.513 [2024-10-01 15:27:24.750095] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.513 [2024-10-01 15:27:24.750105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.513 [2024-10-01 15:27:24.762126] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.513 [2024-10-01 15:27:24.762134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.513 [2024-10-01 15:27:24.774159] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.513 [2024-10-01 15:27:24.774171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.513 [2024-10-01 15:27:24.786191] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.513 [2024-10-01 15:27:24.786200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.513 [2024-10-01 15:27:24.798224] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.513 [2024-10-01 15:27:24.798233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.513 [2024-10-01 15:27:24.810255] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.513 
[2024-10-01 15:27:24.810263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.513 [2024-10-01 15:27:24.822285] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.513 [2024-10-01 15:27:24.822292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.513 [2024-10-01 15:27:24.834317] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.513 [2024-10-01 15:27:24.834327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.513 [2024-10-01 15:27:24.846347] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.513 [2024-10-01 15:27:24.846355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.513 [2024-10-01 15:27:24.858380] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.513 [2024-10-01 15:27:24.858388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.514 [2024-10-01 15:27:24.870411] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.514 [2024-10-01 15:27:24.870420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.514 [2024-10-01 15:27:24.882441] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.514 [2024-10-01 15:27:24.882451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.514 [2024-10-01 15:27:24.894476] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.514 [2024-10-01 15:27:24.894491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.514 Running I/O for 5 seconds... 
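The long run of paired "Requested NSID 1 already in use" / "Unable to add namespace" records surrounding the 5-second randrw run appears to come from the test deliberately re-issuing nvmf_subsystem_add_ns for an NSID that is already attached while bdevperf drives I/O, so every RPC fails as expected while the data path stays up. A dry-run sketch of such a loop (the rpc.py path and loop count are illustrative assumptions, and each real call would need a running target):

```shell
# Dry run: print the RPCs such a loop would issue. Against a live target,
# each call would fail with 'Requested NSID 1 already in use', since
# malloc0 was already attached to cnode1 as NSID 1 during setup.
RPC="scripts/rpc.py"                    # assumed path to SPDK's rpc client
NQN=nqn.2016-06.io.spdk:cnode1
for attempt in 1 2 3; do
  echo "+ $RPC nvmf_subsystem_add_ns $NQN malloc0 -n 1   # attempt $attempt: expected to fail"
done
```

The matching nvmf_rpc_ns_paused errors in the trace show the RPC pausing the subsystem around each attempted add, which is the state transition the loop is exercising.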
00:09:45.514 [2024-10-01 15:27:24.909515] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.514 [2024-10-01 15:27:24.909532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.514 [2024-10-01 15:27:24.922424] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.514 [2024-10-01 15:27:24.922441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.514 [2024-10-01 15:27:24.935789] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.514 [2024-10-01 15:27:24.935808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.514 [2024-10-01 15:27:24.948583] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.514 [2024-10-01 15:27:24.948600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.514 [2024-10-01 15:27:24.962261] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.514 [2024-10-01 15:27:24.962278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.774 [2024-10-01 15:27:24.975601] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.774 [2024-10-01 15:27:24.975618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.774 [2024-10-01 15:27:24.988650] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.774 [2024-10-01 15:27:24.988665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.774 [2024-10-01 15:27:25.002211] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.774 [2024-10-01 15:27:25.002226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.774 [2024-10-01 15:27:25.014900] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.774 [2024-10-01 15:27:25.014915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.774
[... the error pair above (subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace) repeats at roughly 13 ms intervals from 15:27:25.014915 through 15:27:27.254428, approximately 170 times; only the timestamps differ. Interleaved throughput samples during this interval: 19075.00 IOPS, 149.02 MiB/s and 19166.50 IOPS, 149.74 MiB/s ...]
[2024-10-01 15:27:27.254413] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.860 [2024-10-01 15:27:27.254428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.860
[2024-10-01 15:27:27.267258] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.860 [2024-10-01 15:27:27.267274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.860 [2024-10-01 15:27:27.280953] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.860 [2024-10-01 15:27:27.280968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.860 [2024-10-01 15:27:27.293318] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.860 [2024-10-01 15:27:27.293333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.860 [2024-10-01 15:27:27.305964] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.860 [2024-10-01 15:27:27.305980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.120 [2024-10-01 15:27:27.318553] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.120 [2024-10-01 15:27:27.318568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.120 [2024-10-01 15:27:27.332051] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.120 [2024-10-01 15:27:27.332067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.120 [2024-10-01 15:27:27.345545] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.120 [2024-10-01 15:27:27.345561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.120 [2024-10-01 15:27:27.358583] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.120 [2024-10-01 15:27:27.358598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.120 [2024-10-01 15:27:27.371556] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.120 [2024-10-01 15:27:27.371571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.120 [2024-10-01 15:27:27.384087] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.120 [2024-10-01 15:27:27.384103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.120 [2024-10-01 15:27:27.396184] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.120 [2024-10-01 15:27:27.396199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.120 [2024-10-01 15:27:27.408839] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.120 [2024-10-01 15:27:27.408854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.120 [2024-10-01 15:27:27.422153] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.120 [2024-10-01 15:27:27.422168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.120 [2024-10-01 15:27:27.435192] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.120 [2024-10-01 15:27:27.435207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.120 [2024-10-01 15:27:27.448502] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.121 [2024-10-01 15:27:27.448518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.121 [2024-10-01 15:27:27.462066] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.121 [2024-10-01 15:27:27.462081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.121 [2024-10-01 15:27:27.474722] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:48.121 [2024-10-01 15:27:27.474741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.121 [2024-10-01 15:27:27.488144] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.121 [2024-10-01 15:27:27.488159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.121 [2024-10-01 15:27:27.501621] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.121 [2024-10-01 15:27:27.501636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.121 [2024-10-01 15:27:27.513966] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.121 [2024-10-01 15:27:27.513982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.121 [2024-10-01 15:27:27.526601] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.121 [2024-10-01 15:27:27.526616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.121 [2024-10-01 15:27:27.539290] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.121 [2024-10-01 15:27:27.539305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.121 [2024-10-01 15:27:27.552609] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.121 [2024-10-01 15:27:27.552624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.121 [2024-10-01 15:27:27.565205] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.121 [2024-10-01 15:27:27.565221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.381 [2024-10-01 15:27:27.578909] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.381 
[2024-10-01 15:27:27.578925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.381 [2024-10-01 15:27:27.592366] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.381 [2024-10-01 15:27:27.592382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.381 [2024-10-01 15:27:27.605199] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.381 [2024-10-01 15:27:27.605215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.381 [2024-10-01 15:27:27.617983] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.381 [2024-10-01 15:27:27.617999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.381 [2024-10-01 15:27:27.631185] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.381 [2024-10-01 15:27:27.631201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.381 [2024-10-01 15:27:27.644738] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.381 [2024-10-01 15:27:27.644754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.381 [2024-10-01 15:27:27.658253] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.381 [2024-10-01 15:27:27.658270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.381 [2024-10-01 15:27:27.670690] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.381 [2024-10-01 15:27:27.670706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.381 [2024-10-01 15:27:27.684032] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.381 [2024-10-01 15:27:27.684048] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.381 [2024-10-01 15:27:27.696423] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.381 [2024-10-01 15:27:27.696440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.381 [2024-10-01 15:27:27.709575] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.381 [2024-10-01 15:27:27.709591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.381 [2024-10-01 15:27:27.722636] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.381 [2024-10-01 15:27:27.722657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.381 [2024-10-01 15:27:27.735978] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.381 [2024-10-01 15:27:27.735995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.381 [2024-10-01 15:27:27.749188] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.381 [2024-10-01 15:27:27.749205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.382 [2024-10-01 15:27:27.762513] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.382 [2024-10-01 15:27:27.762529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.382 [2024-10-01 15:27:27.775913] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.382 [2024-10-01 15:27:27.775929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.382 [2024-10-01 15:27:27.789473] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.382 [2024-10-01 15:27:27.789489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:48.382 [2024-10-01 15:27:27.803003] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.382 [2024-10-01 15:27:27.803019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.382 [2024-10-01 15:27:27.815654] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.382 [2024-10-01 15:27:27.815670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.382 [2024-10-01 15:27:27.828766] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.382 [2024-10-01 15:27:27.828782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.642 [2024-10-01 15:27:27.841815] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.642 [2024-10-01 15:27:27.841830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.642 [2024-10-01 15:27:27.855214] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.642 [2024-10-01 15:27:27.855229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.642 [2024-10-01 15:27:27.868680] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.642 [2024-10-01 15:27:27.868696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.642 [2024-10-01 15:27:27.881703] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.642 [2024-10-01 15:27:27.881718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.642 [2024-10-01 15:27:27.894206] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.642 [2024-10-01 15:27:27.894221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.642 19209.67 IOPS, 150.08 MiB/s 
[2024-10-01 15:27:27.906461] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.642 [2024-10-01 15:27:27.906476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.642 [2024-10-01 15:27:27.919927] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.642 [2024-10-01 15:27:27.919943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.642 [2024-10-01 15:27:27.933364] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.642 [2024-10-01 15:27:27.933379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.642 [2024-10-01 15:27:27.946056] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.642 [2024-10-01 15:27:27.946071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.642 [2024-10-01 15:27:27.958637] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.642 [2024-10-01 15:27:27.958653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.642 [2024-10-01 15:27:27.971148] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.642 [2024-10-01 15:27:27.971164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.642 [2024-10-01 15:27:27.984433] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.642 [2024-10-01 15:27:27.984448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.642 [2024-10-01 15:27:27.996810] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.642 [2024-10-01 15:27:27.996826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.642 [2024-10-01 15:27:28.009617] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.642 [2024-10-01 15:27:28.009633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.642 [2024-10-01 15:27:28.022935] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.642 [2024-10-01 15:27:28.022950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.642 [2024-10-01 15:27:28.036294] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.642 [2024-10-01 15:27:28.036310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.642 [2024-10-01 15:27:28.049565] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.642 [2024-10-01 15:27:28.049581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.642 [2024-10-01 15:27:28.062583] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.642 [2024-10-01 15:27:28.062599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.642 [2024-10-01 15:27:28.075110] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.642 [2024-10-01 15:27:28.075126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.642 [2024-10-01 15:27:28.088354] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.642 [2024-10-01 15:27:28.088369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.903 [2024-10-01 15:27:28.102086] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.903 [2024-10-01 15:27:28.102102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.903 [2024-10-01 15:27:28.114971] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:48.903 [2024-10-01 15:27:28.114986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.903 [2024-10-01 15:27:28.127993] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.903 [2024-10-01 15:27:28.128008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.903 [2024-10-01 15:27:28.140649] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.903 [2024-10-01 15:27:28.140664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.903 [2024-10-01 15:27:28.153526] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.903 [2024-10-01 15:27:28.153541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.903 [2024-10-01 15:27:28.165959] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.903 [2024-10-01 15:27:28.165974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.903 [2024-10-01 15:27:28.179057] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.903 [2024-10-01 15:27:28.179073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.903 [2024-10-01 15:27:28.191400] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.904 [2024-10-01 15:27:28.191416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.904 [2024-10-01 15:27:28.204649] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.904 [2024-10-01 15:27:28.204664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.904 [2024-10-01 15:27:28.217753] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.904 
[2024-10-01 15:27:28.217769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.904 [2024-10-01 15:27:28.231261] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.904 [2024-10-01 15:27:28.231277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.904 [2024-10-01 15:27:28.244022] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.904 [2024-10-01 15:27:28.244039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.904 [2024-10-01 15:27:28.256441] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.904 [2024-10-01 15:27:28.256457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.904 [2024-10-01 15:27:28.269152] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.904 [2024-10-01 15:27:28.269168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.904 [2024-10-01 15:27:28.282852] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.904 [2024-10-01 15:27:28.282867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.904 [2024-10-01 15:27:28.295819] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.904 [2024-10-01 15:27:28.295835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.904 [2024-10-01 15:27:28.308573] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.904 [2024-10-01 15:27:28.308589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.904 [2024-10-01 15:27:28.321156] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.904 [2024-10-01 15:27:28.321172] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.904 [2024-10-01 15:27:28.334807] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.904 [2024-10-01 15:27:28.334823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.904 [2024-10-01 15:27:28.348116] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.904 [2024-10-01 15:27:28.348132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.230 [2024-10-01 15:27:28.361388] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.230 [2024-10-01 15:27:28.361404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.230 [2024-10-01 15:27:28.374855] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.230 [2024-10-01 15:27:28.374871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.230 [2024-10-01 15:27:28.388168] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.230 [2024-10-01 15:27:28.388184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.230 [2024-10-01 15:27:28.401234] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.230 [2024-10-01 15:27:28.401250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.230 [2024-10-01 15:27:28.414125] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.230 [2024-10-01 15:27:28.414140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.230 [2024-10-01 15:27:28.427758] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.230 [2024-10-01 15:27:28.427773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:49.230 [2024-10-01 15:27:28.441276] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.230 [2024-10-01 15:27:28.441291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.230 [2024-10-01 15:27:28.454710] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.230 [2024-10-01 15:27:28.454725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.230 [2024-10-01 15:27:28.467222] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.230 [2024-10-01 15:27:28.467238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.230 [2024-10-01 15:27:28.480666] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.230 [2024-10-01 15:27:28.480682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.230 [2024-10-01 15:27:28.494216] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.230 [2024-10-01 15:27:28.494231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.230 [2024-10-01 15:27:28.506808] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.230 [2024-10-01 15:27:28.506823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.230 [2024-10-01 15:27:28.519597] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.230 [2024-10-01 15:27:28.519613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.230 [2024-10-01 15:27:28.532251] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.230 [2024-10-01 15:27:28.532266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.230 [2024-10-01 15:27:28.545927] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.230 [2024-10-01 15:27:28.545943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.230 [2024-10-01 15:27:28.558566] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.230 [2024-10-01 15:27:28.558581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.230 [2024-10-01 15:27:28.571741] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.230 [2024-10-01 15:27:28.571756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.230 [2024-10-01 15:27:28.585410] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.230 [2024-10-01 15:27:28.585425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.230 [2024-10-01 15:27:28.598905] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.230 [2024-10-01 15:27:28.598921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.230 [2024-10-01 15:27:28.611821] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.230 [2024-10-01 15:27:28.611837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.230 [2024-10-01 15:27:28.624784] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.230 [2024-10-01 15:27:28.624799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.508 [2024-10-01 15:27:28.638333] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.508 [2024-10-01 15:27:28.638349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.508 [2024-10-01 15:27:28.651752] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:49.508 [2024-10-01 15:27:28.651767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.508
[... the two messages above repeated with advancing timestamps through 15:27:28.899 ...]
19233.00 IOPS, 150.26 MiB/s
[... error pair repeated with advancing timestamps through 15:27:29.900 ...]
19245.20 IOPS, 150.35 MiB/s [2024-10-01 15:27:29.913332] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.762 [2024-10-01 15:27:29.913349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.762 00:09:50.762 Latency(us) 00:09:50.762 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:50.762 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:50.762 Nvme1n1 : 5.01 19247.89 150.37 0.00 0.00 6644.03 3099.31 17257.81 00:09:50.762 
=================================================================================================================== 00:09:50.762 Total : 19247.89 150.37 0.00 0.00 6644.03 3099.31 17257.81 00:09:50.762 [2024-10-01 15:27:29.922690] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.762 [2024-10-01 15:27:29.922704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.762
[... error pair repeated with advancing timestamps through 15:27:30.031 ...]
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2966309) - No such process 00:09:50.762 15:27:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2966309 00:09:50.762 15:27:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:50.762 15:27:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.762 15:27:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:50.762 15:27:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.762 15:27:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:50.762 15:27:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.762 15:27:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:50.762 delay0 00:09:50.762 15:27:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.762 15:27:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 
1 00:09:50.762 15:27:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.762 15:27:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:50.762 15:27:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.762 15:27:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:50.762 [2024-10-01 15:27:30.147320] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:59.126 Initializing NVMe Controllers 00:09:59.126 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:59.126 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:59.126 Initialization complete. Launching workers. 
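[Editor's aside, not part of the captured log: the MiB/s column in the zcopy latency summary above follows directly from the IOPS column, since the job line reports 8192-byte I/Os (8 KiB = 1/128 MiB). A minimal illustrative check of that arithmetic:]

```python
# Sanity-check the zcopy summary row: MiB/s = IOPS * io_size / 1 MiB.
# Figures come from the "Nvme1n1 : 5.01 19247.89 150.37 ..." row and the
# job description "IO size: 8192" in the log above.
io_size = 8192                          # bytes per I/O
iops = 19247.89                         # reported average IOPS
mib_per_s = iops * io_size / (1024 * 1024)
print(f"{mib_per_s:.2f} MiB/s")         # 150.37 MiB/s, matching the table
```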
00:09:59.126 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 237, failed: 34987 00:09:59.126 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 35098, failed to submit 126 00:09:59.126 success 35034, unsuccessful 64, failed 0 00:09:59.126 15:27:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:59.127 15:27:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:59.127 15:27:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:59.127 15:27:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:59.127 15:27:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:59.127 15:27:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:59.127 15:27:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:59.127 15:27:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:59.127 rmmod nvme_tcp 00:09:59.127 rmmod nvme_fabrics 00:09:59.127 rmmod nvme_keyring 00:09:59.127 15:27:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:59.127 15:27:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:59.127 15:27:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:59.127 15:27:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@513 -- # '[' -n 2964196 ']' 00:09:59.127 15:27:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # killprocess 2964196 00:09:59.127 15:27:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 2964196 ']' 00:09:59.127 15:27:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 2964196 00:09:59.127 15:27:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@955 -- # uname 00:09:59.127 15:27:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:59.127 15:27:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2964196 00:09:59.127 15:27:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:59.127 15:27:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:59.127 15:27:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2964196' 00:09:59.127 killing process with pid 2964196 00:09:59.127 15:27:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 2964196 00:09:59.127 15:27:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 2964196 00:09:59.127 15:27:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:59.127 15:27:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:59.127 15:27:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:59.127 15:27:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:59.127 15:27:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-save 00:09:59.127 15:27:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:59.127 15:27:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-restore 00:09:59.127 15:27:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:59.127 15:27:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:59.127 15:27:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:09:59.127 15:27:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:59.127 15:27:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.512 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:00.512 00:10:00.512 real 0m34.694s 00:10:00.512 user 0m45.193s 00:10:00.512 sys 0m12.163s 00:10:00.512 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:00.512 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:00.512 ************************************ 00:10:00.512 END TEST nvmf_zcopy 00:10:00.512 ************************************ 00:10:00.512 15:27:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:00.512 15:27:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:00.512 15:27:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:00.512 15:27:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:00.512 ************************************ 00:10:00.512 START TEST nvmf_nmic 00:10:00.512 ************************************ 00:10:00.512 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:00.512 * Looking for test storage... 
00:10:00.512 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:00.512 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:00.512 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:10:00.512 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:00.512 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:00.512 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:00.512 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:00.512 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:00.512 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:00.512 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:00.512 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:00.512 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:00.512 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:00.512 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:00.512 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:00.512 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:00.512 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:00.512 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:00.512 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:00.512 15:27:39 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:00.512 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:00.512 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:00.512 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:00.512 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:00.512 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:00.512 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:00.512 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:00.512 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:00.512 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:00.512 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:00.512 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:00.512 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:00.512 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:00.512 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:00.512 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:00.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.512 --rc genhtml_branch_coverage=1 00:10:00.512 --rc genhtml_function_coverage=1 00:10:00.512 --rc genhtml_legend=1 00:10:00.512 --rc geninfo_all_blocks=1 00:10:00.512 --rc geninfo_unexecuted_blocks=1 
00:10:00.512 00:10:00.512 ' 00:10:00.512 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:00.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.512 --rc genhtml_branch_coverage=1 00:10:00.512 --rc genhtml_function_coverage=1 00:10:00.512 --rc genhtml_legend=1 00:10:00.512 --rc geninfo_all_blocks=1 00:10:00.512 --rc geninfo_unexecuted_blocks=1 00:10:00.512 00:10:00.512 ' 00:10:00.512 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:00.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.512 --rc genhtml_branch_coverage=1 00:10:00.512 --rc genhtml_function_coverage=1 00:10:00.512 --rc genhtml_legend=1 00:10:00.512 --rc geninfo_all_blocks=1 00:10:00.512 --rc geninfo_unexecuted_blocks=1 00:10:00.512 00:10:00.512 ' 00:10:00.512 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:00.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.512 --rc genhtml_branch_coverage=1 00:10:00.512 --rc genhtml_function_coverage=1 00:10:00.512 --rc genhtml_legend=1 00:10:00.512 --rc geninfo_all_blocks=1 00:10:00.512 --rc geninfo_unexecuted_blocks=1 00:10:00.512 00:10:00.512 ' 00:10:00.512 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:00.512 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:00.512 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:00.512 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:00.512 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:00.512 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:00.512 15:27:39 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:00.512 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:00.512 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:00.512 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:00.512 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:00.512 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:00.513 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:00.513 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:00.513 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:00.513 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:00.513 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:00.513 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:00.513 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:00.513 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:00.513 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:00.513 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:00.513 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:00.513 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.513 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.513 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.513 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:00.513 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.513 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:00.513 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:00.513 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:00.513 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:00.513 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:00.513 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:00.513 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:00.513 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:00.513 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:00.513 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:00.513 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:00.513 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:00.513 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:00.513 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:00.513 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:00.513 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:00.513 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:00.513 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:00.513 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:00.513 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:00.513 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:00.513 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.513 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:10:00.513 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:10:00.513 
15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:00.513 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:08.652 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:08.652 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:08.652 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:08.652 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:08.652 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:08.652 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:08.652 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:08.652 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:08.652 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:08.652 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:08.652 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:08.652 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:08.652 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:08.652 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:08.652 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:08.652 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:08.652 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:08.652 15:27:47 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:08.652 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:08.652 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:08.652 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:08.652 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:08.652 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:08.652 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:08.652 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:08.652 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:08.652 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:10:08.652 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:10:08.652 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:10:08.652 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:10:08.652 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:10:08.652 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:10:08.652 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:08.652 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 
0x159b)' 00:10:08.652 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:08.652 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:08.652 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:08.652 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:08.652 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:08.652 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:08.652 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:08.652 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:08.652 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:08.652 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:08.652 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:08.652 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:08.652 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:08.652 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:08.652 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:10:08.652 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:10:08.652 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:10:08.652 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:08.652 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:10:08.652 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:08.652 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:08.652 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:08.652 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:08.652 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:08.652 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:08.653 Found net devices under 0000:31:00.0: cvl_0_0 00:10:08.653 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:08.653 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:08.653 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:08.653 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:08.653 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:08.653 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:08.653 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:08.653 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:08.653 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:08.653 Found net devices under 0000:31:00.1: cvl_0_1 00:10:08.653 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:08.653 15:27:47 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:10:08.653 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # is_hw=yes 00:10:08.653 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:10:08.653 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:10:08.653 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:10:08.653 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:08.653 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:08.653 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:08.653 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:08.653 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:08.653 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:08.653 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:08.653 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:08.653 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:08.653 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:08.653 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:08.653 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:08.653 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:08.653 
15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:08.653 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:08.653 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:08.653 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:08.653 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:08.653 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:08.653 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:08.653 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:08.653 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:08.653 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:08.653 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:08.653 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.611 ms 00:10:08.653 00:10:08.653 --- 10.0.0.2 ping statistics --- 00:10:08.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.653 rtt min/avg/max/mdev = 0.611/0.611/0.611/0.000 ms 00:10:08.653 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:08.653 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:08.653 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:10:08.653 00:10:08.653 --- 10.0.0.1 ping statistics --- 00:10:08.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.653 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:10:08.653 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:08.653 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # return 0 00:10:08.653 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:08.653 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:08.653 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:08.653 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:08.653 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:08.653 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:08.653 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:08.653 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:08.653 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:08.653 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:08.653 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:08.653 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # nvmfpid=2973349 00:10:08.653 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # waitforlisten 2973349 00:10:08.653 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@504 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:08.653 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 2973349 ']' 00:10:08.653 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.653 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:08.653 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.653 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:08.653 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:08.653 [2024-10-01 15:27:47.619818] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:10:08.653 [2024-10-01 15:27:47.619889] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:08.653 [2024-10-01 15:27:47.660769] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:08.653 [2024-10-01 15:27:47.708135] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:08.653 [2024-10-01 15:27:47.756918] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:08.653 [2024-10-01 15:27:47.756973] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:08.653 [2024-10-01 15:27:47.756981] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:08.653 [2024-10-01 15:27:47.756988] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:08.653 [2024-10-01 15:27:47.756995] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:08.653 [2024-10-01 15:27:47.757079] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:08.653 [2024-10-01 15:27:47.757242] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:08.653 [2024-10-01 15:27:47.757397] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.653 [2024-10-01 15:27:47.757397] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:09.226 15:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:09.226 15:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:10:09.226 15:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:09.226 15:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:09.226 15:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:09.226 15:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:09.226 15:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:09.226 15:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.226 15:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:09.226 [2024-10-01 15:27:48.497307] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:09.226 
15:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.226 15:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:09.226 15:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.226 15:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:09.226 Malloc0 00:10:09.226 15:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.226 15:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:09.226 15:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.226 15:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:09.226 15:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.226 15:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:09.226 15:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.226 15:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:09.226 15:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.226 15:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:09.226 15:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.226 15:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:09.226 [2024-10-01 15:27:48.563063] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:09.226 15:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.226 15:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:09.226 test case1: single bdev can't be used in multiple subsystems 00:10:09.226 15:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:09.226 15:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.226 15:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:09.226 15:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.226 15:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:09.226 15:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.226 15:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:09.226 15:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.226 15:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:09.226 15:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:09.226 15:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.226 15:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:09.226 [2024-10-01 15:27:48.598914] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:09.226 [2024-10-01 
15:27:48.598950] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:09.226 [2024-10-01 15:27:48.598959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.226 request: 00:10:09.226 { 00:10:09.226 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:09.226 "namespace": { 00:10:09.226 "bdev_name": "Malloc0", 00:10:09.226 "no_auto_visible": false 00:10:09.226 }, 00:10:09.226 "method": "nvmf_subsystem_add_ns", 00:10:09.226 "req_id": 1 00:10:09.226 } 00:10:09.226 Got JSON-RPC error response 00:10:09.226 response: 00:10:09.226 { 00:10:09.226 "code": -32602, 00:10:09.226 "message": "Invalid parameters" 00:10:09.226 } 00:10:09.226 15:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:09.226 15:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:09.226 15:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:09.226 15:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:09.226 Adding namespace failed - expected result. 
00:10:09.226 15:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:09.226 test case2: host connect to nvmf target in multiple paths 00:10:09.226 15:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:09.227 15:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.227 15:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:09.227 [2024-10-01 15:27:48.611118] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:09.227 15:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.227 15:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:11.139 15:27:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:12.564 15:27:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:12.564 15:27:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:10:12.564 15:27:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:12.564 15:27:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:12.564 15:27:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 
00:10:14.494 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:14.494 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:14.494 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:14.494 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:14.494 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:14.494 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:10:14.494 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:14.494 [global] 00:10:14.494 thread=1 00:10:14.494 invalidate=1 00:10:14.494 rw=write 00:10:14.494 time_based=1 00:10:14.494 runtime=1 00:10:14.494 ioengine=libaio 00:10:14.494 direct=1 00:10:14.494 bs=4096 00:10:14.494 iodepth=1 00:10:14.494 norandommap=0 00:10:14.494 numjobs=1 00:10:14.494 00:10:14.494 verify_dump=1 00:10:14.494 verify_backlog=512 00:10:14.494 verify_state_save=0 00:10:14.494 do_verify=1 00:10:14.494 verify=crc32c-intel 00:10:14.494 [job0] 00:10:14.494 filename=/dev/nvme0n1 00:10:14.494 Could not set queue depth (nvme0n1) 00:10:14.756 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:14.756 fio-3.35 00:10:14.756 Starting 1 thread 00:10:16.134 00:10:16.134 job0: (groupid=0, jobs=1): err= 0: pid=2974705: Tue Oct 1 15:27:55 2024 00:10:16.134 read: IOPS=244, BW=976KiB/s (1000kB/s)(988KiB/1012msec) 00:10:16.134 slat (nsec): min=6539, max=55034, avg=24879.11, stdev=7174.56 00:10:16.134 clat (usec): min=478, max=41986, avg=2762.07, stdev=8422.81 00:10:16.134 lat (usec): min=486, max=42013, 
avg=2786.95, stdev=8423.33 00:10:16.134 clat percentiles (usec): 00:10:16.134 | 1.00th=[ 502], 5.00th=[ 725], 10.00th=[ 799], 20.00th=[ 906], 00:10:16.134 | 30.00th=[ 947], 40.00th=[ 963], 50.00th=[ 979], 60.00th=[ 996], 00:10:16.134 | 70.00th=[ 1012], 80.00th=[ 1037], 90.00th=[ 1074], 95.00th=[ 1156], 00:10:16.134 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:16.134 | 99.99th=[42206] 00:10:16.134 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:10:16.134 slat (usec): min=9, max=25306, avg=78.84, stdev=1117.15 00:10:16.134 clat (usec): min=247, max=748, avg=542.89, stdev=105.37 00:10:16.134 lat (usec): min=257, max=26031, avg=621.73, stdev=1130.76 00:10:16.134 clat percentiles (usec): 00:10:16.134 | 1.00th=[ 281], 5.00th=[ 355], 10.00th=[ 375], 20.00th=[ 453], 00:10:16.134 | 30.00th=[ 498], 40.00th=[ 537], 50.00th=[ 553], 60.00th=[ 570], 00:10:16.134 | 70.00th=[ 611], 80.00th=[ 644], 90.00th=[ 676], 95.00th=[ 701], 00:10:16.134 | 99.00th=[ 725], 99.50th=[ 734], 99.90th=[ 750], 99.95th=[ 750], 00:10:16.134 | 99.99th=[ 750] 00:10:16.134 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:16.134 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:16.134 lat (usec) : 250=0.26%, 500=20.42%, 750=48.62%, 1000=18.05% 00:10:16.134 lat (msec) : 2=11.20%, 50=1.45% 00:10:16.134 cpu : usr=1.68%, sys=2.47%, ctx=761, majf=0, minf=1 00:10:16.134 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:16.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.134 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.134 issued rwts: total=247,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:16.134 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:16.134 00:10:16.134 Run status group 0 (all jobs): 00:10:16.134 READ: bw=976KiB/s (1000kB/s), 976KiB/s-976KiB/s (1000kB/s-1000kB/s), 
io=988KiB (1012kB), run=1012-1012msec 00:10:16.134 WRITE: bw=2024KiB/s (2072kB/s), 2024KiB/s-2024KiB/s (2072kB/s-2072kB/s), io=2048KiB (2097kB), run=1012-1012msec 00:10:16.134 00:10:16.134 Disk stats (read/write): 00:10:16.134 nvme0n1: ios=199/512, merge=0/0, ticks=1560/230, in_queue=1790, util=98.70% 00:10:16.134 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:16.134 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:16.134 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:16.134 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:10:16.134 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:16.134 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:16.134 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:16.134 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:16.134 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:10:16.134 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:16.134 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:16.134 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:16.134 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:16.134 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:16.134 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:16.134 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:10:16.134 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:16.134 rmmod nvme_tcp 00:10:16.134 rmmod nvme_fabrics 00:10:16.134 rmmod nvme_keyring 00:10:16.134 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:16.134 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:16.134 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:16.134 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@513 -- # '[' -n 2973349 ']' 00:10:16.134 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # killprocess 2973349 00:10:16.134 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 2973349 ']' 00:10:16.134 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 2973349 00:10:16.134 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:10:16.134 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:16.134 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2973349 00:10:16.134 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:16.134 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:16.134 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2973349' 00:10:16.134 killing process with pid 2973349 00:10:16.134 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 2973349 00:10:16.134 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 2973349 00:10:16.394 15:27:55 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:16.395 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:16.395 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:16.395 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:16.395 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-save 00:10:16.395 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:16.395 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-restore 00:10:16.395 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:16.395 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:16.395 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:16.395 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:16.395 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:18.303 15:27:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:18.303 00:10:18.303 real 0m18.067s 00:10:18.303 user 0m47.698s 00:10:18.303 sys 0m6.753s 00:10:18.303 15:27:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:18.303 15:27:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:18.303 ************************************ 00:10:18.303 END TEST nvmf_nmic 00:10:18.303 ************************************ 00:10:18.563 15:27:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:18.563 15:27:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:18.563 15:27:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:18.563 15:27:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:18.563 ************************************ 00:10:18.563 START TEST nvmf_fio_target 00:10:18.563 ************************************ 00:10:18.563 15:27:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:18.563 * Looking for test storage... 00:10:18.563 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:18.563 15:27:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:18.563 15:27:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:10:18.563 15:27:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:18.563 15:27:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:18.563 15:27:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:18.564 15:27:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:18.564 15:27:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:18.564 15:27:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:18.564 15:27:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:18.564 15:27:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 
00:10:18.564 15:27:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:18.564 15:27:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:18.564 15:27:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:18.564 15:27:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:18.564 15:27:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:18.564 15:27:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:18.564 15:27:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:18.564 15:27:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:18.564 15:27:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:18.564 15:27:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:18.564 15:27:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:18.564 15:27:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:18.564 15:27:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:18.564 15:27:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:18.564 15:27:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:18.564 15:27:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:18.564 15:27:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:18.564 15:27:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:18.564 15:27:57 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:18.564 15:27:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:18.564 15:27:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:18.564 15:27:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:18.564 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:18.564 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:18.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.564 --rc genhtml_branch_coverage=1 00:10:18.564 --rc genhtml_function_coverage=1 00:10:18.564 --rc genhtml_legend=1 00:10:18.564 --rc geninfo_all_blocks=1 00:10:18.564 --rc geninfo_unexecuted_blocks=1 00:10:18.564 00:10:18.564 ' 00:10:18.564 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:18.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.564 --rc genhtml_branch_coverage=1 00:10:18.564 --rc genhtml_function_coverage=1 00:10:18.564 --rc genhtml_legend=1 00:10:18.564 --rc geninfo_all_blocks=1 00:10:18.564 --rc geninfo_unexecuted_blocks=1 00:10:18.564 00:10:18.564 ' 00:10:18.564 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:18.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.564 --rc genhtml_branch_coverage=1 00:10:18.564 --rc genhtml_function_coverage=1 00:10:18.564 --rc genhtml_legend=1 00:10:18.564 --rc geninfo_all_blocks=1 00:10:18.564 --rc geninfo_unexecuted_blocks=1 00:10:18.564 00:10:18.564 ' 00:10:18.564 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 
00:10:18.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.564 --rc genhtml_branch_coverage=1 00:10:18.564 --rc genhtml_function_coverage=1 00:10:18.564 --rc genhtml_legend=1 00:10:18.564 --rc geninfo_all_blocks=1 00:10:18.564 --rc geninfo_unexecuted_blocks=1 00:10:18.564 00:10:18.564 ' 00:10:18.564 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:18.564 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:18.564 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:18.564 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:18.564 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:18.564 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:18.564 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:18.564 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:18.564 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:18.564 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:18.564 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:18.564 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:18.825 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:18.825 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:18.825 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:18.825 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:18.825 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:18.825 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:18.825 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:18.825 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:18.825 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:18.825 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:18.825 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:18.825 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.825 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.825 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.825 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:18.825 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.825 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:18.825 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:18.825 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:18.825 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:18.825 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:18.825 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:18.825 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:18.825 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:18.825 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:18.825 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:18.825 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:18.825 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:18.825 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:18.825 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:18.825 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:18.825 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:18.825 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:18.825 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:18.825 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:18.825 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:18.825 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:18.825 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:18.825 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:18.825 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:10:18.825 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:10:18.825 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:18.825 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:26.969 15:28:05 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:26.969 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:26.969 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:26.969 15:28:05 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:26.969 Found net devices under 0000:31:00.0: cvl_0_0 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:26.969 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:26.970 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:26.970 Found net devices under 0000:31:00.1: cvl_0_1 00:10:26.970 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:26.970 15:28:05 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:10:26.970 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # is_hw=yes 00:10:26.970 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:10:26.970 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:10:26.970 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:10:26.970 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:26.970 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:26.970 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:26.970 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:26.970 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:26.970 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:26.970 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:26.970 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:26.970 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:26.970 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:26.970 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:26.970 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:26.970 15:28:05 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:26.970 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:26.970 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:26.970 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:26.970 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:26.970 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:26.970 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:26.970 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:26.970 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:26.970 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:26.970 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:26.970 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:26.970 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms
00:10:26.970
00:10:26.970 --- 10.0.0.2 ping statistics ---
00:10:26.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:26.970 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms
00:10:26.970 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:26.970 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:26.970 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms
00:10:26.970
00:10:26.970 --- 10.0.0.1 ping statistics ---
00:10:26.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:26.970 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms
00:10:26.970 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:26.970 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # return 0
00:10:26.970 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:10:26.970 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:26.970 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:10:26.970 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:10:26.970 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:26.970 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:10:26.970 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:10:26.970 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF
00:10:26.970 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
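The network plumbing traced above (nvmf/common.sh @267–@291) can be collected into a standalone sketch. This is a hedged reconstruction from the logged commands only: the interface names cvl_0_0/cvl_0_1, the namespace name, and the 10.0.0.0/24 addresses are taken verbatim from the trace, the script requires root, and it assumes both NICs exist and are otherwise unused.

```shell
#!/usr/bin/env bash
# Sketch of the target-namespace setup performed by nvmf_tcp_init in the trace.
# Assumes root and two idle NICs; names/addresses copied from the log.
set -euo pipefail

TARGET_IF=cvl_0_0        # moved into the namespace; serves 10.0.0.2
INITIATOR_IF=cvl_0_1     # stays in the root namespace; uses 10.0.0.1
NS=cvl_0_0_ns_spdk

# Start from clean addresses, then isolate the target NIC in its own netns.
ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

# Address both ends of the point-to-point topology.
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

# Bring the links (and the namespace loopback) up.
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP port on the initiator side, as the log's ipts helper does.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

# Verify connectivity both ways, mirroring the logged pings.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

Because it moves interfaces between namespaces and edits iptables, this is privileged setup code and is not something to run outside a disposable test host.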
00:10:26.970 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:26.970 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.970 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # nvmfpid=2979308 00:10:26.970 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # waitforlisten 2979308 00:10:26.970 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:26.970 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 2979308 ']' 00:10:26.970 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.970 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:26.970 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:26.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:26.970 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:26.970 15:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.970 [2024-10-01 15:28:05.651196] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
00:10:26.970 [2024-10-01 15:28:05.651274] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:26.970 [2024-10-01 15:28:05.693538] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:26.970 [2024-10-01 15:28:05.743639] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:26.970 [2024-10-01 15:28:05.791355] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:26.970 [2024-10-01 15:28:05.791406] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:26.970 [2024-10-01 15:28:05.791414] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:26.970 [2024-10-01 15:28:05.791421] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:26.970 [2024-10-01 15:28:05.791427] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:26.970 [2024-10-01 15:28:05.791590] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:26.970 [2024-10-01 15:28:05.791749] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:26.970 [2024-10-01 15:28:05.791931] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:26.970 [2024-10-01 15:28:05.791935] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.231 15:28:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:27.231 15:28:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:10:27.231 15:28:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:27.231 15:28:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:27.231 15:28:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.231 15:28:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:27.231 15:28:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:27.231 [2024-10-01 15:28:06.655397] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:27.492 15:28:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:27.492 15:28:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:27.492 15:28:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:27.752 15:28:07 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:27.752 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:28.014 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:28.014 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:28.014 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:28.014 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:28.274 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:28.535 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:28.535 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:28.796 15:28:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:28.796 15:28:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:28.796 15:28:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:28.796 15:28:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:10:29.057 15:28:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:29.318 15:28:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:29.318 15:28:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:29.318 15:28:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:29.318 15:28:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:29.579 15:28:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:29.838 [2024-10-01 15:28:09.089938] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:29.838 15:28:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:30.098 15:28:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:30.098 15:28:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
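The `nvme connect` above is immediately followed in the log by a waitforserial poll before fio starts. A hedged host-side sketch of that connect-and-wait pattern follows: the serial SPDKISFASTANDAWESOME, the expectation of 4 namespaces (Malloc0, Malloc1, raid0, concat0), the retry bound, and the lsblk/grep probe are all taken from the trace; the hostnqn/hostid flags from the logged command are omitted here for brevity, and nvme-cli plus root are required.

```shell
#!/usr/bin/env bash
# Connect to the SPDK subsystem and wait until all expected namespaces appear,
# mirroring the nvme connect + waitforserial sequence in the trace.
set -euo pipefail

SERIAL=SPDKISFASTANDAWESOME   # serial assigned at nvmf_create_subsystem
EXPECTED=4                    # Malloc0, Malloc1, raid0, concat0

nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

# Poll lsblk until EXPECTED block devices carry the serial, giving up after
# 16 tries of 2 s each, as autotest_common.sh's waitforserial does.
for ((i = 0; i <= 15; i++)); do
  found=$(lsblk -l -o NAME,SERIAL | grep -c "$SERIAL" || true)
  (( found == EXPECTED )) && exit 0
  sleep 2
done
echo "timed out waiting for $EXPECTED devices with serial $SERIAL" >&2
exit 1
```

Polling on the device serial rather than sleeping a fixed time is what lets the test tolerate variable controller-enumeration latency.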
00:10:32.008 15:28:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4
00:10:32.008 15:28:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0
00:10:32.008 15:28:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:10:32.008 15:28:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]]
00:10:32.008 15:28:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4
00:10:32.008 15:28:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2
00:10:33.949 15:28:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:10:33.949 15:28:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:10:33.949 15:28:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:10:33.949 15:28:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4
00:10:33.949 15:28:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:10:33.949 15:28:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0
00:10:33.949 15:28:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:10:33.950 [global]
00:10:33.950 thread=1
00:10:33.950 invalidate=1
00:10:33.950 rw=write
00:10:33.950 time_based=1
00:10:33.950 runtime=1
00:10:33.950 ioengine=libaio
00:10:33.950 direct=1
00:10:33.950 bs=4096
00:10:33.950 iodepth=1
00:10:33.950 norandommap=0
00:10:33.950 numjobs=1
00:10:33.950
00:10:33.950
verify_dump=1
00:10:33.950 verify_backlog=512
00:10:33.950 verify_state_save=0
00:10:33.950 do_verify=1
00:10:33.950 verify=crc32c-intel
00:10:33.950 [job0]
00:10:33.950 filename=/dev/nvme0n1
00:10:33.950 [job1]
00:10:33.950 filename=/dev/nvme0n2
00:10:33.950 [job2]
00:10:33.950 filename=/dev/nvme0n3
00:10:33.950 [job3]
00:10:33.950 filename=/dev/nvme0n4
00:10:33.950 Could not set queue depth (nvme0n1)
00:10:33.950 Could not set queue depth (nvme0n2)
00:10:33.950 Could not set queue depth (nvme0n3)
00:10:33.950 Could not set queue depth (nvme0n4)
00:10:34.216 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:10:34.216 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:10:34.216 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:10:34.216 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:10:34.216 fio-3.35
00:10:34.216 Starting 4 threads
00:10:35.639
00:10:35.639 job0: (groupid=0, jobs=1): err= 0: pid=2981226: Tue Oct 1 15:28:14 2024
00:10:35.639 read: IOPS=24, BW=99.0KiB/s (101kB/s)(100KiB/1010msec)
00:10:35.639 slat (nsec): min=11306, max=30632, avg=26612.56, stdev=3269.64
00:10:35.639 clat (usec): min=747, max=41854, avg=30852.47, stdev=17420.73
00:10:35.639 lat (usec): min=778, max=41881, avg=30879.08, stdev=17420.13
00:10:35.639 clat percentiles (usec):
00:10:35.639 | 1.00th=[ 750], 5.00th=[ 758], 10.00th=[ 775], 20.00th=[ 791],
00:10:35.639 | 30.00th=[40633], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157],
00:10:35.639 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:10:35.639 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681],
00:10:35.639 | 99.99th=[41681]
00:10:35.639 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets
00:10:35.639 slat (nsec): min=9937, max=67472,
avg=30179.94, stdev=10920.93 00:10:35.639 clat (usec): min=181, max=613, avg=427.81, stdev=91.65 00:10:35.639 lat (usec): min=215, max=648, avg=457.99, stdev=97.35 00:10:35.639 clat percentiles (usec): 00:10:35.639 | 1.00th=[ 260], 5.00th=[ 285], 10.00th=[ 310], 20.00th=[ 334], 00:10:35.639 | 30.00th=[ 355], 40.00th=[ 383], 50.00th=[ 441], 60.00th=[ 478], 00:10:35.639 | 70.00th=[ 498], 80.00th=[ 519], 90.00th=[ 537], 95.00th=[ 562], 00:10:35.639 | 99.00th=[ 594], 99.50th=[ 611], 99.90th=[ 611], 99.95th=[ 611], 00:10:35.639 | 99.99th=[ 611] 00:10:35.639 bw ( KiB/s): min= 4096, max= 4096, per=41.09%, avg=4096.00, stdev= 0.00, samples=1 00:10:35.639 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:35.639 lat (usec) : 250=0.93%, 500=66.48%, 750=28.12%, 1000=0.93% 00:10:35.639 lat (msec) : 50=3.54% 00:10:35.639 cpu : usr=1.09%, sys=1.19%, ctx=538, majf=0, minf=1 00:10:35.639 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:35.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.639 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.639 issued rwts: total=25,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:35.639 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:35.639 job1: (groupid=0, jobs=1): err= 0: pid=2981227: Tue Oct 1 15:28:14 2024 00:10:35.639 read: IOPS=271, BW=1087KiB/s (1113kB/s)(1088KiB/1001msec) 00:10:35.639 slat (nsec): min=6938, max=44945, avg=21423.20, stdev=9347.25 00:10:35.639 clat (usec): min=219, max=42564, avg=3026.21, stdev=9701.63 00:10:35.639 lat (usec): min=247, max=42574, avg=3047.63, stdev=9702.22 00:10:35.639 clat percentiles (usec): 00:10:35.639 | 1.00th=[ 277], 5.00th=[ 334], 10.00th=[ 379], 20.00th=[ 441], 00:10:35.639 | 30.00th=[ 490], 40.00th=[ 523], 50.00th=[ 545], 60.00th=[ 586], 00:10:35.639 | 70.00th=[ 627], 80.00th=[ 734], 90.00th=[ 873], 95.00th=[41681], 00:10:35.639 | 99.00th=[42206], 99.50th=[42730], 
99.90th=[42730], 99.95th=[42730], 00:10:35.639 | 99.99th=[42730] 00:10:35.639 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:10:35.639 slat (nsec): min=6043, max=55084, avg=22430.05, stdev=11363.82 00:10:35.639 clat (usec): min=101, max=536, avg=302.59, stdev=76.56 00:10:35.639 lat (usec): min=112, max=545, avg=325.02, stdev=74.05 00:10:35.639 clat percentiles (usec): 00:10:35.639 | 1.00th=[ 116], 5.00th=[ 141], 10.00th=[ 204], 20.00th=[ 251], 00:10:35.639 | 30.00th=[ 277], 40.00th=[ 289], 50.00th=[ 306], 60.00th=[ 322], 00:10:35.639 | 70.00th=[ 334], 80.00th=[ 359], 90.00th=[ 400], 95.00th=[ 424], 00:10:35.639 | 99.00th=[ 474], 99.50th=[ 494], 99.90th=[ 537], 99.95th=[ 537], 00:10:35.639 | 99.99th=[ 537] 00:10:35.639 bw ( KiB/s): min= 4096, max= 4096, per=41.09%, avg=4096.00, stdev= 0.00, samples=1 00:10:35.639 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:35.639 lat (usec) : 250=13.14%, 500=63.39%, 750=16.58%, 1000=4.46% 00:10:35.639 lat (msec) : 2=0.13%, 10=0.26%, 50=2.04% 00:10:35.639 cpu : usr=1.10%, sys=1.50%, ctx=786, majf=0, minf=1 00:10:35.639 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:35.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.639 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.639 issued rwts: total=272,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:35.639 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:35.639 job2: (groupid=0, jobs=1): err= 0: pid=2981228: Tue Oct 1 15:28:14 2024 00:10:35.639 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:35.639 slat (nsec): min=7426, max=58054, avg=28341.11, stdev=4415.99 00:10:35.639 clat (usec): min=265, max=42563, avg=1146.41, stdev=3125.96 00:10:35.639 lat (usec): min=293, max=42591, avg=1174.76, stdev=3124.93 00:10:35.639 clat percentiles (usec): 00:10:35.639 | 1.00th=[ 553], 5.00th=[ 717], 10.00th=[ 783], 20.00th=[ 
848], 00:10:35.639 | 30.00th=[ 889], 40.00th=[ 914], 50.00th=[ 922], 60.00th=[ 938], 00:10:35.639 | 70.00th=[ 955], 80.00th=[ 963], 90.00th=[ 996], 95.00th=[ 1020], 00:10:35.639 | 99.00th=[ 1172], 99.50th=[41157], 99.90th=[42730], 99.95th=[42730], 00:10:35.639 | 99.99th=[42730] 00:10:35.639 write: IOPS=980, BW=3920KiB/s (4014kB/s)(3924KiB/1001msec); 0 zone resets 00:10:35.639 slat (usec): min=5, max=12214, avg=35.62, stdev=390.12 00:10:35.639 clat (usec): min=100, max=816, avg=360.37, stdev=101.98 00:10:35.639 lat (usec): min=111, max=12697, avg=395.99, stdev=408.11 00:10:35.639 clat percentiles (usec): 00:10:35.639 | 1.00th=[ 123], 5.00th=[ 219], 10.00th=[ 251], 20.00th=[ 281], 00:10:35.639 | 30.00th=[ 297], 40.00th=[ 322], 50.00th=[ 347], 60.00th=[ 375], 00:10:35.639 | 70.00th=[ 420], 80.00th=[ 457], 90.00th=[ 494], 95.00th=[ 529], 00:10:35.639 | 99.00th=[ 603], 99.50th=[ 627], 99.90th=[ 816], 99.95th=[ 816], 00:10:35.639 | 99.99th=[ 816] 00:10:35.639 bw ( KiB/s): min= 4096, max= 4096, per=41.09%, avg=4096.00, stdev= 0.00, samples=1 00:10:35.639 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:35.639 lat (usec) : 250=6.56%, 500=53.52%, 750=7.84%, 1000=29.54% 00:10:35.639 lat (msec) : 2=2.28%, 4=0.07%, 50=0.20% 00:10:35.639 cpu : usr=3.00%, sys=3.60%, ctx=1496, majf=0, minf=1 00:10:35.639 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:35.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.639 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.639 issued rwts: total=512,981,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:35.639 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:35.639 job3: (groupid=0, jobs=1): err= 0: pid=2981229: Tue Oct 1 15:28:14 2024 00:10:35.639 read: IOPS=221, BW=887KiB/s (908kB/s)(888KiB/1001msec) 00:10:35.639 slat (nsec): min=6928, max=43737, avg=26943.28, stdev=5302.72 00:10:35.639 clat (usec): min=609, max=41924, 
avg=3389.57, stdev=9500.55 00:10:35.639 lat (usec): min=636, max=41952, avg=3416.52, stdev=9500.31 00:10:35.639 clat percentiles (usec): 00:10:35.639 | 1.00th=[ 676], 5.00th=[ 758], 10.00th=[ 807], 20.00th=[ 848], 00:10:35.639 | 30.00th=[ 898], 40.00th=[ 930], 50.00th=[ 947], 60.00th=[ 971], 00:10:35.639 | 70.00th=[ 988], 80.00th=[ 1020], 90.00th=[ 1106], 95.00th=[41157], 00:10:35.639 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:10:35.639 | 99.99th=[41681] 00:10:35.640 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:10:35.640 slat (nsec): min=10184, max=52969, avg=28896.28, stdev=11635.60 00:10:35.640 clat (usec): min=159, max=792, avg=432.81, stdev=104.48 00:10:35.640 lat (usec): min=199, max=828, avg=461.71, stdev=111.00 00:10:35.640 clat percentiles (usec): 00:10:35.640 | 1.00th=[ 227], 5.00th=[ 281], 10.00th=[ 293], 20.00th=[ 326], 00:10:35.640 | 30.00th=[ 355], 40.00th=[ 396], 50.00th=[ 449], 60.00th=[ 478], 00:10:35.640 | 70.00th=[ 506], 80.00th=[ 529], 90.00th=[ 562], 95.00th=[ 586], 00:10:35.640 | 99.00th=[ 644], 99.50th=[ 668], 99.90th=[ 791], 99.95th=[ 791], 00:10:35.640 | 99.99th=[ 791] 00:10:35.640 bw ( KiB/s): min= 4096, max= 4096, per=41.09%, avg=4096.00, stdev= 0.00, samples=1 00:10:35.640 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:35.640 lat (usec) : 250=1.50%, 500=46.19%, 750=23.16%, 1000=22.07% 00:10:35.640 lat (msec) : 2=5.04%, 10=0.14%, 20=0.14%, 50=1.77% 00:10:35.640 cpu : usr=0.70%, sys=2.70%, ctx=735, majf=0, minf=1 00:10:35.640 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:35.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.640 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.640 issued rwts: total=222,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:35.640 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:35.640 00:10:35.640 Run status group 0 
(all jobs):
00:10:35.640 READ: bw=4083KiB/s (4181kB/s), 99.0KiB/s-2046KiB/s (101kB/s-2095kB/s), io=4124KiB (4223kB), run=1001-1010msec
00:10:35.640 WRITE: bw=9968KiB/s (10.2MB/s), 2028KiB/s-3920KiB/s (2076kB/s-4014kB/s), io=9.83MiB (10.3MB), run=1001-1010msec
00:10:35.640
00:10:35.640 Disk stats (read/write):
00:10:35.640 nvme0n1: ios=56/512, merge=0/0, ticks=625/217, in_queue=842, util=86.67%
00:10:35.640 nvme0n2: ios=121/512, merge=0/0, ticks=1373/150, in_queue=1523, util=87.74%
00:10:35.640 nvme0n3: ios=536/541, merge=0/0, ticks=1426/180, in_queue=1606, util=95.13%
00:10:35.640 nvme0n4: ios=80/512, merge=0/0, ticks=1473/216, in_queue=1689, util=94.21%
00:10:35.640 15:28:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v
00:10:35.640 [global]
00:10:35.640 thread=1
00:10:35.640 invalidate=1
00:10:35.640 rw=randwrite
00:10:35.640 time_based=1
00:10:35.640 runtime=1
00:10:35.640 ioengine=libaio
00:10:35.640 direct=1
00:10:35.640 bs=4096
00:10:35.640 iodepth=1
00:10:35.640 norandommap=0
00:10:35.640 numjobs=1
00:10:35.640
00:10:35.640 verify_dump=1
00:10:35.640 verify_backlog=512
00:10:35.640 verify_state_save=0
00:10:35.640 do_verify=1
00:10:35.640 verify=crc32c-intel
00:10:35.640 [job0]
00:10:35.640 filename=/dev/nvme0n1
00:10:35.640 [job1]
00:10:35.640 filename=/dev/nvme0n2
00:10:35.640 [job2]
00:10:35.640 filename=/dev/nvme0n3
00:10:35.640 [job3]
00:10:35.640 filename=/dev/nvme0n4
00:10:35.640 Could not set queue depth (nvme0n1)
00:10:35.640 Could not set queue depth (nvme0n2)
00:10:35.640 Could not set queue depth (nvme0n3)
00:10:35.640 Could not set queue depth (nvme0n4)
00:10:35.908 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:10:35.908 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:10:35.908 job2:
(g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:35.908 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:35.908 fio-3.35 00:10:35.908 Starting 4 threads 00:10:37.311 00:10:37.311 job0: (groupid=0, jobs=1): err= 0: pid=2981742: Tue Oct 1 15:28:16 2024 00:10:37.311 read: IOPS=573, BW=2295KiB/s (2350kB/s)(2380KiB/1037msec) 00:10:37.311 slat (nsec): min=6795, max=62266, avg=26810.67, stdev=6591.81 00:10:37.311 clat (usec): min=225, max=42034, avg=1066.09, stdev=3757.50 00:10:37.311 lat (usec): min=233, max=42062, avg=1092.90, stdev=3757.61 00:10:37.311 clat percentiles (usec): 00:10:37.311 | 1.00th=[ 343], 5.00th=[ 445], 10.00th=[ 529], 20.00th=[ 611], 00:10:37.311 | 30.00th=[ 652], 40.00th=[ 717], 50.00th=[ 742], 60.00th=[ 783], 00:10:37.311 | 70.00th=[ 816], 80.00th=[ 848], 90.00th=[ 873], 95.00th=[ 906], 00:10:37.311 | 99.00th=[ 1029], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:37.311 | 99.99th=[42206] 00:10:37.311 write: IOPS=987, BW=3950KiB/s (4045kB/s)(4096KiB/1037msec); 0 zone resets 00:10:37.311 slat (nsec): min=8866, max=55969, avg=26076.81, stdev=12448.94 00:10:37.311 clat (usec): min=108, max=679, avg=340.54, stdev=98.58 00:10:37.311 lat (usec): min=117, max=698, avg=366.62, stdev=102.21 00:10:37.311 clat percentiles (usec): 00:10:37.311 | 1.00th=[ 131], 5.00th=[ 200], 10.00th=[ 221], 20.00th=[ 265], 00:10:37.311 | 30.00th=[ 285], 40.00th=[ 302], 50.00th=[ 322], 60.00th=[ 351], 00:10:37.311 | 70.00th=[ 392], 80.00th=[ 429], 90.00th=[ 474], 95.00th=[ 523], 00:10:37.311 | 99.00th=[ 594], 99.50th=[ 603], 99.90th=[ 627], 99.95th=[ 676], 00:10:37.311 | 99.99th=[ 676] 00:10:37.311 bw ( KiB/s): min= 4096, max= 4096, per=30.15%, avg=4096.00, stdev= 0.00, samples=2 00:10:37.311 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:10:37.311 lat (usec) : 250=10.75%, 500=51.02%, 750=20.88%, 1000=16.92% 00:10:37.311 lat 
(msec) : 2=0.12%, 50=0.31% 00:10:37.311 cpu : usr=2.99%, sys=5.40%, ctx=1622, majf=0, minf=1 00:10:37.311 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:37.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.311 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.311 issued rwts: total=595,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.311 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:37.311 job1: (groupid=0, jobs=1): err= 0: pid=2981752: Tue Oct 1 15:28:16 2024 00:10:37.311 read: IOPS=18, BW=75.8KiB/s (77.7kB/s)(76.0KiB/1002msec) 00:10:37.311 slat (nsec): min=9402, max=26823, avg=25605.16, stdev=3926.23 00:10:37.311 clat (usec): min=1028, max=42049, avg=35478.28, stdev=15317.93 00:10:37.311 lat (usec): min=1037, max=42076, avg=35503.88, stdev=15320.05 00:10:37.311 clat percentiles (usec): 00:10:37.311 | 1.00th=[ 1029], 5.00th=[ 1029], 10.00th=[ 1057], 20.00th=[41681], 00:10:37.311 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:10:37.311 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:37.311 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:37.311 | 99.99th=[42206] 00:10:37.311 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:10:37.311 slat (nsec): min=9919, max=58047, avg=29578.14, stdev=9769.02 00:10:37.311 clat (usec): min=277, max=1075, avg=602.34, stdev=114.21 00:10:37.311 lat (usec): min=288, max=1110, avg=631.92, stdev=119.47 00:10:37.311 clat percentiles (usec): 00:10:37.311 | 1.00th=[ 322], 5.00th=[ 371], 10.00th=[ 453], 20.00th=[ 502], 00:10:37.311 | 30.00th=[ 562], 40.00th=[ 594], 50.00th=[ 611], 60.00th=[ 644], 00:10:37.311 | 70.00th=[ 668], 80.00th=[ 701], 90.00th=[ 734], 95.00th=[ 750], 00:10:37.311 | 99.00th=[ 832], 99.50th=[ 857], 99.90th=[ 1074], 99.95th=[ 1074], 00:10:37.311 | 99.99th=[ 1074] 00:10:37.311 bw ( KiB/s): min= 
4096, max= 4096, per=30.15%, avg=4096.00, stdev= 0.00, samples=1 00:10:37.311 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:37.311 lat (usec) : 500=19.40%, 750=71.56%, 1000=5.27% 00:10:37.311 lat (msec) : 2=0.75%, 50=3.01% 00:10:37.311 cpu : usr=1.50%, sys=0.80%, ctx=532, majf=0, minf=2 00:10:37.311 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:37.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.311 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.311 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.311 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:37.311 job2: (groupid=0, jobs=1): err= 0: pid=2981755: Tue Oct 1 15:28:16 2024 00:10:37.311 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:10:37.311 slat (nsec): min=7138, max=50914, avg=26351.75, stdev=6376.20 00:10:37.311 clat (usec): min=150, max=972, avg=530.51, stdev=140.98 00:10:37.311 lat (usec): min=158, max=1000, avg=556.87, stdev=141.78 00:10:37.311 clat percentiles (usec): 00:10:37.311 | 1.00th=[ 255], 5.00th=[ 330], 10.00th=[ 379], 20.00th=[ 400], 00:10:37.311 | 30.00th=[ 416], 40.00th=[ 445], 50.00th=[ 553], 60.00th=[ 603], 00:10:37.311 | 70.00th=[ 627], 80.00th=[ 644], 90.00th=[ 685], 95.00th=[ 775], 00:10:37.311 | 99.00th=[ 865], 99.50th=[ 898], 99.90th=[ 963], 99.95th=[ 971], 00:10:37.311 | 99.99th=[ 971] 00:10:37.311 write: IOPS=1212, BW=4851KiB/s (4968kB/s)(4856KiB/1001msec); 0 zone resets 00:10:37.311 slat (nsec): min=9289, max=58246, avg=27377.02, stdev=11449.83 00:10:37.311 clat (usec): min=90, max=889, avg=312.97, stdev=92.13 00:10:37.311 lat (usec): min=101, max=924, avg=340.34, stdev=94.18 00:10:37.311 clat percentiles (usec): 00:10:37.311 | 1.00th=[ 101], 5.00th=[ 186], 10.00th=[ 210], 20.00th=[ 265], 00:10:37.311 | 30.00th=[ 277], 40.00th=[ 289], 50.00th=[ 302], 60.00th=[ 314], 00:10:37.311 | 70.00th=[ 330], 
80.00th=[ 359], 90.00th=[ 429], 95.00th=[ 498], 00:10:37.311 | 99.00th=[ 619], 99.50th=[ 652], 99.90th=[ 750], 99.95th=[ 889], 00:10:37.311 | 99.99th=[ 889] 00:10:37.311 bw ( KiB/s): min= 5328, max= 5328, per=39.22%, avg=5328.00, stdev= 0.00, samples=1 00:10:37.311 iops : min= 1332, max= 1332, avg=1332.00, stdev= 0.00, samples=1 00:10:37.311 lat (usec) : 100=0.54%, 250=8.76%, 500=63.23%, 750=24.66%, 1000=2.82% 00:10:37.311 cpu : usr=3.00%, sys=6.90%, ctx=2239, majf=0, minf=1 00:10:37.311 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:37.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.311 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.311 issued rwts: total=1024,1214,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.311 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:37.311 job3: (groupid=0, jobs=1): err= 0: pid=2981756: Tue Oct 1 15:28:16 2024 00:10:37.311 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:37.311 slat (nsec): min=7984, max=49530, avg=27012.17, stdev=3255.13 00:10:37.311 clat (usec): min=519, max=1233, avg=1002.43, stdev=95.79 00:10:37.312 lat (usec): min=545, max=1259, avg=1029.44, stdev=95.93 00:10:37.312 clat percentiles (usec): 00:10:37.312 | 1.00th=[ 742], 5.00th=[ 832], 10.00th=[ 881], 20.00th=[ 930], 00:10:37.312 | 30.00th=[ 963], 40.00th=[ 988], 50.00th=[ 1012], 60.00th=[ 1037], 00:10:37.312 | 70.00th=[ 1057], 80.00th=[ 1074], 90.00th=[ 1106], 95.00th=[ 1139], 00:10:37.312 | 99.00th=[ 1188], 99.50th=[ 1205], 99.90th=[ 1237], 99.95th=[ 1237], 00:10:37.312 | 99.99th=[ 1237] 00:10:37.312 write: IOPS=771, BW=3085KiB/s (3159kB/s)(3088KiB/1001msec); 0 zone resets 00:10:37.312 slat (nsec): min=9873, max=61776, avg=31160.31, stdev=9238.74 00:10:37.312 clat (usec): min=240, max=1060, avg=568.25, stdev=113.14 00:10:37.312 lat (usec): min=266, max=1094, avg=599.41, stdev=116.39 00:10:37.312 clat percentiles (usec): 
00:10:37.312 | 1.00th=[ 310], 5.00th=[ 375], 10.00th=[ 424], 20.00th=[ 469], 00:10:37.312 | 30.00th=[ 506], 40.00th=[ 545], 50.00th=[ 570], 60.00th=[ 594], 00:10:37.312 | 70.00th=[ 627], 80.00th=[ 668], 90.00th=[ 717], 95.00th=[ 758], 00:10:37.312 | 99.00th=[ 832], 99.50th=[ 857], 99.90th=[ 1057], 99.95th=[ 1057], 00:10:37.312 | 99.99th=[ 1057] 00:10:37.312 bw ( KiB/s): min= 4096, max= 4096, per=30.15%, avg=4096.00, stdev= 0.00, samples=1 00:10:37.312 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:37.312 lat (usec) : 250=0.08%, 500=16.98%, 750=40.42%, 1000=20.02% 00:10:37.312 lat (msec) : 2=22.51% 00:10:37.312 cpu : usr=2.00%, sys=3.90%, ctx=1286, majf=0, minf=2 00:10:37.312 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:37.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.312 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.312 issued rwts: total=512,772,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.312 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:37.312 00:10:37.312 Run status group 0 (all jobs): 00:10:37.312 READ: bw=8293KiB/s (8492kB/s), 75.8KiB/s-4092KiB/s (77.7kB/s-4190kB/s), io=8600KiB (8806kB), run=1001-1037msec 00:10:37.312 WRITE: bw=13.3MiB/s (13.9MB/s), 2044KiB/s-4851KiB/s (2093kB/s-4968kB/s), io=13.8MiB (14.4MB), run=1001-1037msec 00:10:37.312 00:10:37.312 Disk stats (read/write): 00:10:37.312 nvme0n1: ios=636/1024, merge=0/0, ticks=420/271, in_queue=691, util=87.27% 00:10:37.312 nvme0n2: ios=37/512, merge=0/0, ticks=1385/298, in_queue=1683, util=88.28% 00:10:37.312 nvme0n3: ios=976/1024, merge=0/0, ticks=571/284, in_queue=855, util=95.36% 00:10:37.312 nvme0n4: ios=523/512, merge=0/0, ticks=1362/275, in_queue=1637, util=94.34% 00:10:37.312 15:28:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write 
-r 1 -v 00:10:37.312 [global] 00:10:37.312 thread=1 00:10:37.312 invalidate=1 00:10:37.312 rw=write 00:10:37.312 time_based=1 00:10:37.312 runtime=1 00:10:37.312 ioengine=libaio 00:10:37.312 direct=1 00:10:37.312 bs=4096 00:10:37.312 iodepth=128 00:10:37.312 norandommap=0 00:10:37.312 numjobs=1 00:10:37.312 00:10:37.312 verify_dump=1 00:10:37.312 verify_backlog=512 00:10:37.312 verify_state_save=0 00:10:37.312 do_verify=1 00:10:37.312 verify=crc32c-intel 00:10:37.312 [job0] 00:10:37.312 filename=/dev/nvme0n1 00:10:37.312 [job1] 00:10:37.312 filename=/dev/nvme0n2 00:10:37.312 [job2] 00:10:37.312 filename=/dev/nvme0n3 00:10:37.312 [job3] 00:10:37.312 filename=/dev/nvme0n4 00:10:37.312 Could not set queue depth (nvme0n1) 00:10:37.312 Could not set queue depth (nvme0n2) 00:10:37.312 Could not set queue depth (nvme0n3) 00:10:37.312 Could not set queue depth (nvme0n4) 00:10:37.577 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:37.578 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:37.578 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:37.578 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:37.578 fio-3.35 00:10:37.578 Starting 4 threads 00:10:38.999 00:10:38.999 job0: (groupid=0, jobs=1): err= 0: pid=2982226: Tue Oct 1 15:28:18 2024 00:10:38.999 read: IOPS=4049, BW=15.8MiB/s (16.6MB/s)(15.9MiB/1007msec) 00:10:38.999 slat (nsec): min=954, max=16519k, avg=135216.59, stdev=856942.85 00:10:38.999 clat (usec): min=2199, max=56355, avg=16885.30, stdev=11240.93 00:10:38.999 lat (usec): min=5373, max=59630, avg=17020.52, stdev=11339.32 00:10:38.999 clat percentiles (usec): 00:10:38.999 | 1.00th=[ 6783], 5.00th=[ 7046], 10.00th=[ 7373], 20.00th=[ 8291], 00:10:38.999 | 30.00th=[ 9110], 40.00th=[10290], 50.00th=[10814], 
60.00th=[15008], 00:10:38.999 | 70.00th=[19006], 80.00th=[27395], 90.00th=[34341], 95.00th=[41157], 00:10:38.999 | 99.00th=[51119], 99.50th=[53740], 99.90th=[56361], 99.95th=[56361], 00:10:38.999 | 99.99th=[56361] 00:10:38.999 write: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec); 0 zone resets 00:10:38.999 slat (nsec): min=1626, max=39592k, avg=105137.44, stdev=832346.04 00:10:38.999 clat (usec): min=3765, max=65685, avg=12568.68, stdev=8253.07 00:10:38.999 lat (usec): min=3773, max=65744, avg=12673.82, stdev=8336.48 00:10:38.999 clat percentiles (usec): 00:10:38.999 | 1.00th=[ 4555], 5.00th=[ 5932], 10.00th=[ 6587], 20.00th=[ 6915], 00:10:38.999 | 30.00th=[ 7308], 40.00th=[ 8455], 50.00th=[ 9503], 60.00th=[12125], 00:10:38.999 | 70.00th=[14484], 80.00th=[15008], 90.00th=[21365], 95.00th=[32637], 00:10:38.999 | 99.00th=[48497], 99.50th=[52167], 99.90th=[53740], 99.95th=[65799], 00:10:38.999 | 99.99th=[65799] 00:10:38.999 bw ( KiB/s): min=13192, max=19576, per=16.78%, avg=16384.00, stdev=4514.17, samples=2 00:10:38.999 iops : min= 3298, max= 4894, avg=4096.00, stdev=1128.54, samples=2 00:10:38.999 lat (msec) : 4=0.22%, 10=44.47%, 20=34.84%, 50=19.29%, 100=1.17% 00:10:38.999 cpu : usr=3.48%, sys=3.58%, ctx=460, majf=0, minf=2 00:10:38.999 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:38.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:38.999 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:38.999 issued rwts: total=4078,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:38.999 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:38.999 job1: (groupid=0, jobs=1): err= 0: pid=2982239: Tue Oct 1 15:28:18 2024 00:10:38.999 read: IOPS=7845, BW=30.6MiB/s (32.1MB/s)(30.8MiB/1004msec) 00:10:38.999 slat (nsec): min=949, max=11129k, avg=69613.75, stdev=509732.98 00:10:38.999 clat (usec): min=2079, max=54482, avg=9533.40, stdev=7164.96 00:10:38.999 lat (usec): 
min=2340, max=54510, avg=9603.01, stdev=7218.36 00:10:38.999 clat percentiles (usec): 00:10:38.999 | 1.00th=[ 4178], 5.00th=[ 5407], 10.00th=[ 5800], 20.00th=[ 6128], 00:10:38.999 | 30.00th=[ 6390], 40.00th=[ 6718], 50.00th=[ 6915], 60.00th=[ 7963], 00:10:38.999 | 70.00th=[ 8717], 80.00th=[10290], 90.00th=[13829], 95.00th=[25822], 00:10:38.999 | 99.00th=[46924], 99.50th=[48497], 99.90th=[51119], 99.95th=[53740], 00:10:38.999 | 99.99th=[54264] 00:10:38.999 write: IOPS=8159, BW=31.9MiB/s (33.4MB/s)(32.0MiB/1004msec); 0 zone resets 00:10:38.999 slat (nsec): min=1636, max=10397k, avg=48234.62, stdev=359931.27 00:10:38.999 clat (usec): min=661, max=18776, avg=6373.16, stdev=1842.83 00:10:38.999 lat (usec): min=669, max=18783, avg=6421.39, stdev=1864.81 00:10:38.999 clat percentiles (usec): 00:10:38.999 | 1.00th=[ 1811], 5.00th=[ 3458], 10.00th=[ 4047], 20.00th=[ 4686], 00:10:38.999 | 30.00th=[ 5800], 40.00th=[ 6390], 50.00th=[ 6587], 60.00th=[ 6718], 00:10:38.999 | 70.00th=[ 6915], 80.00th=[ 7242], 90.00th=[ 8848], 95.00th=[ 9372], 00:10:38.999 | 99.00th=[10945], 99.50th=[12518], 99.90th=[15795], 99.95th=[15795], 00:10:38.999 | 99.99th=[18744] 00:10:38.999 bw ( KiB/s): min=28672, max=36864, per=33.57%, avg=32768.00, stdev=5792.62, samples=2 00:10:38.999 iops : min= 7168, max= 9216, avg=8192.00, stdev=1448.15, samples=2 00:10:38.999 lat (usec) : 750=0.02%, 1000=0.06% 00:10:38.999 lat (msec) : 2=0.63%, 4=4.51%, 10=83.66%, 20=7.72%, 50=3.25% 00:10:38.999 lat (msec) : 100=0.16% 00:10:38.999 cpu : usr=6.18%, sys=8.57%, ctx=560, majf=0, minf=1 00:10:38.999 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:10:38.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:38.999 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:38.999 issued rwts: total=7877,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:38.999 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:38.999 job2: (groupid=0, 
jobs=1): err= 0: pid=2982262: Tue Oct 1 15:28:18 2024 00:10:38.999 read: IOPS=6502, BW=25.4MiB/s (26.6MB/s)(25.5MiB/1003msec) 00:10:38.999 slat (nsec): min=961, max=9698.9k, avg=74944.95, stdev=440268.66 00:10:38.999 clat (usec): min=1231, max=21952, avg=9422.47, stdev=2175.03 00:10:38.999 lat (usec): min=3630, max=21977, avg=9497.41, stdev=2203.72 00:10:38.999 clat percentiles (usec): 00:10:38.999 | 1.00th=[ 4293], 5.00th=[ 6915], 10.00th=[ 7308], 20.00th=[ 8291], 00:10:38.999 | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9503], 00:10:38.999 | 70.00th=[ 9634], 80.00th=[10028], 90.00th=[11076], 95.00th=[12256], 00:10:38.999 | 99.00th=[20317], 99.50th=[20579], 99.90th=[20841], 99.95th=[20841], 00:10:38.999 | 99.99th=[21890] 00:10:38.999 write: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec); 0 zone resets 00:10:38.999 slat (nsec): min=1626, max=43759k, avg=70968.15, stdev=623230.12 00:10:38.999 clat (usec): min=1966, max=52264, avg=9547.41, stdev=4700.64 00:10:38.999 lat (usec): min=1974, max=52297, avg=9618.38, stdev=4738.17 00:10:38.999 clat percentiles (usec): 00:10:38.999 | 1.00th=[ 4686], 5.00th=[ 5932], 10.00th=[ 7177], 20.00th=[ 7898], 00:10:38.999 | 30.00th=[ 8029], 40.00th=[ 8291], 50.00th=[ 8455], 60.00th=[ 8717], 00:10:38.999 | 70.00th=[ 9110], 80.00th=[10028], 90.00th=[12911], 95.00th=[16057], 00:10:38.999 | 99.00th=[45351], 99.50th=[45351], 99.90th=[45351], 99.95th=[45351], 00:10:38.999 | 99.99th=[52167] 00:10:38.999 bw ( KiB/s): min=24576, max=28672, per=27.27%, avg=26624.00, stdev=2896.31, samples=2 00:10:38.999 iops : min= 6144, max= 7168, avg=6656.00, stdev=724.08, samples=2 00:10:38.999 lat (msec) : 2=0.05%, 4=0.47%, 10=78.57%, 20=19.34%, 50=1.56% 00:10:38.999 lat (msec) : 100=0.01% 00:10:38.999 cpu : usr=4.09%, sys=5.19%, ctx=806, majf=0, minf=1 00:10:38.999 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:10:38.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:38.999 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:38.999 issued rwts: total=6522,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:38.999 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:38.999 job3: (groupid=0, jobs=1): err= 0: pid=2982271: Tue Oct 1 15:28:18 2024 00:10:38.999 read: IOPS=5209, BW=20.3MiB/s (21.3MB/s)(20.5MiB/1007msec) 00:10:38.999 slat (nsec): min=1000, max=10822k, avg=91559.71, stdev=636852.21 00:10:38.999 clat (usec): min=1398, max=38251, avg=11123.05, stdev=4305.95 00:10:38.999 lat (usec): min=3369, max=38279, avg=11214.61, stdev=4352.37 00:10:38.999 clat percentiles (usec): 00:10:38.999 | 1.00th=[ 5538], 5.00th=[ 6325], 10.00th=[ 7177], 20.00th=[ 8455], 00:10:38.999 | 30.00th=[ 8979], 40.00th=[ 9634], 50.00th=[10290], 60.00th=[10945], 00:10:38.999 | 70.00th=[11469], 80.00th=[12780], 90.00th=[14484], 95.00th=[20317], 00:10:38.999 | 99.00th=[29492], 99.50th=[29492], 99.90th=[32637], 99.95th=[32637], 00:10:38.999 | 99.99th=[38011] 00:10:38.999 write: IOPS=5592, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1007msec); 0 zone resets 00:10:38.999 slat (nsec): min=1646, max=10387k, avg=86977.62, stdev=490123.64 00:10:38.999 clat (usec): min=734, max=34545, avg=12311.75, stdev=6555.34 00:10:38.999 lat (usec): min=743, max=34553, avg=12398.73, stdev=6602.59 00:10:38.999 clat percentiles (usec): 00:10:38.999 | 1.00th=[ 3326], 5.00th=[ 4883], 10.00th=[ 5997], 20.00th=[ 7177], 00:10:38.999 | 30.00th=[ 7963], 40.00th=[ 8848], 50.00th=[10421], 60.00th=[12125], 00:10:38.999 | 70.00th=[14484], 80.00th=[16581], 90.00th=[22938], 95.00th=[25560], 00:10:38.999 | 99.00th=[31851], 99.50th=[33424], 99.90th=[34341], 99.95th=[34341], 00:10:38.999 | 99.99th=[34341] 00:10:38.999 bw ( KiB/s): min=20672, max=24368, per=23.07%, avg=22520.00, stdev=2613.47, samples=2 00:10:38.999 iops : min= 5168, max= 6092, avg=5630.00, stdev=653.37, samples=2 00:10:38.999 lat (usec) : 750=0.02%, 1000=0.06% 00:10:38.999 lat (msec) : 2=0.08%, 4=1.16%, 10=44.82%, 
20=43.73%, 50=10.12% 00:10:38.999 cpu : usr=3.98%, sys=6.26%, ctx=494, majf=0, minf=2 00:10:38.999 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:38.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:38.999 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:38.999 issued rwts: total=5246,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:38.999 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:38.999 00:10:39.000 Run status group 0 (all jobs): 00:10:39.000 READ: bw=92.0MiB/s (96.5MB/s), 15.8MiB/s-30.6MiB/s (16.6MB/s-32.1MB/s), io=92.7MiB (97.2MB), run=1003-1007msec 00:10:39.000 WRITE: bw=95.3MiB/s (100.0MB/s), 15.9MiB/s-31.9MiB/s (16.7MB/s-33.4MB/s), io=96.0MiB (101MB), run=1003-1007msec 00:10:39.000 00:10:39.000 Disk stats (read/write): 00:10:39.000 nvme0n1: ios=3111/3327, merge=0/0, ticks=27551/19224, in_queue=46775, util=86.47% 00:10:39.000 nvme0n2: ios=6200/6622, merge=0/0, ticks=46146/37271, in_queue=83417, util=90.91% 00:10:39.000 nvme0n3: ios=5170/5616, merge=0/0, ticks=27949/30370, in_queue=58319, util=95.02% 00:10:39.000 nvme0n4: ios=4630/4919, merge=0/0, ticks=47597/48846, in_queue=96443, util=93.55% 00:10:39.000 15:28:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:39.000 [global] 00:10:39.000 thread=1 00:10:39.000 invalidate=1 00:10:39.000 rw=randwrite 00:10:39.000 time_based=1 00:10:39.000 runtime=1 00:10:39.000 ioengine=libaio 00:10:39.000 direct=1 00:10:39.000 bs=4096 00:10:39.000 iodepth=128 00:10:39.000 norandommap=0 00:10:39.000 numjobs=1 00:10:39.000 00:10:39.000 verify_dump=1 00:10:39.000 verify_backlog=512 00:10:39.000 verify_state_save=0 00:10:39.000 do_verify=1 00:10:39.000 verify=crc32c-intel 00:10:39.000 [job0] 00:10:39.000 filename=/dev/nvme0n1 00:10:39.000 [job1] 00:10:39.000 filename=/dev/nvme0n2 
00:10:39.000 [job2] 00:10:39.000 filename=/dev/nvme0n3 00:10:39.000 [job3] 00:10:39.000 filename=/dev/nvme0n4 00:10:39.000 Could not set queue depth (nvme0n1) 00:10:39.000 Could not set queue depth (nvme0n2) 00:10:39.000 Could not set queue depth (nvme0n3) 00:10:39.000 Could not set queue depth (nvme0n4) 00:10:39.266 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:39.266 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:39.266 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:39.266 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:39.266 fio-3.35 00:10:39.266 Starting 4 threads 00:10:40.668 00:10:40.668 job0: (groupid=0, jobs=1): err= 0: pid=2982748: Tue Oct 1 15:28:19 2024 00:10:40.669 read: IOPS=8677, BW=33.9MiB/s (35.5MB/s)(34.0MiB/1003msec) 00:10:40.669 slat (nsec): min=921, max=6846.9k, avg=59216.27, stdev=421519.76 00:10:40.669 clat (usec): min=2688, max=13878, avg=7614.81, stdev=1572.33 00:10:40.669 lat (usec): min=2693, max=14629, avg=7674.03, stdev=1602.20 00:10:40.669 clat percentiles (usec): 00:10:40.669 | 1.00th=[ 4883], 5.00th=[ 5669], 10.00th=[ 5997], 20.00th=[ 6652], 00:10:40.669 | 30.00th=[ 6980], 40.00th=[ 7111], 50.00th=[ 7242], 60.00th=[ 7439], 00:10:40.669 | 70.00th=[ 7832], 80.00th=[ 8356], 90.00th=[ 9765], 95.00th=[11207], 00:10:40.669 | 99.00th=[12911], 99.50th=[13304], 99.90th=[13435], 99.95th=[13566], 00:10:40.669 | 99.99th=[13829] 00:10:40.669 write: IOPS=9043, BW=35.3MiB/s (37.0MB/s)(35.4MiB/1003msec); 0 zone resets 00:10:40.669 slat (nsec): min=1560, max=5634.4k, avg=48973.33, stdev=251980.83 00:10:40.669 clat (usec): min=1176, max=13457, avg=6721.16, stdev=1327.33 00:10:40.669 lat (usec): min=1186, max=13459, avg=6770.14, stdev=1345.08 00:10:40.669 clat percentiles (usec): 
00:10:40.669 | 1.00th=[ 2376], 5.00th=[ 3818], 10.00th=[ 4752], 20.00th=[ 6259], 00:10:40.669 | 30.00th=[ 6849], 40.00th=[ 6980], 50.00th=[ 7046], 60.00th=[ 7111], 00:10:40.669 | 70.00th=[ 7242], 80.00th=[ 7373], 90.00th=[ 7635], 95.00th=[ 8029], 00:10:40.669 | 99.00th=[ 9372], 99.50th=[10028], 99.90th=[13435], 99.95th=[13435], 00:10:40.669 | 99.99th=[13435] 00:10:40.669 bw ( KiB/s): min=34688, max=36864, per=34.06%, avg=35776.00, stdev=1538.66, samples=2 00:10:40.669 iops : min= 8672, max= 9216, avg=8944.00, stdev=384.67, samples=2 00:10:40.669 lat (msec) : 2=0.16%, 4=3.15%, 10=92.20%, 20=4.49% 00:10:40.669 cpu : usr=5.59%, sys=7.49%, ctx=988, majf=0, minf=1 00:10:40.669 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:10:40.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.669 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:40.669 issued rwts: total=8704,9071,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:40.669 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:40.669 job1: (groupid=0, jobs=1): err= 0: pid=2982761: Tue Oct 1 15:28:19 2024 00:10:40.669 read: IOPS=8669, BW=33.9MiB/s (35.5MB/s)(34.0MiB/1004msec) 00:10:40.669 slat (nsec): min=936, max=6382.7k, avg=59201.87, stdev=410285.72 00:10:40.669 clat (usec): min=2894, max=15631, avg=7575.28, stdev=1386.90 00:10:40.669 lat (usec): min=2903, max=15640, avg=7634.49, stdev=1424.05 00:10:40.669 clat percentiles (usec): 00:10:40.669 | 1.00th=[ 5145], 5.00th=[ 5800], 10.00th=[ 6063], 20.00th=[ 6718], 00:10:40.669 | 30.00th=[ 6980], 40.00th=[ 7177], 50.00th=[ 7373], 60.00th=[ 7570], 00:10:40.669 | 70.00th=[ 7767], 80.00th=[ 8094], 90.00th=[ 9372], 95.00th=[10683], 00:10:40.669 | 99.00th=[12125], 99.50th=[12649], 99.90th=[14222], 99.95th=[15664], 00:10:40.669 | 99.99th=[15664] 00:10:40.669 write: IOPS=9074, BW=35.4MiB/s (37.2MB/s)(35.6MiB/1004msec); 0 zone resets 00:10:40.669 slat (nsec): min=1571, max=5752.0k, 
avg=48440.40, stdev=274351.27 00:10:40.669 clat (usec): min=2281, max=13080, avg=6731.66, stdev=1285.50 00:10:40.669 lat (usec): min=2288, max=13082, avg=6780.10, stdev=1306.61 00:10:40.669 clat percentiles (usec): 00:10:40.669 | 1.00th=[ 3130], 5.00th=[ 4178], 10.00th=[ 4621], 20.00th=[ 6128], 00:10:40.669 | 30.00th=[ 6652], 40.00th=[ 6849], 50.00th=[ 6980], 60.00th=[ 7111], 00:10:40.669 | 70.00th=[ 7242], 80.00th=[ 7373], 90.00th=[ 7635], 95.00th=[ 8979], 00:10:40.669 | 99.00th=[ 9634], 99.50th=[10028], 99.90th=[12518], 99.95th=[12911], 00:10:40.669 | 99.99th=[13042] 00:10:40.669 bw ( KiB/s): min=35008, max=36864, per=34.21%, avg=35936.00, stdev=1312.39, samples=2 00:10:40.669 iops : min= 8752, max= 9216, avg=8984.00, stdev=328.10, samples=2 00:10:40.669 lat (msec) : 4=2.34%, 10=94.25%, 20=3.41% 00:10:40.669 cpu : usr=5.38%, sys=8.77%, ctx=903, majf=0, minf=2 00:10:40.669 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:10:40.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.669 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:40.669 issued rwts: total=8704,9111,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:40.669 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:40.669 job2: (groupid=0, jobs=1): err= 0: pid=2982780: Tue Oct 1 15:28:19 2024 00:10:40.669 read: IOPS=4055, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1010msec) 00:10:40.669 slat (nsec): min=1035, max=10789k, avg=100058.97, stdev=754350.95 00:10:40.669 clat (usec): min=4518, max=31260, avg=12820.94, stdev=4246.70 00:10:40.669 lat (usec): min=4523, max=31286, avg=12921.00, stdev=4309.29 00:10:40.669 clat percentiles (usec): 00:10:40.669 | 1.00th=[ 6783], 5.00th=[ 7373], 10.00th=[ 8356], 20.00th=[ 8848], 00:10:40.669 | 30.00th=[ 9634], 40.00th=[10552], 50.00th=[12649], 60.00th=[13698], 00:10:40.669 | 70.00th=[14615], 80.00th=[16057], 90.00th=[20317], 95.00th=[20841], 00:10:40.669 | 99.00th=[22414], 99.50th=[22676], 
99.90th=[25297], 99.95th=[29492], 00:10:40.669 | 99.99th=[31327] 00:10:40.669 write: IOPS=4203, BW=16.4MiB/s (17.2MB/s)(16.6MiB/1010msec); 0 zone resets 00:10:40.669 slat (nsec): min=1692, max=13217k, avg=133765.91, stdev=773888.52 00:10:40.669 clat (usec): min=1365, max=78082, avg=17790.14, stdev=14151.69 00:10:40.669 lat (usec): min=1376, max=78091, avg=17923.91, stdev=14228.76 00:10:40.669 clat percentiles (usec): 00:10:40.669 | 1.00th=[ 4948], 5.00th=[ 5342], 10.00th=[ 6325], 20.00th=[ 7963], 00:10:40.669 | 30.00th=[11207], 40.00th=[14091], 50.00th=[15008], 60.00th=[15664], 00:10:40.669 | 70.00th=[16057], 80.00th=[21627], 90.00th=[34866], 95.00th=[52691], 00:10:40.669 | 99.00th=[73925], 99.50th=[74974], 99.90th=[78119], 99.95th=[78119], 00:10:40.669 | 99.99th=[78119] 00:10:40.669 bw ( KiB/s): min=16440, max=16504, per=15.68%, avg=16472.00, stdev=45.25, samples=2 00:10:40.669 iops : min= 4110, max= 4126, avg=4118.00, stdev=11.31, samples=2 00:10:40.669 lat (msec) : 2=0.02%, 4=0.31%, 10=30.59%, 20=53.45%, 50=12.93% 00:10:40.669 lat (msec) : 100=2.69% 00:10:40.669 cpu : usr=2.48%, sys=5.45%, ctx=358, majf=0, minf=1 00:10:40.669 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:40.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.669 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:40.669 issued rwts: total=4096,4246,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:40.669 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:40.669 job3: (groupid=0, jobs=1): err= 0: pid=2982787: Tue Oct 1 15:28:19 2024 00:10:40.669 read: IOPS=3934, BW=15.4MiB/s (16.1MB/s)(15.5MiB/1007msec) 00:10:40.669 slat (nsec): min=1049, max=25495k, avg=148111.28, stdev=1050410.13 00:10:40.669 clat (usec): min=2366, max=83332, avg=16195.54, stdev=12796.72 00:10:40.669 lat (usec): min=4365, max=83338, avg=16343.65, stdev=12915.30 00:10:40.669 clat percentiles (usec): 00:10:40.669 | 1.00th=[ 5800], 
5.00th=[ 8029], 10.00th=[ 8586], 20.00th=[ 8848], 00:10:40.669 | 30.00th=[ 9110], 40.00th=[ 9765], 50.00th=[11731], 60.00th=[13042], 00:10:40.669 | 70.00th=[17171], 80.00th=[19006], 90.00th=[28705], 95.00th=[41681], 00:10:40.669 | 99.00th=[78119], 99.50th=[78119], 99.90th=[83362], 99.95th=[83362], 00:10:40.669 | 99.99th=[83362] 00:10:40.669 write: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec); 0 zone resets 00:10:40.669 slat (nsec): min=1680, max=11421k, avg=95828.14, stdev=589489.85 00:10:40.669 clat (usec): min=1210, max=83324, avg=15497.95, stdev=11646.12 00:10:40.669 lat (usec): min=1220, max=83334, avg=15593.78, stdev=11697.17 00:10:40.669 clat percentiles (usec): 00:10:40.669 | 1.00th=[ 4146], 5.00th=[ 5145], 10.00th=[ 5866], 20.00th=[ 8225], 00:10:40.669 | 30.00th=[10683], 40.00th=[11600], 50.00th=[13304], 60.00th=[15139], 00:10:40.669 | 70.00th=[15664], 80.00th=[16188], 90.00th=[26346], 95.00th=[42730], 00:10:40.669 | 99.00th=[68682], 99.50th=[69731], 99.90th=[72877], 99.95th=[80217], 00:10:40.669 | 99.99th=[83362] 00:10:40.669 bw ( KiB/s): min=16384, max=16384, per=15.60%, avg=16384.00, stdev= 0.00, samples=2 00:10:40.669 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:10:40.669 lat (msec) : 2=0.02%, 4=0.48%, 10=34.92%, 20=48.60%, 50=12.65% 00:10:40.669 lat (msec) : 100=3.33% 00:10:40.669 cpu : usr=3.88%, sys=4.17%, ctx=350, majf=0, minf=1 00:10:40.669 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:40.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.669 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:40.669 issued rwts: total=3962,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:40.669 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:40.669 00:10:40.669 Run status group 0 (all jobs): 00:10:40.669 READ: bw=98.5MiB/s (103MB/s), 15.4MiB/s-33.9MiB/s (16.1MB/s-35.5MB/s), io=99.5MiB (104MB), run=1003-1010msec 00:10:40.669 
WRITE: bw=103MiB/s (108MB/s), 15.9MiB/s-35.4MiB/s (16.7MB/s-37.2MB/s), io=104MiB (109MB), run=1003-1010msec 00:10:40.669 00:10:40.669 Disk stats (read/write): 00:10:40.669 nvme0n1: ios=7192/7664, merge=0/0, ticks=46152/41720, in_queue=87872, util=84.27% 00:10:40.669 nvme0n2: ios=7316/7680, merge=0/0, ticks=44988/41493, in_queue=86481, util=91.23% 00:10:40.669 nvme0n3: ios=3378/3584, merge=0/0, ticks=39749/61594, in_queue=101343, util=95.36% 00:10:40.669 nvme0n4: ios=3121/3422, merge=0/0, ticks=49752/52373, in_queue=102125, util=97.33% 00:10:40.669 15:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:40.669 15:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2982844 00:10:40.669 15:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:40.669 15:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:40.669 [global] 00:10:40.669 thread=1 00:10:40.669 invalidate=1 00:10:40.669 rw=read 00:10:40.669 time_based=1 00:10:40.669 runtime=10 00:10:40.669 ioengine=libaio 00:10:40.669 direct=1 00:10:40.669 bs=4096 00:10:40.669 iodepth=1 00:10:40.669 norandommap=1 00:10:40.669 numjobs=1 00:10:40.669 00:10:40.669 [job0] 00:10:40.669 filename=/dev/nvme0n1 00:10:40.669 [job1] 00:10:40.669 filename=/dev/nvme0n2 00:10:40.669 [job2] 00:10:40.669 filename=/dev/nvme0n3 00:10:40.669 [job3] 00:10:40.669 filename=/dev/nvme0n4 00:10:40.669 Could not set queue depth (nvme0n1) 00:10:40.669 Could not set queue depth (nvme0n2) 00:10:40.669 Could not set queue depth (nvme0n3) 00:10:40.669 Could not set queue depth (nvme0n4) 00:10:40.937 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:40.937 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:40.937 job2: 
(g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:40.937 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:40.937 fio-3.35 00:10:40.937 Starting 4 threads 00:10:43.475 15:28:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:43.734 15:28:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:43.734 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=253952, buflen=4096 00:10:43.734 fio: pid=2983287, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:43.734 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=1183744, buflen=4096 00:10:43.734 fio: pid=2983274, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:43.734 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:43.734 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:43.994 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:43.994 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:43.994 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=290816, buflen=4096 00:10:43.994 fio: pid=2983203, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:44.256 15:28:23 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:44.256 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:44.256 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=8060928, buflen=4096 00:10:44.256 fio: pid=2983233, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:44.256 00:10:44.256 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2983203: Tue Oct 1 15:28:23 2024 00:10:44.256 read: IOPS=24, BW=97.6KiB/s (99.9kB/s)(284KiB/2910msec) 00:10:44.256 slat (usec): min=25, max=216, avg=28.24, stdev=22.56 00:10:44.256 clat (usec): min=627, max=42157, avg=40646.78, stdev=6854.32 00:10:44.256 lat (usec): min=666, max=42183, avg=40675.05, stdev=6853.76 00:10:44.256 clat percentiles (usec): 00:10:44.256 | 1.00th=[ 627], 5.00th=[40633], 10.00th=[41157], 20.00th=[41681], 00:10:44.256 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:10:44.256 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:44.256 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:44.256 | 99.99th=[42206] 00:10:44.256 bw ( KiB/s): min= 96, max= 104, per=3.15%, avg=97.60, stdev= 3.58, samples=5 00:10:44.256 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:10:44.256 lat (usec) : 750=1.39%, 1000=1.39% 00:10:44.256 lat (msec) : 50=95.83% 00:10:44.256 cpu : usr=0.00%, sys=0.10%, ctx=73, majf=0, minf=1 00:10:44.256 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:44.256 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.256 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.256 issued rwts: total=72,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:10:44.256 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:44.256 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2983233: Tue Oct 1 15:28:23 2024 00:10:44.256 read: IOPS=635, BW=2539KiB/s (2600kB/s)(7872KiB/3100msec) 00:10:44.256 slat (usec): min=7, max=18597, avg=41.73, stdev=474.03 00:10:44.256 clat (usec): min=362, max=42170, avg=1515.61, stdev=4256.59 00:10:44.256 lat (usec): min=388, max=60009, avg=1557.35, stdev=4369.62 00:10:44.256 clat percentiles (usec): 00:10:44.256 | 1.00th=[ 709], 5.00th=[ 865], 10.00th=[ 947], 20.00th=[ 996], 00:10:44.256 | 30.00th=[ 1037], 40.00th=[ 1057], 50.00th=[ 1074], 60.00th=[ 1090], 00:10:44.256 | 70.00th=[ 1106], 80.00th=[ 1139], 90.00th=[ 1172], 95.00th=[ 1188], 00:10:44.256 | 99.00th=[40633], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:10:44.256 | 99.99th=[42206] 00:10:44.256 bw ( KiB/s): min= 104, max= 3664, per=84.50%, avg=2606.00, stdev=1405.79, samples=6 00:10:44.256 iops : min= 26, max= 916, avg=651.50, stdev=351.45, samples=6 00:10:44.256 lat (usec) : 500=0.05%, 750=1.32%, 1000=19.76% 00:10:44.256 lat (msec) : 2=77.45%, 4=0.10%, 10=0.15%, 50=1.12% 00:10:44.256 cpu : usr=0.74%, sys=1.84%, ctx=1975, majf=0, minf=2 00:10:44.256 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:44.256 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.256 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.256 issued rwts: total=1969,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.256 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:44.256 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2983274: Tue Oct 1 15:28:23 2024 00:10:44.256 read: IOPS=106, BW=426KiB/s (436kB/s)(1156KiB/2716msec) 00:10:44.256 slat (nsec): min=8222, max=57786, avg=25775.37, stdev=3390.81 
00:10:44.256 clat (usec): min=749, max=42058, avg=9289.87, stdev=16366.37 00:10:44.256 lat (usec): min=785, max=42084, avg=9315.64, stdev=16366.11 00:10:44.256 clat percentiles (usec): 00:10:44.256 | 1.00th=[ 783], 5.00th=[ 955], 10.00th=[ 988], 20.00th=[ 1057], 00:10:44.256 | 30.00th=[ 1090], 40.00th=[ 1123], 50.00th=[ 1139], 60.00th=[ 1156], 00:10:44.256 | 70.00th=[ 1205], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:10:44.256 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:44.256 | 99.99th=[42206] 00:10:44.256 bw ( KiB/s): min= 96, max= 584, per=6.26%, avg=193.60, stdev=218.24, samples=5 00:10:44.256 iops : min= 24, max= 146, avg=48.40, stdev=54.56, samples=5 00:10:44.256 lat (usec) : 750=0.34%, 1000=12.07% 00:10:44.256 lat (msec) : 2=67.24%, 50=20.00% 00:10:44.256 cpu : usr=0.04%, sys=0.41%, ctx=291, majf=0, minf=1 00:10:44.256 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:44.256 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.256 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.256 issued rwts: total=290,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.256 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:44.256 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2983287: Tue Oct 1 15:28:23 2024 00:10:44.256 read: IOPS=24, BW=96.6KiB/s (99.0kB/s)(248KiB/2566msec) 00:10:44.256 slat (nsec): min=8144, max=33984, avg=26793.17, stdev=2562.56 00:10:44.256 clat (usec): min=426, max=42066, avg=41002.47, stdev=5256.54 00:10:44.256 lat (usec): min=460, max=42093, avg=41029.25, stdev=5255.67 00:10:44.256 clat percentiles (usec): 00:10:44.256 | 1.00th=[ 429], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:44.256 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:10:44.256 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 
00:10:44.256 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:44.256 | 99.99th=[42206] 00:10:44.256 bw ( KiB/s): min= 96, max= 104, per=3.15%, avg=97.60, stdev= 3.58, samples=5 00:10:44.256 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:10:44.256 lat (usec) : 500=1.59% 00:10:44.256 lat (msec) : 50=96.83% 00:10:44.256 cpu : usr=0.00%, sys=0.12%, ctx=68, majf=0, minf=2 00:10:44.256 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:44.256 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.256 complete : 0=1.6%, 4=98.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.256 issued rwts: total=63,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.256 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:44.256 00:10:44.256 Run status group 0 (all jobs): 00:10:44.257 READ: bw=3084KiB/s (3158kB/s), 96.6KiB/s-2539KiB/s (99.0kB/s-2600kB/s), io=9560KiB (9789kB), run=2566-3100msec 00:10:44.257 00:10:44.257 Disk stats (read/write): 00:10:44.257 nvme0n1: ios=68/0, merge=0/0, ticks=2762/0, in_queue=2762, util=92.69% 00:10:44.257 nvme0n2: ios=1966/0, merge=0/0, ticks=2828/0, in_queue=2828, util=93.31% 00:10:44.257 nvme0n3: ios=140/0, merge=0/0, ticks=2516/0, in_queue=2516, util=95.46% 00:10:44.257 nvme0n4: ios=100/0, merge=0/0, ticks=3363/0, in_queue=3363, util=98.78% 00:10:44.257 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:44.257 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:44.517 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:44.517 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:44.777 15:28:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:44.777 15:28:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:44.777 15:28:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:44.777 15:28:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:45.036 15:28:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:45.036 15:28:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2982844 00:10:45.036 15:28:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:45.036 15:28:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:45.036 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.036 15:28:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:45.036 15:28:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:10:45.036 15:28:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:45.295 15:28:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:45.295 15:28:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:45.295 15:28:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:45.295 15:28:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:10:45.295 15:28:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:45.295 15:28:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:45.296 nvmf hotplug test: fio failed as expected 00:10:45.296 15:28:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:45.296 15:28:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:45.296 15:28:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:45.296 15:28:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:45.296 15:28:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:45.296 15:28:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:45.296 15:28:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:45.296 15:28:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:45.296 15:28:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:45.296 15:28:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:45.296 15:28:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:45.296 15:28:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:45.296 rmmod nvme_tcp 00:10:45.296 rmmod nvme_fabrics 00:10:45.555 rmmod nvme_keyring 
00:10:45.555 15:28:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:45.555 15:28:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:45.555 15:28:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:45.555 15:28:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@513 -- # '[' -n 2979308 ']' 00:10:45.555 15:28:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # killprocess 2979308 00:10:45.555 15:28:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 2979308 ']' 00:10:45.555 15:28:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 2979308 00:10:45.555 15:28:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:10:45.555 15:28:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:45.555 15:28:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2979308 00:10:45.555 15:28:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:45.555 15:28:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:45.555 15:28:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2979308' 00:10:45.555 killing process with pid 2979308 00:10:45.555 15:28:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 2979308 00:10:45.555 15:28:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 2979308 00:10:45.555 15:28:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:45.555 15:28:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:45.555 15:28:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:45.555 15:28:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:45.555 15:28:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-restore 00:10:45.555 15:28:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-save 00:10:45.555 15:28:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:45.555 15:28:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:45.555 15:28:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:45.555 15:28:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:45.555 15:28:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:45.555 15:28:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:48.096 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:48.096 00:10:48.096 real 0m29.204s 00:10:48.096 user 2m30.108s 00:10:48.096 sys 0m9.535s 00:10:48.096 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:48.096 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.096 ************************************ 00:10:48.096 END TEST nvmf_fio_target 00:10:48.096 ************************************ 00:10:48.096 15:28:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:48.096 
15:28:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:48.096 15:28:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:48.096 15:28:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:48.096 ************************************ 00:10:48.096 START TEST nvmf_bdevio 00:10:48.096 ************************************ 00:10:48.096 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:48.096 * Looking for test storage... 00:10:48.096 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:48.096 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:48.096 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:10:48.096 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:48.096 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:48.096 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:48.096 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:48.096 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:48.096 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:48.096 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:48.096 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:48.096 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:48.096 15:28:27 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:48.096 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:48.096 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:48.096 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:48.096 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:48.096 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:48.096 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:48.096 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:48.096 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:48.096 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:48.096 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:48.096 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:48.096 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:48.096 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:48.096 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:48.096 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:48.096 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:48.096 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:48.096 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:48.096 15:28:27 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:48.096 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:48.096 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:48.096 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:48.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.096 --rc genhtml_branch_coverage=1 00:10:48.096 --rc genhtml_function_coverage=1 00:10:48.096 --rc genhtml_legend=1 00:10:48.096 --rc geninfo_all_blocks=1 00:10:48.096 --rc geninfo_unexecuted_blocks=1 00:10:48.096 00:10:48.096 ' 00:10:48.096 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:48.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.096 --rc genhtml_branch_coverage=1 00:10:48.096 --rc genhtml_function_coverage=1 00:10:48.096 --rc genhtml_legend=1 00:10:48.096 --rc geninfo_all_blocks=1 00:10:48.096 --rc geninfo_unexecuted_blocks=1 00:10:48.096 00:10:48.096 ' 00:10:48.096 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:48.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.096 --rc genhtml_branch_coverage=1 00:10:48.096 --rc genhtml_function_coverage=1 00:10:48.096 --rc genhtml_legend=1 00:10:48.096 --rc geninfo_all_blocks=1 00:10:48.096 --rc geninfo_unexecuted_blocks=1 00:10:48.096 00:10:48.096 ' 00:10:48.096 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:48.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.096 --rc genhtml_branch_coverage=1 00:10:48.096 --rc genhtml_function_coverage=1 00:10:48.096 --rc genhtml_legend=1 00:10:48.096 --rc geninfo_all_blocks=1 00:10:48.096 --rc 
geninfo_unexecuted_blocks=1 00:10:48.096 00:10:48.096 ' 00:10:48.096 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:48.096 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:48.096 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:48.096 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:48.096 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:48.096 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:48.096 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:48.096 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:48.096 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:48.096 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:48.096 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:48.096 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:48.096 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:48.096 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:48.097 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:48.097 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:48.097 15:28:27 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:48.097 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:48.097 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:48.097 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:48.097 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:48.097 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:48.097 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:48.097 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.097 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.097 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.097 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:48.097 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.097 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:48.097 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:48.097 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:48.097 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:48.097 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:48.097 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:48.097 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:48.097 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:48.097 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:48.097 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:48.097 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:48.097 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:48.097 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:10:48.097 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:48.097 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:48.097 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:48.097 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:48.097 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:48.097 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:48.097 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:48.097 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:48.097 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:48.097 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:10:48.097 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:10:48.097 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:48.097 15:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:56.227 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:56.227 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:56.227 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:56.227 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:56.227 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a 
pci_net_devs 00:10:56.227 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:56.227 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:56.227 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:56.227 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:56.227 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:56.227 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:56.227 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:56.227 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:56.227 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:56.227 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:56.227 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:56.227 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:56.227 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:56.227 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:56.227 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:56.227 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:56.227 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:56.227 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:56.227 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:56.227 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:56.227 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:56.227 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:10:56.227 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:10:56.227 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:10:56.227 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:10:56.227 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:10:56.227 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:10:56.227 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:56.227 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:56.227 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:56.227 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:56.227 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:56.227 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:56.227 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:56.227 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:56.227 15:28:34 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:56.227 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:56.227 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:56.227 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:56.227 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:56.227 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:56.227 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:56.227 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:56.227 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:10:56.227 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:10:56.227 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:10:56.227 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:56.227 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:56.227 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:56.227 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:56.227 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:56.227 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:56.227 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:56.228 15:28:34 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:56.228 Found net devices under 0000:31:00.0: cvl_0_0 00:10:56.228 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:56.228 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:56.228 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:56.228 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:56.228 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:56.228 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:56.228 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:56.228 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:56.228 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:56.228 Found net devices under 0000:31:00.1: cvl_0_1 00:10:56.228 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:56.228 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:10:56.228 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # is_hw=yes 00:10:56.228 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:10:56.228 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:10:56.228 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:10:56.228 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:56.228 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:56.228 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:56.228 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:56.228 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:56.228 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:56.228 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:56.228 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:56.228 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:56.228 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:56.228 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:56.228 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:56.228 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:56.228 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:56.228 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:56.228 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:56.228 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
00:10:56.228 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:56.228 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:56.228 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:56.228 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:56.228 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:56.228 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:56.228 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:56.228 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.719 ms 00:10:56.228 00:10:56.228 --- 10.0.0.2 ping statistics --- 00:10:56.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.228 rtt min/avg/max/mdev = 0.719/0.719/0.719/0.000 ms 00:10:56.228 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:56.228 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:56.228 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:10:56.228 00:10:56.228 --- 10.0.0.1 ping statistics --- 00:10:56.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.228 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:10:56.228 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:56.228 15:28:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # return 0 00:10:56.228 15:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:56.228 15:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:56.228 15:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:56.228 15:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:56.228 15:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:56.228 15:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:56.228 15:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:56.228 15:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:56.228 15:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:56.228 15:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:56.228 15:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:56.228 15:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # nvmfpid=2988429 00:10:56.228 15:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # waitforlisten 2988429 00:10:56.228 15:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:56.228 15:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 2988429 ']' 00:10:56.228 15:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:56.228 15:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:56.228 15:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:56.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:56.228 15:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:56.228 15:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:56.228 [2024-10-01 15:28:35.113214] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:10:56.228 [2024-10-01 15:28:35.113278] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:56.228 [2024-10-01 15:28:35.155964] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:56.228 [2024-10-01 15:28:35.204085] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:56.228 [2024-10-01 15:28:35.252640] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:56.228 [2024-10-01 15:28:35.252703] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:56.228 [2024-10-01 15:28:35.252711] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:56.228 [2024-10-01 15:28:35.252719] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:56.228 [2024-10-01 15:28:35.252725] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:56.228 [2024-10-01 15:28:35.252912] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:10:56.228 [2024-10-01 15:28:35.253049] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:10:56.228 [2024-10-01 15:28:35.253201] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:56.228 [2024-10-01 15:28:35.253201] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:10:56.490 15:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:56.490 15:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:10:56.490 15:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:56.490 15:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:56.490 15:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:56.751 15:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:56.751 15:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:56.751 15:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.751 15:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:56.751 [2024-10-01 15:28:35.994338] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init 
*** 00:10:56.751 15:28:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.751 15:28:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:56.751 15:28:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.751 15:28:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:56.751 Malloc0 00:10:56.751 15:28:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.751 15:28:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:56.751 15:28:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.751 15:28:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:56.751 15:28:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.751 15:28:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:56.751 15:28:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.751 15:28:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:56.751 15:28:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.751 15:28:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:56.751 15:28:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.751 15:28:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:56.751 [2024-10-01 
15:28:36.059408] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:56.751 15:28:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.751 15:28:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:56.751 15:28:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:56.751 15:28:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # config=() 00:10:56.751 15:28:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # local subsystem config 00:10:56.751 15:28:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:56.751 15:28:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:56.751 { 00:10:56.751 "params": { 00:10:56.751 "name": "Nvme$subsystem", 00:10:56.751 "trtype": "$TEST_TRANSPORT", 00:10:56.751 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:56.751 "adrfam": "ipv4", 00:10:56.751 "trsvcid": "$NVMF_PORT", 00:10:56.751 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:56.751 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:56.751 "hdgst": ${hdgst:-false}, 00:10:56.751 "ddgst": ${ddgst:-false} 00:10:56.751 }, 00:10:56.751 "method": "bdev_nvme_attach_controller" 00:10:56.751 } 00:10:56.751 EOF 00:10:56.751 )") 00:10:56.751 15:28:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # cat 00:10:56.751 15:28:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # jq . 
00:10:56.751 15:28:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@581 -- # IFS=, 00:10:56.751 15:28:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:56.751 "params": { 00:10:56.751 "name": "Nvme1", 00:10:56.751 "trtype": "tcp", 00:10:56.751 "traddr": "10.0.0.2", 00:10:56.751 "adrfam": "ipv4", 00:10:56.751 "trsvcid": "4420", 00:10:56.751 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:56.751 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:56.751 "hdgst": false, 00:10:56.751 "ddgst": false 00:10:56.751 }, 00:10:56.751 "method": "bdev_nvme_attach_controller" 00:10:56.751 }' 00:10:56.751 [2024-10-01 15:28:36.115478] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:10:56.751 [2024-10-01 15:28:36.115554] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2988693 ] 00:10:56.751 [2024-10-01 15:28:36.151283] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:10:56.752 [2024-10-01 15:28:36.199872] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:57.012 [2024-10-01 15:28:36.248874] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:57.012 [2024-10-01 15:28:36.249012] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:57.012 [2024-10-01 15:28:36.249158] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.012 I/O targets: 00:10:57.012 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:57.012 00:10:57.012 00:10:57.012 CUnit - A unit testing framework for C - Version 2.1-3 00:10:57.012 http://cunit.sourceforge.net/ 00:10:57.012 00:10:57.012 00:10:57.012 Suite: bdevio tests on: Nvme1n1 00:10:57.272 Test: blockdev write read block ...passed 00:10:57.273 Test: blockdev write zeroes read block ...passed 00:10:57.273 Test: blockdev write zeroes read no split ...passed 00:10:57.273 Test: blockdev write zeroes read split ...passed 00:10:57.273 Test: blockdev write zeroes read split partial ...passed 00:10:57.273 Test: blockdev reset ...[2024-10-01 15:28:36.571007] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:57.273 [2024-10-01 15:28:36.571111] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21c1c50 (9): Bad file descriptor 00:10:57.273 [2024-10-01 15:28:36.629465] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:57.273 passed 00:10:57.273 Test: blockdev write read 8 blocks ...passed 00:10:57.273 Test: blockdev write read size > 128k ...passed 00:10:57.273 Test: blockdev write read invalid size ...passed 00:10:57.273 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:57.273 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:57.273 Test: blockdev write read max offset ...passed 00:10:57.533 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:57.533 Test: blockdev writev readv 8 blocks ...passed 00:10:57.533 Test: blockdev writev readv 30 x 1block ...passed 00:10:57.533 Test: blockdev writev readv block ...passed 00:10:57.533 Test: blockdev writev readv size > 128k ...passed 00:10:57.533 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:57.533 Test: blockdev comparev and writev ...[2024-10-01 15:28:36.854315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:57.533 [2024-10-01 15:28:36.854372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:57.533 [2024-10-01 15:28:36.854391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:57.533 [2024-10-01 15:28:36.854401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:57.533 [2024-10-01 15:28:36.854956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:57.533 [2024-10-01 15:28:36.854973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:57.533 [2024-10-01 15:28:36.854988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:57.533 [2024-10-01 15:28:36.854998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:57.533 [2024-10-01 15:28:36.855513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:57.533 [2024-10-01 15:28:36.855528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:57.533 [2024-10-01 15:28:36.855544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:57.533 [2024-10-01 15:28:36.855558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:57.533 [2024-10-01 15:28:36.856082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:57.533 [2024-10-01 15:28:36.856097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:57.533 [2024-10-01 15:28:36.856111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:57.533 [2024-10-01 15:28:36.856119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:57.533 passed 00:10:57.533 Test: blockdev nvme passthru rw ...passed 00:10:57.533 Test: blockdev nvme passthru vendor specific ...[2024-10-01 15:28:36.940808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:57.533 [2024-10-01 15:28:36.940826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:57.533 [2024-10-01 15:28:36.941195] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:57.533 [2024-10-01 15:28:36.941209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:57.533 [2024-10-01 15:28:36.941587] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:57.533 [2024-10-01 15:28:36.941599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:57.533 [2024-10-01 15:28:36.941969] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:57.533 [2024-10-01 15:28:36.941981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:57.533 passed 00:10:57.533 Test: blockdev nvme admin passthru ...passed 00:10:57.794 Test: blockdev copy ...passed 00:10:57.794 00:10:57.794 Run Summary: Type Total Ran Passed Failed Inactive 00:10:57.794 suites 1 1 n/a 0 0 00:10:57.794 tests 23 23 23 0 0 00:10:57.794 asserts 152 152 152 0 n/a 00:10:57.794 00:10:57.794 Elapsed time = 1.133 seconds 00:10:57.794 15:28:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:57.794 15:28:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.794 15:28:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:57.794 15:28:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.794 15:28:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 
00:10:57.794 15:28:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:57.794 15:28:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:57.794 15:28:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:57.794 15:28:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:57.794 15:28:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:57.794 15:28:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:57.794 15:28:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:57.794 rmmod nvme_tcp 00:10:57.794 rmmod nvme_fabrics 00:10:57.794 rmmod nvme_keyring 00:10:57.794 15:28:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:57.794 15:28:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:57.794 15:28:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:10:57.794 15:28:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@513 -- # '[' -n 2988429 ']' 00:10:57.794 15:28:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # killprocess 2988429 00:10:57.794 15:28:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 2988429 ']' 00:10:57.794 15:28:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 2988429 00:10:57.794 15:28:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:10:57.794 15:28:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:57.794 15:28:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2988429 00:10:58.054 15:28:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- 
# process_name=reactor_3 00:10:58.054 15:28:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:10:58.054 15:28:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2988429' 00:10:58.054 killing process with pid 2988429 00:10:58.054 15:28:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 2988429 00:10:58.054 15:28:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 2988429 00:10:58.054 15:28:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:58.054 15:28:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:58.054 15:28:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:58.054 15:28:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:58.055 15:28:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-save 00:10:58.055 15:28:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:58.055 15:28:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-restore 00:10:58.055 15:28:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:58.055 15:28:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:58.055 15:28:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:58.055 15:28:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:58.055 15:28:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.595 15:28:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush 
cvl_0_1 00:11:00.595 00:11:00.595 real 0m12.459s 00:11:00.595 user 0m13.072s 00:11:00.595 sys 0m6.463s 00:11:00.595 15:28:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:00.595 15:28:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:00.595 ************************************ 00:11:00.595 END TEST nvmf_bdevio 00:11:00.595 ************************************ 00:11:00.595 15:28:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:00.595 00:11:00.595 real 5m7.940s 00:11:00.595 user 11m41.753s 00:11:00.595 sys 1m53.341s 00:11:00.595 15:28:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:00.595 15:28:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:00.595 ************************************ 00:11:00.595 END TEST nvmf_target_core 00:11:00.595 ************************************ 00:11:00.595 15:28:39 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:00.595 15:28:39 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:00.596 15:28:39 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:00.596 15:28:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:00.596 ************************************ 00:11:00.596 START TEST nvmf_target_extra 00:11:00.596 ************************************ 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:00.596 * Looking for test storage... 
00:11:00.596 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lcov --version 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:00.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.596 --rc genhtml_branch_coverage=1 00:11:00.596 --rc genhtml_function_coverage=1 00:11:00.596 --rc genhtml_legend=1 00:11:00.596 --rc geninfo_all_blocks=1 00:11:00.596 --rc geninfo_unexecuted_blocks=1 00:11:00.596 00:11:00.596 ' 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:00.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.596 --rc 
genhtml_branch_coverage=1 00:11:00.596 --rc genhtml_function_coverage=1 00:11:00.596 --rc genhtml_legend=1 00:11:00.596 --rc geninfo_all_blocks=1 00:11:00.596 --rc geninfo_unexecuted_blocks=1 00:11:00.596 00:11:00.596 ' 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:00.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.596 --rc genhtml_branch_coverage=1 00:11:00.596 --rc genhtml_function_coverage=1 00:11:00.596 --rc genhtml_legend=1 00:11:00.596 --rc geninfo_all_blocks=1 00:11:00.596 --rc geninfo_unexecuted_blocks=1 00:11:00.596 00:11:00.596 ' 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:00.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.596 --rc genhtml_branch_coverage=1 00:11:00.596 --rc genhtml_function_coverage=1 00:11:00.596 --rc genhtml_legend=1 00:11:00.596 --rc geninfo_all_blocks=1 00:11:00.596 --rc geninfo_unexecuted_blocks=1 00:11:00.596 00:11:00.596 ' 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:00.596 15:28:39 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:00.596 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:00.596 15:28:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:00.597 ************************************ 00:11:00.597 START TEST nvmf_example 00:11:00.597 ************************************ 00:11:00.597 15:28:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:00.597 * Looking for test storage... 00:11:00.857 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:00.857 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:00.857 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lcov --version 00:11:00.857 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:00.857 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:00.857 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:00.857 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:00.857 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:00.857 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:00.857 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:00.857 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:00.858 
15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:00.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.858 --rc genhtml_branch_coverage=1 00:11:00.858 --rc genhtml_function_coverage=1 00:11:00.858 --rc genhtml_legend=1 00:11:00.858 --rc geninfo_all_blocks=1 00:11:00.858 --rc geninfo_unexecuted_blocks=1 00:11:00.858 00:11:00.858 ' 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:00.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.858 --rc genhtml_branch_coverage=1 00:11:00.858 --rc genhtml_function_coverage=1 00:11:00.858 --rc genhtml_legend=1 00:11:00.858 --rc geninfo_all_blocks=1 00:11:00.858 --rc geninfo_unexecuted_blocks=1 00:11:00.858 00:11:00.858 ' 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:00.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.858 --rc genhtml_branch_coverage=1 00:11:00.858 --rc genhtml_function_coverage=1 00:11:00.858 --rc genhtml_legend=1 00:11:00.858 --rc geninfo_all_blocks=1 00:11:00.858 --rc geninfo_unexecuted_blocks=1 00:11:00.858 00:11:00.858 ' 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:00.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.858 --rc 
genhtml_branch_coverage=1 00:11:00.858 --rc genhtml_function_coverage=1 00:11:00.858 --rc genhtml_legend=1 00:11:00.858 --rc geninfo_all_blocks=1 00:11:00.858 --rc geninfo_unexecuted_blocks=1 00:11:00.858 00:11:00.858 ' 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:00.858 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:00.858 15:28:40 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:00.858 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:00.859 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:00.859 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:00.859 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.859 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:00.859 
15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.859 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:11:00.859 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:11:00.859 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:00.859 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:08.992 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:08.992 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:08.992 
15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:08.992 Found net devices under 0000:31:00.0: cvl_0_0 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@414 -- # [[ up == up ]] 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:08.992 Found net devices under 0000:31:00.1: cvl_0_1 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # is_hw=yes 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:08.992 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:08.993 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:08.993 15:28:47 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:08.993 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:08.993 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.585 ms 00:11:08.993 00:11:08.993 --- 10.0.0.2 ping statistics --- 00:11:08.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.993 rtt min/avg/max/mdev = 0.585/0.585/0.585/0.000 ms 00:11:08.993 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:08.993 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:08.993 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:11:08.993 00:11:08.993 --- 10.0.0.1 ping statistics --- 00:11:08.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.993 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:11:08.993 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:08.993 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # return 0 00:11:08.993 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:08.993 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:08.993 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:08.993 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:08.993 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:08.993 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:08.993 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:08.993 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # 
nvmfexamplestart '-m 0xF' 00:11:08.993 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:08.993 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:08.993 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:08.993 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:08.993 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:08.993 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2993265 00:11:08.993 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:08.993 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:08.993 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2993265 00:11:08.993 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 2993265 ']' 00:11:08.993 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.993 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:08.993 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:08.993 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:08.993 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:09.564 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:09.564 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:11:09.564 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:09.565 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:09.565 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:09.565 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:09.565 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.565 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:09.565 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.565 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:09.565 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.565 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:09.565 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.565 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:09.565 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:09.565 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.565 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:09.565 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.565 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:09.565 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:09.565 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.565 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:09.565 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.565 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:09.565 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.565 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:09.565 15:28:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.565 15:28:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:09.565 15:28:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:21.792 Initializing NVMe Controllers 00:11:21.792 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:21.792 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:21.792 Initialization complete. Launching workers. 00:11:21.792 ======================================================== 00:11:21.792 Latency(us) 00:11:21.792 Device Information : IOPS MiB/s Average min max 00:11:21.792 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19533.88 76.30 3276.08 625.78 15468.07 00:11:21.792 ======================================================== 00:11:21.792 Total : 19533.88 76.30 3276.08 625.78 15468.07 00:11:21.792 00:11:21.792 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:21.792 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:21.792 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:21.792 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:21.792 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:21.792 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:21.792 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:21.792 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:21.792 rmmod nvme_tcp 00:11:21.792 rmmod nvme_fabrics 00:11:21.792 rmmod nvme_keyring 00:11:21.792 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:21.792 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:11:21.792 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- 
# return 0 00:11:21.792 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@513 -- # '[' -n 2993265 ']' 00:11:21.792 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # killprocess 2993265 00:11:21.793 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 2993265 ']' 00:11:21.793 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 2993265 00:11:21.793 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:11:21.793 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:21.793 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2993265 00:11:21.793 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:11:21.793 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:11:21.793 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2993265' 00:11:21.793 killing process with pid 2993265 00:11:21.793 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 2993265 00:11:21.793 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 2993265 00:11:21.793 nvmf threads initialize successfully 00:11:21.793 bdev subsystem init successfully 00:11:21.793 created a nvmf target service 00:11:21.793 create targets's poll groups done 00:11:21.793 all subsystems of target started 00:11:21.793 nvmf target is running 00:11:21.793 all subsystems of target stopped 00:11:21.793 destroy targets's poll groups done 00:11:21.793 destroyed the nvmf target service 00:11:21.793 bdev subsystem finish successfully 00:11:21.793 nvmf threads destroy successfully 00:11:21.793 15:28:59 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:21.793 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:21.793 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:21.793 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:21.793 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@787 -- # iptables-save 00:11:21.793 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:21.793 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@787 -- # iptables-restore 00:11:21.793 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:21.793 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:21.793 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.793 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:21.793 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:22.362 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:22.362 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:22.362 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:22.362 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:22.362 00:11:22.362 real 0m21.726s 00:11:22.362 user 0m46.701s 00:11:22.363 sys 0m7.263s 00:11:22.363 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:11:22.363 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:22.363 ************************************ 00:11:22.363 END TEST nvmf_example 00:11:22.363 ************************************ 00:11:22.363 15:29:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:22.363 15:29:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:22.363 15:29:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:22.363 15:29:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:22.363 ************************************ 00:11:22.363 START TEST nvmf_filesystem 00:11:22.363 ************************************ 00:11:22.363 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:22.626 * Looking for test storage... 
00:11:22.626 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:22.626 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:22.626 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:11:22.626 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:22.626 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:22.626 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:22.626 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:22.626 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:22.626 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:22.626 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:22.626 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:22.626 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:22.626 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:22.626 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:22.626 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:22.626 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:22.626 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:22.626 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:22.626 
15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:22.626 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:22.626 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:22.626 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:22.626 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:22.626 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:22.626 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:22.626 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:22.626 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:22.626 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:22.626 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:22.626 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:22.626 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:22.626 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:22.626 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:22.626 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:22.626 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:22.626 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:22.626 --rc genhtml_branch_coverage=1 00:11:22.626 --rc genhtml_function_coverage=1 00:11:22.626 --rc genhtml_legend=1 00:11:22.626 --rc geninfo_all_blocks=1 00:11:22.626 --rc geninfo_unexecuted_blocks=1 00:11:22.626 00:11:22.626 ' 00:11:22.626 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:22.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.626 --rc genhtml_branch_coverage=1 00:11:22.626 --rc genhtml_function_coverage=1 00:11:22.626 --rc genhtml_legend=1 00:11:22.626 --rc geninfo_all_blocks=1 00:11:22.626 --rc geninfo_unexecuted_blocks=1 00:11:22.626 00:11:22.626 ' 00:11:22.626 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:22.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.626 --rc genhtml_branch_coverage=1 00:11:22.626 --rc genhtml_function_coverage=1 00:11:22.626 --rc genhtml_legend=1 00:11:22.626 --rc geninfo_all_blocks=1 00:11:22.626 --rc geninfo_unexecuted_blocks=1 00:11:22.626 00:11:22.626 ' 00:11:22.626 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:22.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.626 --rc genhtml_branch_coverage=1 00:11:22.626 --rc genhtml_function_coverage=1 00:11:22.626 --rc genhtml_legend=1 00:11:22.626 --rc geninfo_all_blocks=1 00:11:22.626 --rc geninfo_unexecuted_blocks=1 00:11:22.626 00:11:22.626 ' 00:11:22.626 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:22.626 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:22.626 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:22.626 15:29:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:22.626 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:22.626 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:22.626 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:22.626 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:22.626 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:22.627 15:29:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:11:22.627 15:29:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- 
# CONFIG_DAOS=n 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 
00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:22.627 15:29:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_FC=n 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_URING=n 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:22.627 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:22.628 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:22.628 15:29:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:22.628 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:22.628 #define SPDK_CONFIG_H 00:11:22.628 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:22.628 #define SPDK_CONFIG_APPS 1 00:11:22.628 #define SPDK_CONFIG_ARCH native 00:11:22.628 #undef SPDK_CONFIG_ASAN 00:11:22.628 #undef SPDK_CONFIG_AVAHI 00:11:22.628 #undef SPDK_CONFIG_CET 00:11:22.628 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:22.628 #define SPDK_CONFIG_COVERAGE 1 00:11:22.628 #define SPDK_CONFIG_CROSS_PREFIX 00:11:22.628 #undef SPDK_CONFIG_CRYPTO 00:11:22.628 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:22.628 #undef SPDK_CONFIG_CUSTOMOCF 00:11:22.628 #undef SPDK_CONFIG_DAOS 00:11:22.628 #define SPDK_CONFIG_DAOS_DIR 00:11:22.628 #define SPDK_CONFIG_DEBUG 1 00:11:22.628 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:22.628 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:22.628 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:22.628 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:22.628 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:22.628 #undef SPDK_CONFIG_DPDK_UADK 00:11:22.628 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:22.628 #define SPDK_CONFIG_EXAMPLES 1 00:11:22.628 #undef SPDK_CONFIG_FC 00:11:22.628 #define SPDK_CONFIG_FC_PATH 00:11:22.628 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:22.628 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:22.628 #define SPDK_CONFIG_FSDEV 1 00:11:22.628 #undef SPDK_CONFIG_FUSE 00:11:22.628 #undef SPDK_CONFIG_FUZZER 00:11:22.628 #define SPDK_CONFIG_FUZZER_LIB 00:11:22.628 #undef SPDK_CONFIG_GOLANG 00:11:22.628 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:22.628 #define 
SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:22.628 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:22.628 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:22.628 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:22.628 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:22.628 #undef SPDK_CONFIG_HAVE_LZ4 00:11:22.628 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:22.628 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:22.628 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:22.628 #define SPDK_CONFIG_IDXD 1 00:11:22.628 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:22.628 #undef SPDK_CONFIG_IPSEC_MB 00:11:22.628 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:22.628 #define SPDK_CONFIG_ISAL 1 00:11:22.628 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:22.628 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:22.628 #define SPDK_CONFIG_LIBDIR 00:11:22.628 #undef SPDK_CONFIG_LTO 00:11:22.628 #define SPDK_CONFIG_MAX_LCORES 128 00:11:22.628 #define SPDK_CONFIG_NVME_CUSE 1 00:11:22.628 #undef SPDK_CONFIG_OCF 00:11:22.628 #define SPDK_CONFIG_OCF_PATH 00:11:22.628 #define SPDK_CONFIG_OPENSSL_PATH 00:11:22.628 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:22.628 #define SPDK_CONFIG_PGO_DIR 00:11:22.628 #undef SPDK_CONFIG_PGO_USE 00:11:22.628 #define SPDK_CONFIG_PREFIX /usr/local 00:11:22.628 #undef SPDK_CONFIG_RAID5F 00:11:22.628 #undef SPDK_CONFIG_RBD 00:11:22.628 #define SPDK_CONFIG_RDMA 1 00:11:22.628 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:22.628 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:22.628 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:22.628 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:22.628 #define SPDK_CONFIG_SHARED 1 00:11:22.628 #undef SPDK_CONFIG_SMA 00:11:22.628 #define SPDK_CONFIG_TESTS 1 00:11:22.628 #undef SPDK_CONFIG_TSAN 00:11:22.628 #define SPDK_CONFIG_UBLK 1 00:11:22.628 #define SPDK_CONFIG_UBSAN 1 00:11:22.628 #undef SPDK_CONFIG_UNIT_TESTS 00:11:22.628 #undef SPDK_CONFIG_URING 00:11:22.628 #define SPDK_CONFIG_URING_PATH 00:11:22.628 #undef SPDK_CONFIG_URING_ZNS 00:11:22.628 #undef SPDK_CONFIG_USDT 00:11:22.628 
#undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:22.628 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:22.628 #define SPDK_CONFIG_VFIO_USER 1 00:11:22.628 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:22.628 #define SPDK_CONFIG_VHOST 1 00:11:22.628 #define SPDK_CONFIG_VIRTIO 1 00:11:22.628 #undef SPDK_CONFIG_VTUNE 00:11:22.628 #define SPDK_CONFIG_VTUNE_DIR 00:11:22.628 #define SPDK_CONFIG_WERROR 1 00:11:22.628 #define SPDK_CONFIG_WPDK_DIR 00:11:22.628 #undef SPDK_CONFIG_XNVME 00:11:22.628 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:22.628 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:22.628 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:22.628 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:22.628 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:22.628 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:22.628 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:22.628 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.628 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.628 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.628 15:29:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:22.628 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.628 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:22.628 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:22.628 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:22.628 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:22.628 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:22.628 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:22.628 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:22.628 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:22.628 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:22.628 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:22.628 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:22.628 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:22.628 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:22.628 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:22.628 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:22.628 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:22.628 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:22.628 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:22.628 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:22.628 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:22.628 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:22.628 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:22.628 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:22.628 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:11:22.628 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:22.628 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:22.628 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:22.628 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:11:22.628 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:22.629 15:29:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:22.629 
15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:22.629 15:29:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:22.629 
15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@140 -- # : main 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:22.629 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:22.630 15:29:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@288 -- # MAKEFLAGS=-j144 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:11:22.630 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:11:22.631 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:11:22.631 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:11:22.631 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:11:22.631 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:11:22.631 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:11:22.631 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 2996053 ]] 00:11:22.631 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 2996053 00:11:22.893 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:11:22.893 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:11:22.893 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:11:22.893 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:11:22.893 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:11:22.893 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:11:22.893 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:11:22.893 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:11:22.893 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.j10zjq 00:11:22.893 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:22.893 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:11:22.893 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:11:22.893 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.j10zjq/tests/target /tmp/spdk.j10zjq 00:11:22.893 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:11:22.893 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:22.893 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:11:22.893 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:11:22.893 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:11:22.893 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:11:22.893 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:11:22.893 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # 
sizes["$mount"]=67108864 00:11:22.893 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:11:22.893 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:22.893 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:11:22.893 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:11:22.893 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=678309888 00:11:22.893 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:11:22.893 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=4606119936 00:11:22.893 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=121091424256 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=129356562432 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=8265138176 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 
00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64666914816 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678281216 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=11366400 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=25847898112 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=25871314944 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23416832 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=efivarfs 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=efivarfs 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=175104 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=507904 00:11:22.894 15:29:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=328704 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64677797888 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678281216 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=483328 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12935643136 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12935655424 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:11:22.894 * Looking for test storage... 
00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=121091424256 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=10479730688 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:22.894 15:29:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:22.894 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1668 -- # set -o errtrace 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1672 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1673 -- # true 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1675 -- # xtrace_fd 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:22.894 15:29:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:22.894 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:22.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.895 --rc genhtml_branch_coverage=1 00:11:22.895 --rc genhtml_function_coverage=1 00:11:22.895 --rc genhtml_legend=1 00:11:22.895 --rc geninfo_all_blocks=1 00:11:22.895 --rc geninfo_unexecuted_blocks=1 00:11:22.895 00:11:22.895 ' 00:11:22.895 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:22.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.895 --rc genhtml_branch_coverage=1 00:11:22.895 --rc genhtml_function_coverage=1 00:11:22.895 --rc genhtml_legend=1 00:11:22.895 --rc geninfo_all_blocks=1 00:11:22.895 --rc geninfo_unexecuted_blocks=1 00:11:22.895 00:11:22.895 ' 00:11:22.895 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:22.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.895 --rc genhtml_branch_coverage=1 00:11:22.895 --rc genhtml_function_coverage=1 00:11:22.895 --rc genhtml_legend=1 00:11:22.895 --rc geninfo_all_blocks=1 00:11:22.895 --rc geninfo_unexecuted_blocks=1 00:11:22.895 00:11:22.895 ' 00:11:22.895 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:22.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.895 --rc genhtml_branch_coverage=1 00:11:22.895 --rc genhtml_function_coverage=1 00:11:22.895 --rc genhtml_legend=1 00:11:22.895 --rc geninfo_all_blocks=1 00:11:22.895 --rc geninfo_unexecuted_blocks=1 00:11:22.895 00:11:22.895 ' 00:11:22.895 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:22.895 15:29:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:22.895 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:22.895 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:22.895 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:22.895 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:22.895 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:22.895 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:22.895 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:22.895 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:22.895 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:22.895 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:22.895 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:22.895 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:22.895 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:22.895 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:22.895 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:22.895 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:22.895 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:22.895 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:22.895 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:22.895 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:22.895 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:22.895 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.895 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.895 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.895 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:22.895 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.895 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:22.895 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:22.895 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:22.895 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:22.895 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:22.895 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:22.895 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:22.895 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:22.895 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:22.895 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:22.895 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:22.895 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:11:22.895 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:22.895 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:22.895 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:22.895 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:22.895 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:22.895 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:22.895 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:22.895 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:22.895 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:22.895 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:22.895 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:11:22.895 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:11:22.895 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:22.895 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:31.031 15:29:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:31.031 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 
00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:31.031 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:31.031 Found net devices under 0000:31:00.0: cvl_0_0 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:31.031 Found net devices under 0000:31:00.1: cvl_0_1 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # 
(( 2 == 0 )) 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # is_hw=yes 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:31.031 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:31.031 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:31.031 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:31.031 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:31.031 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:31.031 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.665 ms 00:11:31.031 00:11:31.031 --- 10.0.0.2 ping statistics --- 00:11:31.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:31.031 rtt min/avg/max/mdev = 0.665/0.665/0.665/0.000 ms 00:11:31.031 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:31.031 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:31.031 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:11:31.031 00:11:31.031 --- 10.0.0.1 ping statistics --- 00:11:31.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:31.031 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:11:31.031 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:31.031 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # return 0 00:11:31.031 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:31.031 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:31.031 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:31.031 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:31.031 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:31.031 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:31.031 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:31.031 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:31.031 15:29:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:31.031 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:31.031 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:31.031 ************************************ 00:11:31.031 START TEST nvmf_filesystem_no_in_capsule 00:11:31.031 ************************************ 00:11:31.031 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:11:31.031 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:31.031 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:31.031 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:31.031 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:31.031 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.031 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # nvmfpid=3000068 00:11:31.031 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@506 -- # waitforlisten 3000068 00:11:31.031 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:31.031 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@831 -- # '[' -z 3000068 ']' 00:11:31.031 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:31.031 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:31.031 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:31.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:31.031 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:31.031 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.031 [2024-10-01 15:29:10.202721] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:11:31.031 [2024-10-01 15:29:10.202802] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:31.031 [2024-10-01 15:29:10.244942] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:31.031 [2024-10-01 15:29:10.295782] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:31.032 [2024-10-01 15:29:10.343851] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:31.032 [2024-10-01 15:29:10.343914] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:31.032 [2024-10-01 15:29:10.343923] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:31.032 [2024-10-01 15:29:10.343931] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:31.032 [2024-10-01 15:29:10.343937] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:31.032 [2024-10-01 15:29:10.344031] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:31.032 [2024-10-01 15:29:10.344163] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:31.032 [2024-10-01 15:29:10.344283] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.032 [2024-10-01 15:29:10.344285] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:31.602 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:31.602 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:31.602 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:31.602 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:31.602 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.861 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:31.861 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:31.861 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:31.862 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.862 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.862 [2024-10-01 15:29:11.067804] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:31.862 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.862 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:31.862 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.862 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.862 Malloc1 00:11:31.862 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.862 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:31.862 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.862 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.862 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.862 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:31.862 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.862 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.862 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.862 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:31.862 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.862 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.862 [2024-10-01 15:29:11.226537] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:31.862 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.862 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:31.862 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:31.862 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:31.862 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:31.862 15:29:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:31.862 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:31.862 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.862 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.862 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.862 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:31.862 { 00:11:31.862 "name": "Malloc1", 00:11:31.862 "aliases": [ 00:11:31.862 "1188a3a6-1e84-4226-aa7f-12966eb1e0bf" 00:11:31.862 ], 00:11:31.862 "product_name": "Malloc disk", 00:11:31.862 "block_size": 512, 00:11:31.862 "num_blocks": 1048576, 00:11:31.862 "uuid": "1188a3a6-1e84-4226-aa7f-12966eb1e0bf", 00:11:31.862 "assigned_rate_limits": { 00:11:31.862 "rw_ios_per_sec": 0, 00:11:31.862 "rw_mbytes_per_sec": 0, 00:11:31.862 "r_mbytes_per_sec": 0, 00:11:31.862 "w_mbytes_per_sec": 0 00:11:31.862 }, 00:11:31.862 "claimed": true, 00:11:31.862 "claim_type": "exclusive_write", 00:11:31.862 "zoned": false, 00:11:31.862 "supported_io_types": { 00:11:31.862 "read": true, 00:11:31.862 "write": true, 00:11:31.862 "unmap": true, 00:11:31.862 "flush": true, 00:11:31.862 "reset": true, 00:11:31.862 "nvme_admin": false, 00:11:31.862 "nvme_io": false, 00:11:31.862 "nvme_io_md": false, 00:11:31.862 "write_zeroes": true, 00:11:31.862 "zcopy": true, 00:11:31.862 "get_zone_info": false, 00:11:31.862 "zone_management": false, 00:11:31.862 "zone_append": false, 00:11:31.862 "compare": false, 00:11:31.862 "compare_and_write": 
false, 00:11:31.862 "abort": true, 00:11:31.862 "seek_hole": false, 00:11:31.862 "seek_data": false, 00:11:31.862 "copy": true, 00:11:31.862 "nvme_iov_md": false 00:11:31.862 }, 00:11:31.862 "memory_domains": [ 00:11:31.862 { 00:11:31.862 "dma_device_id": "system", 00:11:31.862 "dma_device_type": 1 00:11:31.862 }, 00:11:31.862 { 00:11:31.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.862 "dma_device_type": 2 00:11:31.862 } 00:11:31.862 ], 00:11:31.862 "driver_specific": {} 00:11:31.862 } 00:11:31.862 ]' 00:11:31.862 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:31.862 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:31.862 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:32.122 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:32.122 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:32.122 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:32.122 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:32.122 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:33.502 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:11:33.502 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:33.502 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:33.502 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:33.502 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:35.406 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:35.406 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:35.406 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:35.666 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:35.666 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:35.666 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:35.666 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:35.666 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:35.666 15:29:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:35.666 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:35.666 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:35.666 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:35.666 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:35.667 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:35.667 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:35.667 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:35.667 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:35.667 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:36.235 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:37.173 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:37.173 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:37.173 15:29:16 
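The size check running at target/filesystem.sh@58-67 above derives the bdev size from the `block_size` and `num_blocks` fields that `bdev_get_bdevs` returns (extracted with `jq`), then compares it against the byte size the kernel reports for the connected nvme block device. A standalone sketch of that arithmetic, using the values printed in this log (the trimmed JSON below is copied from the bdev info dump; the sysfs read at @64 is replaced by the 536870912 value echoed in the log):

```python
import json

# Trimmed bdev_get_bdevs output for Malloc1, copied from the log above.
bdev_info = json.loads('[{"name": "Malloc1", "block_size": 512, "num_blocks": 1048576}]')

bs = bdev_info[0]["block_size"]          # jq '.[] .block_size'  -> 512
nb = bdev_info[0]["num_blocks"]          # jq '.[] .num_blocks'  -> 1048576
bdev_size_mib = bs * nb // (1024 * 1024) # get_bdev_size reports MiB -> 512
malloc_size = bdev_size_mib * 1024 * 1024  # filesystem.sh@58: back to bytes

# filesystem.sh@64 reads the device size from /sys/block/<dev>; here we
# reuse the value echoed in the log to show the @67 equality check.
nvme_size = 536870912
assert nvme_size == malloc_size

print(bdev_size_mib, malloc_size)
```

Only when this equality holds does the script proceed to partition the device (`parted ... mklabel gpt mkpart`) and run the per-filesystem tests.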
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:37.173 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:37.173 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.433 ************************************ 00:11:37.433 START TEST filesystem_ext4 00:11:37.433 ************************************ 00:11:37.433 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:37.433 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:37.433 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:37.433 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:37.433 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:37.433 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:37.433 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:37.433 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:37.433 15:29:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:37.433 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:37.433 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:37.433 mke2fs 1.47.0 (5-Feb-2023) 00:11:37.433 Discarding device blocks: 0/522240 done 00:11:37.433 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:37.433 Filesystem UUID: 546e3ab6-8b0b-4614-b88d-5fb38ba6abcf 00:11:37.433 Superblock backups stored on blocks: 00:11:37.433 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:37.433 00:11:37.433 Allocating group tables: 0/64 done 00:11:37.433 Writing inode tables: 0/64 done 00:11:37.693 Creating journal (8192 blocks): done 00:11:38.632 Writing superblocks and filesystem accounting information: 0/64 done 00:11:38.632 00:11:38.632 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:38.632 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:45.207 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:45.207 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:45.207 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:45.207 15:29:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:45.207 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:45.207 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:45.207 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3000068 00:11:45.207 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:45.207 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:45.207 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:45.207 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:45.207 00:11:45.207 real 0m7.165s 00:11:45.207 user 0m0.022s 00:11:45.207 sys 0m0.084s 00:11:45.207 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:45.207 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:45.207 ************************************ 00:11:45.207 END TEST filesystem_ext4 00:11:45.207 ************************************ 00:11:45.207 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:45.207 
15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:45.207 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:45.207 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:45.207 ************************************ 00:11:45.207 START TEST filesystem_btrfs 00:11:45.207 ************************************ 00:11:45.207 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:45.207 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:45.207 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:45.207 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:45.207 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:45.207 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:45.207 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:45.207 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:45.207 15:29:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:45.207 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:45.207 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:45.207 btrfs-progs v6.8.1 00:11:45.207 See https://btrfs.readthedocs.io for more information. 00:11:45.207 00:11:45.207 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:45.207 NOTE: several default settings have changed in version 5.15, please make sure 00:11:45.207 this does not affect your deployments: 00:11:45.207 - DUP for metadata (-m dup) 00:11:45.207 - enabled no-holes (-O no-holes) 00:11:45.207 - enabled free-space-tree (-R free-space-tree) 00:11:45.207 00:11:45.207 Label: (null) 00:11:45.207 UUID: 6ae1c10e-7d5c-43b3-9f8f-c0cfe4756873 00:11:45.207 Node size: 16384 00:11:45.207 Sector size: 4096 (CPU page size: 4096) 00:11:45.207 Filesystem size: 510.00MiB 00:11:45.207 Block group profiles: 00:11:45.207 Data: single 8.00MiB 00:11:45.207 Metadata: DUP 32.00MiB 00:11:45.207 System: DUP 8.00MiB 00:11:45.207 SSD detected: yes 00:11:45.207 Zoned device: no 00:11:45.207 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:45.207 Checksum: crc32c 00:11:45.207 Number of devices: 1 00:11:45.207 Devices: 00:11:45.207 ID SIZE PATH 00:11:45.207 1 510.00MiB /dev/nvme0n1p1 00:11:45.207 00:11:45.207 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:45.207 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:45.775 15:29:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:45.775 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:45.775 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:45.775 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:46.034 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:46.034 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:46.034 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3000068 00:11:46.035 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:46.035 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:46.035 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:46.035 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:46.035 00:11:46.035 real 0m1.406s 00:11:46.035 user 0m0.034s 00:11:46.035 sys 0m0.120s 00:11:46.035 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:46.035 
15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:46.035 ************************************ 00:11:46.035 END TEST filesystem_btrfs 00:11:46.035 ************************************ 00:11:46.035 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:46.035 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:46.035 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:46.035 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:46.035 ************************************ 00:11:46.035 START TEST filesystem_xfs 00:11:46.035 ************************************ 00:11:46.035 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:46.035 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:46.035 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:46.035 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:46.035 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:46.035 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:46.035 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:46.035 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:11:46.035 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:46.035 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:46.035 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:46.035 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:46.035 = sectsz=512 attr=2, projid32bit=1 00:11:46.035 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:46.035 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:46.035 data = bsize=4096 blocks=130560, imaxpct=25 00:11:46.035 = sunit=0 swidth=0 blks 00:11:46.035 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:46.035 log =internal log bsize=4096 blocks=16384, version=2 00:11:46.035 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:46.035 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:47.413 Discarding blocks...Done. 
00:11:47.413 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:47.413 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:49.319 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:49.319 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:49.319 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:49.319 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:49.319 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:49.319 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:49.319 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3000068 00:11:49.319 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:49.319 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:49.319 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:49.319 15:29:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:49.319 00:11:49.319 real 0m3.232s 00:11:49.319 user 0m0.021s 00:11:49.319 sys 0m0.084s 00:11:49.319 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:49.319 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:49.319 ************************************ 00:11:49.319 END TEST filesystem_xfs 00:11:49.319 ************************************ 00:11:49.319 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:49.578 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:49.578 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:49.838 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.838 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:49.838 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:11:49.838 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:49.838 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:49.838 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:49.838 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:49.838 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:49.838 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:49.838 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.838 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.838 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.838 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:49.838 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3000068 00:11:49.839 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 3000068 ']' 00:11:49.839 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 3000068 00:11:49.839 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:49.839 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:49.839 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3000068 00:11:49.839 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:49.839 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:49.839 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3000068' 00:11:49.839 killing process with pid 3000068 00:11:49.839 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 3000068 00:11:49.839 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@974 -- # wait 3000068 00:11:50.098 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:50.098 00:11:50.098 real 0m19.253s 00:11:50.098 user 1m16.024s 00:11:50.098 sys 0m1.479s 00:11:50.098 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:50.098 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:50.098 ************************************ 00:11:50.098 END TEST nvmf_filesystem_no_in_capsule 00:11:50.098 ************************************ 00:11:50.098 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:50.098 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:50.098 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:50.098 15:29:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:50.098 ************************************ 00:11:50.098 START TEST nvmf_filesystem_in_capsule 00:11:50.098 ************************************ 00:11:50.098 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:11:50.098 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:50.098 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:50.098 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:50.098 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:50.098 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:50.098 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # nvmfpid=3003994 00:11:50.098 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # waitforlisten 3003994 00:11:50.098 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:50.098 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 3003994 ']' 00:11:50.098 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.098 15:29:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:50.098 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.098 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:50.098 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:50.098 [2024-10-01 15:29:29.522934] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:11:50.098 [2024-10-01 15:29:29.522982] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:50.359 [2024-10-01 15:29:29.560105] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:50.359 [2024-10-01 15:29:29.607648] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:50.359 [2024-10-01 15:29:29.637697] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:50.359 [2024-10-01 15:29:29.637733] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:50.359 [2024-10-01 15:29:29.637739] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:50.359 [2024-10-01 15:29:29.637744] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:11:50.359 [2024-10-01 15:29:29.637748] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:50.359 [2024-10-01 15:29:29.637887] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:50.359 [2024-10-01 15:29:29.638038] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:50.359 [2024-10-01 15:29:29.638241] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.359 [2024-10-01 15:29:29.638242] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:50.929 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:50.929 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:50.929 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:50.929 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:50.929 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:50.929 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:50.929 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:50.929 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:50.929 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.929 15:29:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:50.929 [2024-10-01 15:29:30.369971] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:50.929 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.929 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:50.929 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.930 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:51.190 Malloc1 00:11:51.190 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.190 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:51.190 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.190 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:51.190 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.190 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:51.190 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.190 15:29:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:51.190 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.190 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:51.190 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.190 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:51.190 [2024-10-01 15:29:30.495727] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:51.190 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.190 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:51.190 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:51.190 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:51.190 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:51.190 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:51.190 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:51.190 15:29:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.190 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:51.190 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.190 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:51.190 { 00:11:51.190 "name": "Malloc1", 00:11:51.190 "aliases": [ 00:11:51.190 "6c1faa8f-75a7-4a12-9dc6-d593e24a2501" 00:11:51.190 ], 00:11:51.190 "product_name": "Malloc disk", 00:11:51.190 "block_size": 512, 00:11:51.190 "num_blocks": 1048576, 00:11:51.190 "uuid": "6c1faa8f-75a7-4a12-9dc6-d593e24a2501", 00:11:51.190 "assigned_rate_limits": { 00:11:51.190 "rw_ios_per_sec": 0, 00:11:51.190 "rw_mbytes_per_sec": 0, 00:11:51.190 "r_mbytes_per_sec": 0, 00:11:51.190 "w_mbytes_per_sec": 0 00:11:51.190 }, 00:11:51.190 "claimed": true, 00:11:51.190 "claim_type": "exclusive_write", 00:11:51.190 "zoned": false, 00:11:51.190 "supported_io_types": { 00:11:51.190 "read": true, 00:11:51.190 "write": true, 00:11:51.190 "unmap": true, 00:11:51.190 "flush": true, 00:11:51.190 "reset": true, 00:11:51.190 "nvme_admin": false, 00:11:51.190 "nvme_io": false, 00:11:51.190 "nvme_io_md": false, 00:11:51.190 "write_zeroes": true, 00:11:51.190 "zcopy": true, 00:11:51.190 "get_zone_info": false, 00:11:51.190 "zone_management": false, 00:11:51.190 "zone_append": false, 00:11:51.190 "compare": false, 00:11:51.190 "compare_and_write": false, 00:11:51.190 "abort": true, 00:11:51.190 "seek_hole": false, 00:11:51.190 "seek_data": false, 00:11:51.190 "copy": true, 00:11:51.190 "nvme_iov_md": false 00:11:51.190 }, 00:11:51.190 "memory_domains": [ 00:11:51.190 { 00:11:51.190 "dma_device_id": "system", 00:11:51.190 "dma_device_type": 1 00:11:51.190 }, 
00:11:51.190 { 00:11:51.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.190 "dma_device_type": 2 00:11:51.190 } 00:11:51.190 ], 00:11:51.190 "driver_specific": {} 00:11:51.190 } 00:11:51.190 ]' 00:11:51.190 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:51.190 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:51.190 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:51.190 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:51.191 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:51.191 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:51.191 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:51.191 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:53.098 15:29:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:53.098 15:29:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:53.098 15:29:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 
nvme_devices=0 00:11:53.098 15:29:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:53.098 15:29:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:55.007 15:29:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:55.007 15:29:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:55.007 15:29:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:55.007 15:29:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:55.007 15:29:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:55.007 15:29:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:55.007 15:29:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:55.007 15:29:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:55.007 15:29:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:55.007 15:29:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:55.007 15:29:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:55.007 15:29:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:55.007 15:29:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:55.007 15:29:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:55.007 15:29:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:55.007 15:29:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:55.007 15:29:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:55.267 15:29:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:55.526 15:29:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:56.907 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:56.907 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:56.907 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:56.907 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:56.907 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:56.907 ************************************ 00:11:56.907 START TEST 
filesystem_in_capsule_ext4 00:11:56.907 ************************************ 00:11:56.907 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:56.907 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:56.907 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:56.907 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:56.907 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:56.907 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:56.907 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:56.907 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:56.907 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:56.907 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:56.907 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 
-F /dev/nvme0n1p1 00:11:56.907 mke2fs 1.47.0 (5-Feb-2023) 00:11:56.907 Discarding device blocks: 0/522240 done 00:11:56.907 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:56.907 Filesystem UUID: c1bf6622-1cfd-48af-ad3c-2f7421ad3677 00:11:56.907 Superblock backups stored on blocks: 00:11:56.907 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:56.907 00:11:56.907 Allocating group tables: 0/64 done 00:11:56.907 Writing inode tables: 0/64 done 00:11:56.907 Creating journal (8192 blocks): done 00:11:59.114 Writing superblocks and filesystem accounting information: 0/64 8/64 done 00:11:59.114 00:11:59.114 15:29:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:59.114 15:29:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:05.686 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:05.686 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:05.686 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:05.686 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:05.686 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:05.686 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:05.686 15:29:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3003994 00:12:05.686 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:05.686 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:05.686 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:05.686 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:05.686 00:12:05.686 real 0m8.496s 00:12:05.686 user 0m0.028s 00:12:05.686 sys 0m0.082s 00:12:05.686 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:05.686 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:05.686 ************************************ 00:12:05.686 END TEST filesystem_in_capsule_ext4 00:12:05.686 ************************************ 00:12:05.686 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:05.686 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:05.686 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:05.686 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:12:05.686 ************************************ 00:12:05.686 START TEST filesystem_in_capsule_btrfs 00:12:05.686 ************************************ 00:12:05.686 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:05.686 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:05.686 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:05.686 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:05.686 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:12:05.687 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:05.687 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:12:05.687 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:12:05.687 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:12:05.687 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:12:05.687 15:29:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:05.687 btrfs-progs v6.8.1 00:12:05.687 See https://btrfs.readthedocs.io for more information. 00:12:05.687 00:12:05.687 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:05.687 NOTE: several default settings have changed in version 5.15, please make sure 00:12:05.687 this does not affect your deployments: 00:12:05.687 - DUP for metadata (-m dup) 00:12:05.687 - enabled no-holes (-O no-holes) 00:12:05.687 - enabled free-space-tree (-R free-space-tree) 00:12:05.687 00:12:05.687 Label: (null) 00:12:05.687 UUID: 40496f45-be2f-4222-a7fa-94d5ea7e4d01 00:12:05.687 Node size: 16384 00:12:05.687 Sector size: 4096 (CPU page size: 4096) 00:12:05.687 Filesystem size: 510.00MiB 00:12:05.687 Block group profiles: 00:12:05.687 Data: single 8.00MiB 00:12:05.687 Metadata: DUP 32.00MiB 00:12:05.687 System: DUP 8.00MiB 00:12:05.687 SSD detected: yes 00:12:05.687 Zoned device: no 00:12:05.687 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:05.687 Checksum: crc32c 00:12:05.687 Number of devices: 1 00:12:05.687 Devices: 00:12:05.687 ID SIZE PATH 00:12:05.687 1 510.00MiB /dev/nvme0n1p1 00:12:05.687 00:12:05.687 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:12:05.687 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:05.946 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:05.946 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:05.946 
15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:05.946 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:05.946 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:05.946 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:06.206 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3003994 00:12:06.206 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:06.206 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:06.206 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:06.206 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:06.206 00:12:06.206 real 0m0.880s 00:12:06.206 user 0m0.033s 00:12:06.206 sys 0m0.119s 00:12:06.206 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:06.206 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:06.206 ************************************ 00:12:06.206 END TEST 
filesystem_in_capsule_btrfs 00:12:06.206 ************************************ 00:12:06.206 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:06.206 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:06.206 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:06.206 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:06.206 ************************************ 00:12:06.206 START TEST filesystem_in_capsule_xfs 00:12:06.206 ************************************ 00:12:06.206 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:12:06.206 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:06.206 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:06.206 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:06.206 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:12:06.206 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:06.206 15:29:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:12:06.206 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:12:06.206 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:12:06.207 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:12:06.207 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:06.207 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:06.207 = sectsz=512 attr=2, projid32bit=1 00:12:06.207 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:06.207 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:06.207 data = bsize=4096 blocks=130560, imaxpct=25 00:12:06.207 = sunit=0 swidth=0 blks 00:12:06.207 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:06.207 log =internal log bsize=4096 blocks=16384, version=2 00:12:06.207 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:06.207 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:07.146 Discarding blocks...Done. 
00:12:07.146 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:12:07.146 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:09.695 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:09.695 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:09.695 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:09.695 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:09.695 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:09.695 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:09.695 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3003994 00:12:09.695 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:09.695 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:09.695 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:12:09.695 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:09.695 00:12:09.695 real 0m3.231s 00:12:09.695 user 0m0.029s 00:12:09.695 sys 0m0.079s 00:12:09.695 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:09.695 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:09.695 ************************************ 00:12:09.695 END TEST filesystem_in_capsule_xfs 00:12:09.695 ************************************ 00:12:09.695 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:09.695 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:09.695 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:09.695 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.695 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:09.695 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:12:09.695 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:09.695 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:09.695 15:29:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:09.695 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:09.695 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:12:09.695 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:09.695 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.695 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:09.695 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.695 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:09.695 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3003994 00:12:09.695 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 3003994 ']' 00:12:09.695 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 3003994 00:12:09.695 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:12:09.695 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:09.695 15:29:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3003994 00:12:09.695 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:09.695 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:09.696 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3003994' 00:12:09.696 killing process with pid 3003994 00:12:09.696 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 3003994 00:12:09.696 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 3003994 00:12:09.959 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:09.959 00:12:09.959 real 0m19.888s 00:12:09.959 user 1m18.734s 00:12:09.959 sys 0m1.421s 00:12:09.959 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:09.959 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:09.959 ************************************ 00:12:09.959 END TEST nvmf_filesystem_in_capsule 00:12:09.959 ************************************ 00:12:09.959 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:09.959 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@512 -- # nvmfcleanup 00:12:09.959 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:09.959 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:09.959 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:09.959 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:09.959 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:09.959 rmmod nvme_tcp 00:12:10.220 rmmod nvme_fabrics 00:12:10.220 rmmod nvme_keyring 00:12:10.220 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:10.220 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:10.220 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:10.220 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:12:10.220 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:12:10.220 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:12:10.220 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:12:10.220 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:10.220 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@787 -- # iptables-save 00:12:10.220 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:12:10.220 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@787 -- # iptables-restore 00:12:10.220 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:10.220 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:10.220 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@652 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:10.220 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:10.220 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.132 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:12.132 00:12:12.132 real 0m49.801s 00:12:12.132 user 2m37.207s 00:12:12.132 sys 0m9.036s 00:12:12.132 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:12.132 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:12.132 ************************************ 00:12:12.132 END TEST nvmf_filesystem 00:12:12.132 ************************************ 00:12:12.392 15:29:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:12.392 15:29:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:12.392 15:29:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:12.392 15:29:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:12.392 ************************************ 00:12:12.392 START TEST nvmf_target_discovery 00:12:12.392 ************************************ 00:12:12.392 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:12.392 * Looking for test storage... 
00:12:12.392 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:12.392 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:12.392 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:12:12.392 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:12.392 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:12.392 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:12.392 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:12.392 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:12.392 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:12.392 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:12.392 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:12.392 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:12.392 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:12.392 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:12.392 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:12.392 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:12.392 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:12.392 
15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:12.392 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:12.392 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:12.392 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:12.392 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:12.392 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:12.392 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:12.392 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:12.392 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:12.392 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:12.392 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:12.392 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:12.392 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:12.392 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:12.392 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:12.392 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:12.392 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:12:12.392 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:12.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.392 --rc genhtml_branch_coverage=1 00:12:12.392 --rc genhtml_function_coverage=1 00:12:12.392 --rc genhtml_legend=1 00:12:12.392 --rc geninfo_all_blocks=1 00:12:12.392 --rc geninfo_unexecuted_blocks=1 00:12:12.392 00:12:12.392 ' 00:12:12.392 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:12.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.392 --rc genhtml_branch_coverage=1 00:12:12.392 --rc genhtml_function_coverage=1 00:12:12.392 --rc genhtml_legend=1 00:12:12.392 --rc geninfo_all_blocks=1 00:12:12.392 --rc geninfo_unexecuted_blocks=1 00:12:12.392 00:12:12.392 ' 00:12:12.392 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:12.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.392 --rc genhtml_branch_coverage=1 00:12:12.392 --rc genhtml_function_coverage=1 00:12:12.392 --rc genhtml_legend=1 00:12:12.392 --rc geninfo_all_blocks=1 00:12:12.392 --rc geninfo_unexecuted_blocks=1 00:12:12.393 00:12:12.393 ' 00:12:12.393 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:12.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.393 --rc genhtml_branch_coverage=1 00:12:12.393 --rc genhtml_function_coverage=1 00:12:12.393 --rc genhtml_legend=1 00:12:12.393 --rc geninfo_all_blocks=1 00:12:12.393 --rc geninfo_unexecuted_blocks=1 00:12:12.393 00:12:12.393 ' 00:12:12.393 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:12.393 15:29:51 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:12.393 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:12.393 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:12.393 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:12.393 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:12.393 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:12.393 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:12.393 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:12.653 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:12.653 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:12.653 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:12.653 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:12.653 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:12.653 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:12.653 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:12.653 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:12:12.653 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:12.653 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:12.653 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:12.653 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:12.653 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:12.653 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:12.653 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.653 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.653 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.654 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:12.654 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.654 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:12.654 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:12.654 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:12.654 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:12.654 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:12.654 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:12.654 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:12.654 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:12.654 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:12.654 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:12.654 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:12.654 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:12:12.654 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:12.654 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:12.654 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:12.654 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:12.654 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:12:12.654 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:12.654 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # prepare_net_devs 00:12:12.654 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@434 -- # local -g is_hw=no 00:12:12.654 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # remove_spdk_ns 00:12:12.654 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:12.654 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:12.654 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.654 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:12:12.654 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:12:12.654 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:12.654 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.794 15:29:59 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:20.794 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:20.794 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:20.794 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:20.794 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:20.794 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:20.794 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:20.794 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:20.794 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:20.794 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:20.794 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:20.794 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:20.794 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:20.794 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:20.794 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:20.794 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:20.794 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:20.794 15:29:59 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:20.794 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:20.794 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:20.794 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:20.794 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:20.794 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:20.794 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:20.794 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:20.794 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:20.794 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:12:20.794 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:12:20.794 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:12:20.794 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:12:20.794 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:12:20.794 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:12:20.794 15:29:59 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:20.794 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:20.794 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:20.794 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:20.794 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:20.794 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:20.794 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:20.794 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:20.794 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:20.794 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:20.794 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:20.794 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:20.794 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:20.794 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:20.794 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:20.794 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:20.794 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:12:20.794 15:29:59 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:12:20.794 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:12:20.794 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:20.794 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:20.794 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:20.794 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:20.794 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:20.794 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:20.794 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:20.795 Found net devices under 0000:31:00.0: cvl_0_0 00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:20.795 15:29:59 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:20.795 Found net devices under 0000:31:00.1: cvl_0_1 00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # is_hw=yes 00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:20.795 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:20.795 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.615 ms 00:12:20.795 00:12:20.795 --- 10.0.0.2 ping statistics --- 00:12:20.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.795 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms 00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:20.795 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:20.795 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:12:20.795 00:12:20.795 --- 10.0.0.1 ping statistics --- 00:12:20.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.795 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # return 0 00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:12:20.795 15:29:59 
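[Editor's note] The nvmf_tcp_init sequence traced above (flush addresses, create a namespace, move the target NIC into it, assign 10.0.0.1/10.0.0.2, bring links up, open port 4420, verify with ping) can be summarized as the sketch below. Interface names, addresses, and the namespace name are taken from the log; `run` is a hypothetical dry-run wrapper that only echoes each command, so the sketch can be inspected without root privileges or the real NICs.

```shell
# Dry-run sketch of the netns plumbing from nvmf/common.sh's nvmf_tcp_init,
# as seen in the trace above. "run" echoes instead of executing.
run() { echo "+ $*"; }

TARGET_IF=cvl_0_0        # moved into the target namespace
INITIATOR_IF=cvl_0_1     # stays in the root namespace
NS=cvl_0_0_ns_spdk

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2    # initiator -> target reachability check
```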
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # nvmfpid=3012317 00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # waitforlisten 3012317 00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 3012317 ']' 00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:20.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:20.795 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.795 [2024-10-01 15:29:59.667946] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:12:20.795 [2024-10-01 15:29:59.668009] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:20.796 [2024-10-01 15:29:59.713215] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:20.796 [2024-10-01 15:29:59.738309] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:20.796 [2024-10-01 15:29:59.783767] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:20.796 [2024-10-01 15:29:59.783818] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:20.796 [2024-10-01 15:29:59.783825] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:20.796 [2024-10-01 15:29:59.783830] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:20.796 [2024-10-01 15:29:59.783835] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:20.796 [2024-10-01 15:29:59.784005] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:20.796 [2024-10-01 15:29:59.784379] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:20.796 [2024-10-01 15:29:59.784576] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:20.796 [2024-10-01 15:29:59.784577] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.796 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:20.796 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:12:20.796 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:12:20.796 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:20.796 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.796 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:20.796 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:20.796 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.796 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.796 [2024-10-01 15:29:59.929578] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:20.796 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.796 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:20.796 15:29:59 
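[Editor's note] The `seq 1 4` loop that follows in the trace creates four null bdevs, wraps each in its own subsystem, and attaches every subsystem to the same TCP portal. A condensed sketch of that loop is below; `rpc` is a hypothetical wrapper that only echoes the RPC it would issue (in the real run this is SPDK's `rpc.py` against the target in the namespace), so it runs without a live target.

```shell
# Sketch of the target/discovery.sh setup loop traced below.
# "rpc" echoes the RPC call instead of invoking SPDK's rpc.py.
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192
for i in 1 2 3 4; do
  rpc bdev_null_create "Null$i" 102400 512                       # 100 MiB null bdev, 512 B blocks
  rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
  rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
  rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430      # extra referral entry for discovery
```

This yields the six discovery-log records reported later: the current discovery subsystem, cnode1 through cnode4, and the 4430 referral.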
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:20.796 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:20.796 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.796 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.796 Null1 00:12:20.796 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.796 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:20.796 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.796 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.796 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.796 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:20.796 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.796 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.796 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.796 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:20.796 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:20.796 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.796 [2024-10-01 15:29:59.990141] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:20.796 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.796 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:20.796 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:20.796 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.796 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.796 Null2 00:12:20.796 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.796 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:20.796 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.796 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.796 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.796 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:20.796 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.796 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.796 
15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.796 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:20.796 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.796 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.796 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.796 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:20.796 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:20.796 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.796 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.796 Null3 00:12:20.796 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.796 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:20.796 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.796 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.796 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.796 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode3 Null3 00:12:20.796 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.796 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.796 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.796 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:20.796 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.796 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.796 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.796 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:20.796 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:20.796 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.796 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.796 Null4 00:12:20.796 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.796 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:20.796 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.796 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:12:20.796 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.796 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:20.797 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.797 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.797 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.797 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:20.797 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.797 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.797 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.797 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:20.797 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.797 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.797 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.797 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:20.797 15:30:00 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.797 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.797 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.797 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:12:21.059 00:12:21.059 Discovery Log Number of Records 6, Generation counter 6 00:12:21.059 =====Discovery Log Entry 0====== 00:12:21.059 trtype: tcp 00:12:21.059 adrfam: ipv4 00:12:21.059 subtype: current discovery subsystem 00:12:21.059 treq: not required 00:12:21.059 portid: 0 00:12:21.059 trsvcid: 4420 00:12:21.059 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:21.059 traddr: 10.0.0.2 00:12:21.059 eflags: explicit discovery connections, duplicate discovery information 00:12:21.059 sectype: none 00:12:21.059 =====Discovery Log Entry 1====== 00:12:21.059 trtype: tcp 00:12:21.059 adrfam: ipv4 00:12:21.059 subtype: nvme subsystem 00:12:21.059 treq: not required 00:12:21.059 portid: 0 00:12:21.059 trsvcid: 4420 00:12:21.059 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:21.059 traddr: 10.0.0.2 00:12:21.059 eflags: none 00:12:21.059 sectype: none 00:12:21.059 =====Discovery Log Entry 2====== 00:12:21.059 trtype: tcp 00:12:21.059 adrfam: ipv4 00:12:21.059 subtype: nvme subsystem 00:12:21.059 treq: not required 00:12:21.059 portid: 0 00:12:21.059 trsvcid: 4420 00:12:21.059 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:21.059 traddr: 10.0.0.2 00:12:21.059 eflags: none 00:12:21.059 sectype: none 00:12:21.059 =====Discovery Log Entry 3====== 00:12:21.059 trtype: tcp 00:12:21.059 adrfam: ipv4 00:12:21.059 subtype: nvme subsystem 00:12:21.059 treq: not required 00:12:21.059 portid: 
0 00:12:21.059 trsvcid: 4420 00:12:21.059 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:21.059 traddr: 10.0.0.2 00:12:21.059 eflags: none 00:12:21.059 sectype: none 00:12:21.059 =====Discovery Log Entry 4====== 00:12:21.059 trtype: tcp 00:12:21.059 adrfam: ipv4 00:12:21.059 subtype: nvme subsystem 00:12:21.059 treq: not required 00:12:21.059 portid: 0 00:12:21.059 trsvcid: 4420 00:12:21.059 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:21.059 traddr: 10.0.0.2 00:12:21.059 eflags: none 00:12:21.059 sectype: none 00:12:21.059 =====Discovery Log Entry 5====== 00:12:21.059 trtype: tcp 00:12:21.059 adrfam: ipv4 00:12:21.059 subtype: discovery subsystem referral 00:12:21.059 treq: not required 00:12:21.059 portid: 0 00:12:21.059 trsvcid: 4430 00:12:21.059 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:21.059 traddr: 10.0.0.2 00:12:21.059 eflags: none 00:12:21.059 sectype: none 00:12:21.059 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:21.059 Perform nvmf subsystem discovery via RPC 00:12:21.060 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:21.060 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.060 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:21.060 [ 00:12:21.060 { 00:12:21.060 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:21.060 "subtype": "Discovery", 00:12:21.060 "listen_addresses": [ 00:12:21.060 { 00:12:21.060 "trtype": "TCP", 00:12:21.060 "adrfam": "IPv4", 00:12:21.060 "traddr": "10.0.0.2", 00:12:21.060 "trsvcid": "4420" 00:12:21.060 } 00:12:21.060 ], 00:12:21.060 "allow_any_host": true, 00:12:21.060 "hosts": [] 00:12:21.060 }, 00:12:21.060 { 00:12:21.060 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:21.060 "subtype": "NVMe", 00:12:21.060 "listen_addresses": [ 
00:12:21.060 { 00:12:21.060 "trtype": "TCP", 00:12:21.060 "adrfam": "IPv4", 00:12:21.060 "traddr": "10.0.0.2", 00:12:21.060 "trsvcid": "4420" 00:12:21.060 } 00:12:21.060 ], 00:12:21.060 "allow_any_host": true, 00:12:21.060 "hosts": [], 00:12:21.060 "serial_number": "SPDK00000000000001", 00:12:21.060 "model_number": "SPDK bdev Controller", 00:12:21.060 "max_namespaces": 32, 00:12:21.060 "min_cntlid": 1, 00:12:21.060 "max_cntlid": 65519, 00:12:21.060 "namespaces": [ 00:12:21.060 { 00:12:21.060 "nsid": 1, 00:12:21.060 "bdev_name": "Null1", 00:12:21.060 "name": "Null1", 00:12:21.060 "nguid": "21563AF9C3994EDF883AC25E272E2636", 00:12:21.060 "uuid": "21563af9-c399-4edf-883a-c25e272e2636" 00:12:21.060 } 00:12:21.060 ] 00:12:21.060 }, 00:12:21.060 { 00:12:21.060 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:21.060 "subtype": "NVMe", 00:12:21.060 "listen_addresses": [ 00:12:21.060 { 00:12:21.060 "trtype": "TCP", 00:12:21.060 "adrfam": "IPv4", 00:12:21.060 "traddr": "10.0.0.2", 00:12:21.060 "trsvcid": "4420" 00:12:21.060 } 00:12:21.060 ], 00:12:21.060 "allow_any_host": true, 00:12:21.060 "hosts": [], 00:12:21.060 "serial_number": "SPDK00000000000002", 00:12:21.060 "model_number": "SPDK bdev Controller", 00:12:21.060 "max_namespaces": 32, 00:12:21.060 "min_cntlid": 1, 00:12:21.060 "max_cntlid": 65519, 00:12:21.060 "namespaces": [ 00:12:21.060 { 00:12:21.060 "nsid": 1, 00:12:21.060 "bdev_name": "Null2", 00:12:21.060 "name": "Null2", 00:12:21.060 "nguid": "6B64B704FF2343029947D8CEF05025EB", 00:12:21.060 "uuid": "6b64b704-ff23-4302-9947-d8cef05025eb" 00:12:21.060 } 00:12:21.060 ] 00:12:21.060 }, 00:12:21.060 { 00:12:21.060 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:21.060 "subtype": "NVMe", 00:12:21.060 "listen_addresses": [ 00:12:21.060 { 00:12:21.060 "trtype": "TCP", 00:12:21.060 "adrfam": "IPv4", 00:12:21.060 "traddr": "10.0.0.2", 00:12:21.060 "trsvcid": "4420" 00:12:21.060 } 00:12:21.060 ], 00:12:21.060 "allow_any_host": true, 00:12:21.060 "hosts": [], 00:12:21.060 
"serial_number": "SPDK00000000000003", 00:12:21.060 "model_number": "SPDK bdev Controller", 00:12:21.060 "max_namespaces": 32, 00:12:21.060 "min_cntlid": 1, 00:12:21.060 "max_cntlid": 65519, 00:12:21.060 "namespaces": [ 00:12:21.060 { 00:12:21.060 "nsid": 1, 00:12:21.060 "bdev_name": "Null3", 00:12:21.060 "name": "Null3", 00:12:21.060 "nguid": "FDFCD99EB3644A32AFA089F02B8CF98F", 00:12:21.060 "uuid": "fdfcd99e-b364-4a32-afa0-89f02b8cf98f" 00:12:21.060 } 00:12:21.060 ] 00:12:21.060 }, 00:12:21.060 { 00:12:21.060 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:21.060 "subtype": "NVMe", 00:12:21.060 "listen_addresses": [ 00:12:21.060 { 00:12:21.060 "trtype": "TCP", 00:12:21.060 "adrfam": "IPv4", 00:12:21.060 "traddr": "10.0.0.2", 00:12:21.060 "trsvcid": "4420" 00:12:21.060 } 00:12:21.060 ], 00:12:21.060 "allow_any_host": true, 00:12:21.060 "hosts": [], 00:12:21.060 "serial_number": "SPDK00000000000004", 00:12:21.060 "model_number": "SPDK bdev Controller", 00:12:21.060 "max_namespaces": 32, 00:12:21.060 "min_cntlid": 1, 00:12:21.060 "max_cntlid": 65519, 00:12:21.060 "namespaces": [ 00:12:21.060 { 00:12:21.060 "nsid": 1, 00:12:21.060 "bdev_name": "Null4", 00:12:21.060 "name": "Null4", 00:12:21.060 "nguid": "9FBDD43A141741439FA2C82EEC3DE6D0", 00:12:21.060 "uuid": "9fbdd43a-1417-4143-9fa2-c82eec3de6d0" 00:12:21.060 } 00:12:21.060 ] 00:12:21.060 } 00:12:21.060 ] 00:12:21.060 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.060 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:21.060 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:21.060 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:21.060 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:21.060 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:21.060 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.060 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:21.060 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.060 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:21.060 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.060 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:21.060 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:21.060 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.060 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:21.060 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.060 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:21.060 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.060 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:21.060 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.060 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 
1 4) 00:12:21.060 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:21.060 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.060 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:21.060 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.060 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:21.060 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.060 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:21.060 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.060 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:21.060 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:21.060 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.060 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:21.060 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.060 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:21.060 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.060 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:12:21.060 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.060 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:21.323 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.323 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:21.323 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.323 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:21.323 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:21.323 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.323 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:21.323 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.323 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:21.323 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:21.323 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:21.323 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:21.323 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # nvmfcleanup 00:12:21.323 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:21.323 
15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:21.323 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:21.323 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:21.323 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:21.323 rmmod nvme_tcp 00:12:21.323 rmmod nvme_fabrics 00:12:21.323 rmmod nvme_keyring 00:12:21.323 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:21.323 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:21.323 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:21.323 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@513 -- # '[' -n 3012317 ']' 00:12:21.323 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # killprocess 3012317 00:12:21.323 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 3012317 ']' 00:12:21.323 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 3012317 00:12:21.323 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:12:21.323 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:21.323 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3012317 00:12:21.323 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:21.323 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:12:21.323 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3012317' 00:12:21.323 killing process with pid 3012317 00:12:21.323 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 3012317 00:12:21.323 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 3012317 00:12:21.585 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:12:21.585 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:12:21.585 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:12:21.585 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:21.585 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@787 -- # iptables-save 00:12:21.585 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:12:21.585 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@787 -- # iptables-restore 00:12:21.585 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:21.585 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:21.585 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:21.585 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:21.585 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.131 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:24.131 00:12:24.131 real 0m11.322s 00:12:24.131 user 0m6.693s 00:12:24.131 sys 0m6.156s 00:12:24.131 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:24.131 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.131 ************************************ 00:12:24.131 END TEST nvmf_target_discovery 00:12:24.131 ************************************ 00:12:24.131 15:30:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:24.131 15:30:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:24.131 15:30:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:24.131 15:30:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:24.131 ************************************ 00:12:24.131 START TEST nvmf_referrals 00:12:24.131 ************************************ 00:12:24.131 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:24.131 * Looking for test storage... 
00:12:24.131 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:24.131 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:24.131 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lcov --version 00:12:24.131 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:24.131 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:24.131 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:24.131 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:24.131 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:24.131 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:24.131 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:24.131 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:24.131 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:24.131 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:24.131 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:24.131 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:24.131 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:24.131 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:24.131 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:24.131 15:30:03 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:24.131 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:24.131 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:24.131 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:24.131 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:24.131 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:24.131 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:24.131 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:24.131 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:24.131 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:24.131 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:24.131 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:24.131 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:24.131 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:24.131 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:24.131 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:24.131 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:24.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.131 
--rc genhtml_branch_coverage=1 00:12:24.131 --rc genhtml_function_coverage=1 00:12:24.131 --rc genhtml_legend=1 00:12:24.131 --rc geninfo_all_blocks=1 00:12:24.131 --rc geninfo_unexecuted_blocks=1 00:12:24.131 00:12:24.131 ' 00:12:24.131 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:24.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.131 --rc genhtml_branch_coverage=1 00:12:24.131 --rc genhtml_function_coverage=1 00:12:24.131 --rc genhtml_legend=1 00:12:24.131 --rc geninfo_all_blocks=1 00:12:24.131 --rc geninfo_unexecuted_blocks=1 00:12:24.131 00:12:24.131 ' 00:12:24.131 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:24.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.131 --rc genhtml_branch_coverage=1 00:12:24.131 --rc genhtml_function_coverage=1 00:12:24.131 --rc genhtml_legend=1 00:12:24.131 --rc geninfo_all_blocks=1 00:12:24.131 --rc geninfo_unexecuted_blocks=1 00:12:24.131 00:12:24.131 ' 00:12:24.131 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:24.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.131 --rc genhtml_branch_coverage=1 00:12:24.131 --rc genhtml_function_coverage=1 00:12:24.131 --rc genhtml_legend=1 00:12:24.131 --rc geninfo_all_blocks=1 00:12:24.131 --rc geninfo_unexecuted_blocks=1 00:12:24.131 00:12:24.131 ' 00:12:24.131 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:24.131 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:12:24.131 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:24.131 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:24.131 
15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:24.131 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:24.131 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:24.131 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:24.131 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:24.131 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:24.131 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:24.132 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:24.132 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:24.132 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:24.132 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:24.132 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:24.132 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:24.132 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:24.132 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:24.132 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:12:24.132 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:24.132 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:24.132 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:24.132 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.132 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.132 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.132 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:24.132 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.132 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:24.132 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:24.132 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:24.132 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:24.132 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:24.132 15:30:03 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:24.132 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:24.132 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:24.132 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:24.132 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:24.132 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:24.132 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:24.132 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:12:24.132 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:24.132 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:24.132 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:24.132 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:24.132 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:24.132 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:12:24.132 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:24.132 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # prepare_net_devs 00:12:24.132 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@434 -- # local -g is_hw=no 00:12:24.132 15:30:03 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # remove_spdk_ns 00:12:24.132 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:24.132 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:24.132 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.132 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:12:24.132 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:12:24.132 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:24.132 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:32.274 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:32.274 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:32.274 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:32.275 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:32.275 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # [[ ice == unknown 
]] 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:32.275 Found net devices under 0000:31:00.0: cvl_0_0 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:32.275 15:30:10 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:32.275 Found net devices under 0000:31:00.1: cvl_0_1 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # is_hw=yes 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 
-- # NVMF_INITIATOR_IP=10.0.0.1 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip 
netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:32.275 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:32.275 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:32.275 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.686 ms 00:12:32.275 00:12:32.275 --- 10.0.0.2 ping statistics --- 00:12:32.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.275 rtt min/avg/max/mdev = 0.686/0.686/0.686/0.000 ms 00:12:32.275 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:32.275 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:32.275 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:12:32.275 00:12:32.275 --- 10.0.0.1 ping statistics --- 00:12:32.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.275 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:12:32.275 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:32.275 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # return 0 00:12:32.275 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:12:32.275 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:32.275 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:12:32.275 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:12:32.275 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:32.276 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:12:32.276 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:12:32.276 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:32.276 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:12:32.276 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:32.276 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:32.276 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # nvmfpid=3017537 00:12:32.276 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # waitforlisten 3017537 00:12:32.276 
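The trace above builds the test topology: move one port of the NIC into a private network namespace, assign 10.0.0.1/24 to the initiator-side interface and 10.0.0.2/24 to the target-side interface inside the namespace, bring both up, open TCP/4420 in iptables, and verify reachability with ping in both directions. The shape of that sequence can be sketched as a dry-run helper; this is not part of the test suite, the function name is made up, and it echoes the `ip` commands instead of executing them (so no root is needed):

```shell
# Hypothetical dry-run sketch of the namespace setup seen in the log.
# Interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addressing are
# taken from the trace; echo replaces execution.
setup_nvmf_tcp_ns() {
    local ns=$1 tgt_if=$2 ini_if=$3
    echo ip netns add "$ns"                                    # target gets its own netns
    echo ip link set "$tgt_if" netns "$ns"                     # move target port into it
    echo ip addr add 10.0.0.1/24 dev "$ini_if"                 # initiator IP (host side)
    echo ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"  # target IP (ns side)
    echo ip link set "$ini_if" up
    echo ip netns exec "$ns" ip link set "$tgt_if" up
    echo ip netns exec "$ns" ip link set lo up
}

setup_nvmf_tcp_ns cvl_0_0_ns_spdk cvl_0_0 cvl_0_1
```

Putting the target in a namespace is what lets a single machine act as both NVMe/TCP initiator and target over a real NIC loopback path.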
15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:32.276 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 3017537 ']' 00:12:32.276 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.276 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:32.276 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.276 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:32.276 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:32.276 [2024-10-01 15:30:11.128985] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:12:32.276 [2024-10-01 15:30:11.129055] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:32.276 [2024-10-01 15:30:11.170663] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:32.276 [2024-10-01 15:30:11.218162] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:32.276 [2024-10-01 15:30:11.266001] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:32.276 [2024-10-01 15:30:11.266051] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:32.276 [2024-10-01 15:30:11.266063] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:32.276 [2024-10-01 15:30:11.266071] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:32.276 [2024-10-01 15:30:11.266077] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:32.276 [2024-10-01 15:30:11.266253] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:32.276 [2024-10-01 15:30:11.266408] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:32.276 [2024-10-01 15:30:11.266562] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.276 [2024-10-01 15:30:11.266563] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:32.536 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:32.536 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:12:32.536 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:12:32.536 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:32.536 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:32.797 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:32.797 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:32.797 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.797 
15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:32.797 [2024-10-01 15:30:12.008821] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:32.797 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.797 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:32.798 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.798 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:32.798 [2024-10-01 15:30:12.025174] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:32.798 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.798 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:32.798 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.798 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:32.798 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.798 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:32.798 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.798 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:32.798 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.798 15:30:12 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:32.798 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.798 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:32.798 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.798 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:32.798 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:32.798 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.798 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:32.798 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.798 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:32.798 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:32.798 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:32.798 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:32.798 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:32.798 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.798 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:32.798 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:32.798 15:30:12 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.798 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:32.798 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:32.798 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:32.798 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:32.798 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:32.798 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:32.798 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:32.798 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:33.060 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:33.060 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:33.060 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:33.060 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.060 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:12:33.060 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.060 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:33.060 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.060 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:33.060 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.060 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:33.060 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.060 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:33.060 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.060 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:33.060 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.060 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:33.060 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:33.060 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.060 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:33.060 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:33.060 15:30:12 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:33.060 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:33.060 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:33.060 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:33.060 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:33.322 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:33.322 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:33.322 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:33.322 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.322 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:33.322 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.322 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:33.322 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.322 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:33.322 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:12:33.322 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:33.322 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:33.322 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:33.322 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.322 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:33.322 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:33.322 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:33.322 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.322 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:33.322 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:33.322 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:33.322 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:33.322 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:33.322 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:33.322 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery 
subsystem").traddr' 00:12:33.322 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:33.583 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:33.583 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:33.583 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:33.583 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:33.583 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:33.583 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:33.583 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:33.843 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:33.843 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:33.843 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:33.843 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:33.843 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
--hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:33.843 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:34.103 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:34.103 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:34.104 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.104 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:34.104 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.104 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:34.104 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:34.104 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:34.104 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:34.104 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.104 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:34.104 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:34.104 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.104 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 00:12:34.104 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:34.104 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:34.104 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:34.104 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:34.104 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:34.104 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:34.104 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:34.104 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:34.104 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:34.364 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:34.364 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:34.364 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:34.364 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:34.364 15:30:13 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:34.364 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:34.364 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:34.364 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:34.364 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:34.364 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:34.364 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:34.624 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:34.624 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:34.624 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.624 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:34.624 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.624 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:34.624 15:30:13 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:34.624 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.624 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:34.624 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.624 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:34.624 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:34.624 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:34.624 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:34.624 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:34.624 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:34.624 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:34.884 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:34.884 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:34.884 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:34.884 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:34.884 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # nvmfcleanup 00:12:34.884 15:30:14 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:34.884 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:34.884 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:34.884 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:34.884 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:34.884 rmmod nvme_tcp 00:12:34.884 rmmod nvme_fabrics 00:12:34.884 rmmod nvme_keyring 00:12:34.884 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:34.885 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:34.885 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:34.885 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@513 -- # '[' -n 3017537 ']' 00:12:34.885 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # killprocess 3017537 00:12:34.885 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 3017537 ']' 00:12:34.885 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 3017537 00:12:34.885 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:12:34.885 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:34.885 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3017537 00:12:35.144 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:35.144 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 
00:12:35.144 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3017537' 00:12:35.144 killing process with pid 3017537 00:12:35.144 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 3017537 00:12:35.144 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 3017537 00:12:35.144 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:12:35.144 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:12:35.144 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:12:35.144 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:35.144 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@787 -- # iptables-save 00:12:35.144 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:12:35.144 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@787 -- # iptables-restore 00:12:35.144 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:35.144 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:35.144 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.144 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:35.144 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.690 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:37.690 00:12:37.690 real 0m13.522s 00:12:37.690 user 0m16.055s 00:12:37.690 
sys 0m6.722s 00:12:37.690 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:37.690 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:37.690 ************************************ 00:12:37.690 END TEST nvmf_referrals 00:12:37.690 ************************************ 00:12:37.690 15:30:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:37.690 15:30:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:37.690 15:30:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:37.690 15:30:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:37.690 ************************************ 00:12:37.690 START TEST nvmf_connect_disconnect 00:12:37.690 ************************************ 00:12:37.690 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:37.690 * Looking for test storage... 
00:12:37.690 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:37.690 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:37.690 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:12:37.690 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:37.690 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:37.690 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:37.690 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:37.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.691 --rc genhtml_branch_coverage=1 00:12:37.691 --rc genhtml_function_coverage=1 00:12:37.691 --rc genhtml_legend=1 00:12:37.691 --rc geninfo_all_blocks=1 00:12:37.691 --rc geninfo_unexecuted_blocks=1 00:12:37.691 00:12:37.691 ' 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:37.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.691 --rc genhtml_branch_coverage=1 00:12:37.691 --rc genhtml_function_coverage=1 00:12:37.691 --rc genhtml_legend=1 00:12:37.691 --rc geninfo_all_blocks=1 00:12:37.691 --rc geninfo_unexecuted_blocks=1 00:12:37.691 00:12:37.691 ' 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:37.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.691 --rc genhtml_branch_coverage=1 00:12:37.691 --rc genhtml_function_coverage=1 00:12:37.691 --rc genhtml_legend=1 00:12:37.691 --rc geninfo_all_blocks=1 00:12:37.691 --rc geninfo_unexecuted_blocks=1 00:12:37.691 00:12:37.691 ' 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:37.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.691 --rc genhtml_branch_coverage=1 00:12:37.691 --rc genhtml_function_coverage=1 00:12:37.691 --rc genhtml_legend=1 00:12:37.691 --rc geninfo_all_blocks=1 00:12:37.691 --rc geninfo_unexecuted_blocks=1 00:12:37.691 00:12:37.691 ' 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:37.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:37.691 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:37.692 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:37.692 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:37.692 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:37.692 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:37.692 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:37.692 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:12:37.692 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:37.692 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # prepare_net_devs 00:12:37.692 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@434 -- # local -g is_hw=no 00:12:37.692 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # remove_spdk_ns 00:12:37.692 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.692 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:37.692 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.692 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:12:37.692 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:12:37.692 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:37.692 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:45.827 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:45.827 15:30:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:45.827 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:45.827 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:45.827 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:45.827 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:45.827 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:45.827 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:45.827 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:45.827 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:45.828 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:45.828 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:45.828 Found net devices under 0000:31:00.0: cvl_0_0 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:45.828 15:30:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:45.828 Found net devices under 0000:31:00.1: cvl_0_1 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # is_hw=yes 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- 
# NVMF_TARGET_INTERFACE=cvl_0_0 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:45.828 15:30:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:45.828 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:45.828 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.630 ms 00:12:45.828 00:12:45.828 --- 10.0.0.2 ping statistics --- 00:12:45.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.828 rtt min/avg/max/mdev = 0.630/0.630/0.630/0.000 ms 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:45.828 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:45.828 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.346 ms 00:12:45.828 00:12:45.828 --- 10.0.0.1 ping statistics --- 00:12:45.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.828 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # return 0 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:12:45.828 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:45.829 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:12:45.829 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:12:45.829 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:45.829 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:12:45.829 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:45.829 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:45.829 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # 
nvmfpid=3022465 00:12:45.829 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # waitforlisten 3022465 00:12:45.829 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:45.829 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 3022465 ']' 00:12:45.829 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.829 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:45.829 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:45.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:45.829 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:45.829 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:45.829 [2024-10-01 15:30:24.715395] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:12:45.829 [2024-10-01 15:30:24.715461] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:45.829 [2024-10-01 15:30:24.763030] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:12:45.829 [2024-10-01 15:30:24.788522] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:45.829 [2024-10-01 15:30:24.834029] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:45.829 [2024-10-01 15:30:24.834079] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:45.829 [2024-10-01 15:30:24.834085] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:45.829 [2024-10-01 15:30:24.834090] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:45.829 [2024-10-01 15:30:24.834095] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:45.829 [2024-10-01 15:30:24.834159] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:45.829 [2024-10-01 15:30:24.834288] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:45.829 [2024-10-01 15:30:24.834443] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.829 [2024-10-01 15:30:24.834445] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:45.829 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:45.829 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:12:45.829 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:12:45.829 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:45.829 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:45.829 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 
00:12:45.829 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:45.829 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.829 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:45.829 [2024-10-01 15:30:24.990666] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:45.829 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.829 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:45.829 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.829 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:45.829 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.829 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:45.829 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:45.829 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.829 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:45.829 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.829 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:45.829 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.829 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:45.829 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.829 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:45.829 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.829 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:45.829 [2024-10-01 15:30:25.060411] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:45.829 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.829 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:12:45.829 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:12:45.829 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:12:45.829 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:48.375 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.918 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.827 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.366 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.907 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:12:59.818 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:02.360 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.899 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.806 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.342 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.881 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:13.786 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.321 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.863 [2024-10-01 15:30:57.781773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9920 is same with the state(6) to be set 00:13:18.863 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:20.772 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:23.314 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:25.887 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:28.062 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:30.126 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.667 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.210 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:37.122 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:39.664 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:42.208 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.750 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:46.660 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:49.201 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:51.739 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:53.674 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:56.214 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:13:58.755 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:01.298 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:03.839 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.747 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:08.288 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:10.827 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:12.734 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:15.274 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:17.816 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.729 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:22.269 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:24.804 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:27.345 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:29.886 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:31.797 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:34.335 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:36.874 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:38.782 [2024-10-01 15:32:18.175728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9920 is same with the state(6) to be set 00:14:38.782 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:41.321 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:43.856 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:45.762 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:48.303 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:50.843 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:52.752 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:55.297 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:14:57.837 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:59.744 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:02.283 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:04.823 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:06.732 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:09.271 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:11.811 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:13.721 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:16.266 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:18.811 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:21.353 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:23.262 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:25.863 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:28.405 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:30.316 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:32.860 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:35.402 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:37.314 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:39.854 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:42.397 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:44.308 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:46.849 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:49.389 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:51.298 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:53.838 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:56.377 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:58.285 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:00.825 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:03.369 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:05.914 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:07.826 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:10.371 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:12.918 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:14.833 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:17.377 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:19.918 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:21.833 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:24.373 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:26.920 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:28.835 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:31.380 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:33.917 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:35.828 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:38.370 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:40.916 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:40.916 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:40.916 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:40.916 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:40.916 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:16:40.916 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:40.916 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 
-- # set +e 00:16:40.916 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:40.916 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:40.916 rmmod nvme_tcp 00:16:40.916 rmmod nvme_fabrics 00:16:40.916 rmmod nvme_keyring 00:16:40.916 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:40.916 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:16:40.916 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:16:40.916 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@513 -- # '[' -n 3022465 ']' 00:16:40.916 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # killprocess 3022465 00:16:40.916 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 3022465 ']' 00:16:40.916 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 3022465 00:16:40.916 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:16:40.916 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:40.916 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3022465 00:16:40.916 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:40.916 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:40.916 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3022465' 
00:16:40.916 killing process with pid 3022465 00:16:40.916 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 3022465 00:16:40.916 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 3022465 00:16:40.916 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:40.916 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:16:40.916 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:16:40.916 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:16:40.916 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@787 -- # iptables-save 00:16:40.916 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:16:40.916 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@787 -- # iptables-restore 00:16:40.916 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:40.916 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:40.916 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:40.916 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:40.917 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:43.459 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:43.459 00:16:43.460 real 4m5.699s 00:16:43.460 user 15m34.089s 00:16:43.460 sys 
0m25.942s 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:43.460 ************************************ 00:16:43.460 END TEST nvmf_connect_disconnect 00:16:43.460 ************************************ 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:43.460 ************************************ 00:16:43.460 START TEST nvmf_multitarget 00:16:43.460 ************************************ 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:43.460 * Looking for test storage... 
00:16:43.460 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lcov --version 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:43.460 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.460 --rc genhtml_branch_coverage=1 00:16:43.460 --rc genhtml_function_coverage=1 00:16:43.460 --rc genhtml_legend=1 00:16:43.460 --rc geninfo_all_blocks=1 00:16:43.460 --rc geninfo_unexecuted_blocks=1 00:16:43.460 00:16:43.460 ' 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:43.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.460 --rc genhtml_branch_coverage=1 00:16:43.460 --rc genhtml_function_coverage=1 00:16:43.460 --rc genhtml_legend=1 00:16:43.460 --rc geninfo_all_blocks=1 00:16:43.460 --rc geninfo_unexecuted_blocks=1 00:16:43.460 00:16:43.460 ' 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:43.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.460 --rc genhtml_branch_coverage=1 00:16:43.460 --rc genhtml_function_coverage=1 00:16:43.460 --rc genhtml_legend=1 00:16:43.460 --rc geninfo_all_blocks=1 00:16:43.460 --rc geninfo_unexecuted_blocks=1 00:16:43.460 00:16:43.460 ' 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:43.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.460 --rc genhtml_branch_coverage=1 00:16:43.460 --rc genhtml_function_coverage=1 00:16:43.460 --rc genhtml_legend=1 00:16:43.460 --rc geninfo_all_blocks=1 00:16:43.460 --rc geninfo_unexecuted_blocks=1 00:16:43.460 00:16:43.460 ' 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:43.460 15:34:22 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.460 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:43.461 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.461 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:16:43.461 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:43.461 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:43.461 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:43.461 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:16:43.461 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:43.461 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:43.461 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:43.461 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:43.461 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:43.461 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:43.461 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:43.461 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:16:43.461 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:16:43.461 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:43.461 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:43.461 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:43.461 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:43.461 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:43.461 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:43.461 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:43.461 15:34:22 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:16:43.461 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:16:43.461 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:16:43.461 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:51.604 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:51.604 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:16:51.604 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:51.604 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:51.604 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:51.604 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:51.604 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:51.604 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:16:51.604 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:51.604 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:16:51.604 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:16:51.604 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:16:51.604 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:16:51.604 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:16:51.604 15:34:30 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:16:51.604 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:51.604 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:51.604 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:51.604 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:51.604 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:51.605 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:51.605 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:51.605 15:34:30 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ up == up ]] 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:51.605 Found net devices under 0000:31:00.0: cvl_0_0 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:16:51.605 15:34:30 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ up == up ]] 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:51.605 Found net devices under 0000:31:00.1: cvl_0_1 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # is_hw=yes 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:51.605 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:51.605 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.662 ms 00:16:51.605 00:16:51.605 --- 10.0.0.2 ping statistics --- 00:16:51.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.605 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:51.605 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:51.605 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms 00:16:51.605 00:16:51.605 --- 10.0.0.1 ping statistics --- 00:16:51.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.605 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # return 0 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # nvmfpid=3074145 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # waitforlisten 3074145 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 3074145 ']' 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:51.605 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:51.606 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:51.606 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:51.606 [2024-10-01 15:34:30.466987] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:16:51.606 [2024-10-01 15:34:30.467048] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:51.606 [2024-10-01 15:34:30.509659] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:51.606 [2024-10-01 15:34:30.560146] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:51.606 [2024-10-01 15:34:30.607541] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:51.606 [2024-10-01 15:34:30.607596] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:51.606 [2024-10-01 15:34:30.607604] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:51.606 [2024-10-01 15:34:30.607612] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:51.606 [2024-10-01 15:34:30.607618] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:51.606 [2024-10-01 15:34:30.607771] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:51.606 [2024-10-01 15:34:30.607946] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:51.606 [2024-10-01 15:34:30.608026] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.606 [2024-10-01 15:34:30.608027] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:16:51.867 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:51.867 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:16:51.867 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:51.867 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:51.867 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:52.128 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:52.128 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:52.128 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:52.128 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:16:52.128 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:16:52.128 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n 
nvmf_tgt_1 -s 32 00:16:52.128 "nvmf_tgt_1" 00:16:52.128 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:16:52.389 "nvmf_tgt_2" 00:16:52.389 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:52.389 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:16:52.389 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:16:52.389 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:16:52.649 true 00:16:52.649 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:16:52.649 true 00:16:52.649 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:52.649 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:16:52.910 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:16:52.910 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:52.910 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:16:52.910 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:52.910 15:34:32 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:16:52.910 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:52.910 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:16:52.910 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:52.910 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:52.910 rmmod nvme_tcp 00:16:52.910 rmmod nvme_fabrics 00:16:52.910 rmmod nvme_keyring 00:16:52.910 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:52.910 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:16:52.910 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:16:52.910 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@513 -- # '[' -n 3074145 ']' 00:16:52.910 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # killprocess 3074145 00:16:52.910 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 3074145 ']' 00:16:52.910 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 3074145 00:16:52.910 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:16:52.910 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:52.910 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3074145 00:16:52.910 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:52.910 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- 
# '[' reactor_0 = sudo ']' 00:16:52.910 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3074145' 00:16:52.910 killing process with pid 3074145 00:16:52.910 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 3074145 00:16:52.910 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 3074145 00:16:53.171 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:53.171 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:16:53.171 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:16:53.171 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:16:53.171 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@787 -- # iptables-save 00:16:53.171 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:16:53.171 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@787 -- # iptables-restore 00:16:53.171 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:53.171 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:53.171 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:53.171 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:53.171 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:55.085 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:55.085 
00:16:55.085 real 0m12.098s 00:16:55.085 user 0m10.314s 00:16:55.085 sys 0m6.394s 00:16:55.085 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:55.085 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:55.085 ************************************ 00:16:55.085 END TEST nvmf_multitarget 00:16:55.085 ************************************ 00:16:55.346 15:34:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:55.346 15:34:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:55.346 15:34:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:55.346 15:34:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:55.346 ************************************ 00:16:55.346 START TEST nvmf_rpc 00:16:55.346 ************************************ 00:16:55.346 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:55.346 * Looking for test storage... 
00:16:55.346 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:55.346 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:55.346 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:16:55.346 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:55.608 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:55.608 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:55.608 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:55.608 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:55.608 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:16:55.608 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:16:55.608 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:16:55.608 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:16:55.608 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:16:55.608 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:16:55.608 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:16:55.608 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:55.608 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:16:55.608 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:16:55.608 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:55.608 15:34:34 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:55.608 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:16:55.608 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:16:55.608 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:55.608 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:16:55.608 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:55.608 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:16:55.608 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:16:55.608 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:55.608 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:16:55.608 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:55.608 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:55.608 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:55.608 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:16:55.608 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:55.608 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:55.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.608 --rc genhtml_branch_coverage=1 00:16:55.608 --rc genhtml_function_coverage=1 00:16:55.608 --rc genhtml_legend=1 00:16:55.608 --rc geninfo_all_blocks=1 00:16:55.608 --rc geninfo_unexecuted_blocks=1 
00:16:55.608 00:16:55.608 ' 00:16:55.608 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:55.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.608 --rc genhtml_branch_coverage=1 00:16:55.608 --rc genhtml_function_coverage=1 00:16:55.608 --rc genhtml_legend=1 00:16:55.608 --rc geninfo_all_blocks=1 00:16:55.608 --rc geninfo_unexecuted_blocks=1 00:16:55.608 00:16:55.608 ' 00:16:55.608 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:55.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.608 --rc genhtml_branch_coverage=1 00:16:55.608 --rc genhtml_function_coverage=1 00:16:55.608 --rc genhtml_legend=1 00:16:55.608 --rc geninfo_all_blocks=1 00:16:55.608 --rc geninfo_unexecuted_blocks=1 00:16:55.608 00:16:55.608 ' 00:16:55.608 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:55.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.608 --rc genhtml_branch_coverage=1 00:16:55.608 --rc genhtml_function_coverage=1 00:16:55.608 --rc genhtml_legend=1 00:16:55.608 --rc geninfo_all_blocks=1 00:16:55.608 --rc geninfo_unexecuted_blocks=1 00:16:55.608 00:16:55.608 ' 00:16:55.608 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:55.608 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:16:55.608 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:55.608 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:55.608 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:55.608 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:55.608 15:34:34 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:55.608 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:55.609 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:55.609 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:55.609 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:55.609 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:55.609 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:55.609 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:55.609 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:55.609 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:55.609 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:55.609 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:55.609 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:55.609 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:55.609 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:55.609 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:55.609 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:55.609 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.609 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.609 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.609 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:16:55.609 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.609 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:16:55.609 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:55.609 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:55.609 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:55.609 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:55.609 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:55.609 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:55.609 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:55.609 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:55.609 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:55.609 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:55.609 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:16:55.609 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:16:55.609 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:16:55.609 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:55.609 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:55.609 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:55.609 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:55.609 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:55.609 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:55.609 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:55.609 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:16:55.609 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:16:55.609 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:16:55.609 15:34:34 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.789 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:03.789 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:17:03.789 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:03.789 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:03.789 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:03.789 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:03.789 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:03.789 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:17:03.789 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:03.789 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:17:03.789 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:17:03.789 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:17:03.789 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:17:03.789 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:17:03.789 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:17:03.789 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:03.789 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:03.789 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:03.789 
15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:03.789 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:03.789 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:03.789 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:03.789 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:03.789 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:03.789 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:03.789 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:03.789 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:17:03.789 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:17:03.789 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:17:03.789 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:17:03.789 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:17:03.789 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:17:03.789 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:03.789 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:03.789 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:03.789 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:03.789 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:03.789 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:03.789 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:03.789 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:03.789 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:03.789 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:03.789 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:03.789 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:03.789 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:03.789 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:03.789 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:03.789 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:03.789 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:17:03.789 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:17:03.789 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:17:03.789 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:03.789 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:03.789 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:03.789 15:34:42 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:03.789 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:03.789 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:03.789 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:03.790 Found net devices under 0000:31:00.0: cvl_0_0 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:03.790 Found net devices under 0000:31:00.1: cvl_0_1 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:17:03.790 15:34:42 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # is_hw=yes 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 
00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:03.790 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:03.790 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.633 ms 00:17:03.790 00:17:03.790 --- 10.0.0.2 ping statistics --- 00:17:03.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.790 rtt min/avg/max/mdev = 0.633/0.633/0.633/0.000 ms 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:03.790 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:03.790 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:17:03.790 00:17:03.790 --- 10.0.0.1 ping statistics --- 00:17:03.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.790 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # return 0 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # nvmfpid=3078907 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # waitforlisten 3078907 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@504 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 3078907 ']' 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:03.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:03.790 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.790 [2024-10-01 15:34:42.640960] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:17:03.790 [2024-10-01 15:34:42.641028] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:03.790 [2024-10-01 15:34:42.682764] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:03.790 [2024-10-01 15:34:42.730323] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:03.790 [2024-10-01 15:34:42.777432] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:03.790 [2024-10-01 15:34:42.777482] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:03.790 [2024-10-01 15:34:42.777490] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:03.790 [2024-10-01 15:34:42.777497] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:03.790 [2024-10-01 15:34:42.777504] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:03.790 [2024-10-01 15:34:42.777659] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:03.790 [2024-10-01 15:34:42.777815] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:03.790 [2024-10-01 15:34:42.777957] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:03.790 [2024-10-01 15:34:42.777957] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:17:04.108 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:04.108 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:17:04.108 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:04.108 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:04.108 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.108 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:04.108 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:17:04.108 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.108 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.392 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.392 15:34:43 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:17:04.392 "tick_rate": 2400000000, 00:17:04.392 "poll_groups": [ 00:17:04.392 { 00:17:04.392 "name": "nvmf_tgt_poll_group_000", 00:17:04.392 "admin_qpairs": 0, 00:17:04.392 "io_qpairs": 0, 00:17:04.392 "current_admin_qpairs": 0, 00:17:04.392 "current_io_qpairs": 0, 00:17:04.392 "pending_bdev_io": 0, 00:17:04.392 "completed_nvme_io": 0, 00:17:04.392 "transports": [] 00:17:04.392 }, 00:17:04.392 { 00:17:04.392 "name": "nvmf_tgt_poll_group_001", 00:17:04.392 "admin_qpairs": 0, 00:17:04.392 "io_qpairs": 0, 00:17:04.392 "current_admin_qpairs": 0, 00:17:04.392 "current_io_qpairs": 0, 00:17:04.392 "pending_bdev_io": 0, 00:17:04.392 "completed_nvme_io": 0, 00:17:04.392 "transports": [] 00:17:04.392 }, 00:17:04.392 { 00:17:04.392 "name": "nvmf_tgt_poll_group_002", 00:17:04.392 "admin_qpairs": 0, 00:17:04.392 "io_qpairs": 0, 00:17:04.392 "current_admin_qpairs": 0, 00:17:04.392 "current_io_qpairs": 0, 00:17:04.392 "pending_bdev_io": 0, 00:17:04.392 "completed_nvme_io": 0, 00:17:04.392 "transports": [] 00:17:04.392 }, 00:17:04.392 { 00:17:04.392 "name": "nvmf_tgt_poll_group_003", 00:17:04.392 "admin_qpairs": 0, 00:17:04.392 "io_qpairs": 0, 00:17:04.392 "current_admin_qpairs": 0, 00:17:04.392 "current_io_qpairs": 0, 00:17:04.392 "pending_bdev_io": 0, 00:17:04.392 "completed_nvme_io": 0, 00:17:04.392 "transports": [] 00:17:04.392 } 00:17:04.392 ] 00:17:04.392 }' 00:17:04.392 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:17:04.392 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:17:04.392 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:17:04.392 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:17:04.392 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:17:04.392 15:34:43 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:17:04.392 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:17:04.392 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:04.392 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.392 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.392 [2024-10-01 15:34:43.629162] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:04.392 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.392 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:17:04.392 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.392 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.392 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.392 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:17:04.392 "tick_rate": 2400000000, 00:17:04.392 "poll_groups": [ 00:17:04.392 { 00:17:04.392 "name": "nvmf_tgt_poll_group_000", 00:17:04.392 "admin_qpairs": 0, 00:17:04.392 "io_qpairs": 0, 00:17:04.392 "current_admin_qpairs": 0, 00:17:04.392 "current_io_qpairs": 0, 00:17:04.392 "pending_bdev_io": 0, 00:17:04.392 "completed_nvme_io": 0, 00:17:04.392 "transports": [ 00:17:04.392 { 00:17:04.392 "trtype": "TCP" 00:17:04.392 } 00:17:04.392 ] 00:17:04.392 }, 00:17:04.392 { 00:17:04.392 "name": "nvmf_tgt_poll_group_001", 00:17:04.392 "admin_qpairs": 0, 00:17:04.392 "io_qpairs": 0, 00:17:04.392 "current_admin_qpairs": 0, 00:17:04.392 "current_io_qpairs": 0, 00:17:04.392 "pending_bdev_io": 0, 00:17:04.392 
"completed_nvme_io": 0, 00:17:04.392 "transports": [ 00:17:04.392 { 00:17:04.392 "trtype": "TCP" 00:17:04.392 } 00:17:04.392 ] 00:17:04.392 }, 00:17:04.392 { 00:17:04.392 "name": "nvmf_tgt_poll_group_002", 00:17:04.392 "admin_qpairs": 0, 00:17:04.392 "io_qpairs": 0, 00:17:04.392 "current_admin_qpairs": 0, 00:17:04.392 "current_io_qpairs": 0, 00:17:04.392 "pending_bdev_io": 0, 00:17:04.392 "completed_nvme_io": 0, 00:17:04.392 "transports": [ 00:17:04.392 { 00:17:04.392 "trtype": "TCP" 00:17:04.392 } 00:17:04.392 ] 00:17:04.392 }, 00:17:04.392 { 00:17:04.392 "name": "nvmf_tgt_poll_group_003", 00:17:04.392 "admin_qpairs": 0, 00:17:04.392 "io_qpairs": 0, 00:17:04.392 "current_admin_qpairs": 0, 00:17:04.392 "current_io_qpairs": 0, 00:17:04.392 "pending_bdev_io": 0, 00:17:04.392 "completed_nvme_io": 0, 00:17:04.392 "transports": [ 00:17:04.392 { 00:17:04.392 "trtype": "TCP" 00:17:04.392 } 00:17:04.392 ] 00:17:04.392 } 00:17:04.392 ] 00:17:04.392 }' 00:17:04.392 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:17:04.392 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:04.392 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:04.392 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:04.392 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:17:04.392 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:17:04.392 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:04.392 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:04.392 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:04.392 
15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:17:04.392 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:17:04.392 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:17:04.392 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:17:04.392 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:04.392 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.392 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.392 Malloc1 00:17:04.392 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.392 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:04.392 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.392 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.392 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.392 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:04.392 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.392 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.392 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.392 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:17:04.392 15:34:43 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.392 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.392 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.392 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:04.392 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.392 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.392 [2024-10-01 15:34:43.831270] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:04.730 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.730 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:17:04.730 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:17:04.730 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:17:04.730 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:17:04.730 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:17:04.730 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:17:04.730 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:04.730 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:17:04.730 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:04.730 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:17:04.730 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:17:04.730 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:17:04.730 [2024-10-01 15:34:43.868201] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:17:04.730 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:04.730 could not add new controller: failed to write to nvme-fabrics device 00:17:04.730 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:17:04.730 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:04.730 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:04.730 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:04.730 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:04.730 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.730 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.730 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.730 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:06.345 15:34:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:17:06.345 15:34:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:06.345 15:34:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:06.345 15:34:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:06.345 15:34:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:08.257 15:34:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:08.257 15:34:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:08.257 15:34:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:08.257 15:34:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:08.257 15:34:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:08.257 15:34:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 
00:17:08.257 15:34:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:08.257 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:08.257 15:34:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:08.257 15:34:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:08.257 15:34:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:08.257 15:34:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:08.257 15:34:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:08.257 15:34:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:08.257 15:34:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:08.257 15:34:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:08.257 15:34:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.257 15:34:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:08.257 15:34:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.257 15:34:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:08.257 15:34:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:17:08.257 15:34:47 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:08.257 15:34:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:17:08.257 15:34:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:08.257 15:34:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:17:08.257 15:34:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:08.257 15:34:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:17:08.257 15:34:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:08.257 15:34:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:17:08.257 15:34:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:17:08.257 15:34:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:08.257 [2024-10-01 15:34:47.632762] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:17:08.257 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:08.257 could not add new controller: failed to write to nvme-fabrics device 00:17:08.257 15:34:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:17:08.257 
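The `NOT` wrapper traced above (via `valid_exec_arg` in `autotest_common.sh`) runs `nvme connect` expecting it to be rejected, since the host NQN is not on the subsystem's allowed list; a nonzero exit status (`es=1`) counts as the test passing. A rough Python sketch of that inverted-expectation pattern, under the assumption that any failing command stands in for the denied connect (`expect_failure` is a hypothetical name, not part of the harness):

```python
import subprocess

def expect_failure(*cmd: str) -> int:
    """Run a command that must fail, mirroring the harness's NOT wrapper.

    Returns the nonzero exit status; raises if the command unexpectedly
    succeeds (which would mean the access-control check did not fire).
    """
    proc = subprocess.run(cmd, capture_output=True)
    if proc.returncode == 0:
        raise AssertionError(f"{cmd!r} succeeded but was expected to fail")
    return proc.returncode

# 'false' stands in for the denied 'nvme connect' attempt in the log.
es = expect_failure("false")
print(es)  # 1
```

In the trace, the kernel reports "could not add new controller: failed to write to nvme-fabrics device", and the harness maps that I/O error to `es=1` and moves on.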
15:34:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:08.257 15:34:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:08.257 15:34:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:08.257 15:34:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:17:08.257 15:34:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.257 15:34:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:08.257 15:34:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.257 15:34:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:10.165 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:17:10.165 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:10.165 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:10.165 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:10.165 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:12.074 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:12.074 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:12.074 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:17:12.074 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:12.074 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:12.074 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:12.074 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:12.074 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:12.074 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:12.074 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:12.074 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:12.074 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:12.074 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:12.074 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:12.074 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:12.074 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:12.074 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.074 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.074 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.074 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:17:12.074 15:34:51 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:12.074 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:12.074 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.074 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.074 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.074 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:12.074 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.074 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.074 [2024-10-01 15:34:51.459408] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:12.074 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.074 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:12.074 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.074 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.074 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.074 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:12.074 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.074 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:17:12.074 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.074 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:13.985 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:13.986 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:13.986 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:13.986 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:13.986 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:15.898 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:15.898 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:15.898 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:15.898 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:15.898 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:15.898 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:15.898 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:15.898 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:15.898 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:15.898 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:15.898 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:15.898 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:15.898 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:15.898 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:15.898 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:15.898 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:15.898 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.898 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:15.898 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.898 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:15.898 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.898 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:15.898 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.898 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:15.898 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:15.898 
15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.898 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:15.898 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.898 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:15.898 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.898 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:15.898 [2024-10-01 15:34:55.214462] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:15.898 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.898 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:15.898 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.898 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:15.898 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.898 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:15.898 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.898 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:15.898 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.898 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:17.281 15:34:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:17.281 15:34:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:17.281 15:34:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:17.281 15:34:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:17.281 15:34:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:19.825 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:19.825 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:19.825 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:19.825 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:19.825 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:19.825 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:19.825 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:19.825 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:19.825 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:19.825 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:19.825 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:19.825 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:19.825 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:19.825 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:19.825 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:19.825 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:19.825 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.825 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.825 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.825 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:19.825 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.825 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.825 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.825 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:19.825 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:19.825 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.825 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.825 15:34:58 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.825 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:19.825 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.825 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.825 [2024-10-01 15:34:58.929359] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:19.825 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.825 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:19.825 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.825 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.825 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.825 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:19.825 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.825 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.825 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.825 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:21.208 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:21.208 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:21.208 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:21.208 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:21.208 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:23.116 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:23.116 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:23.116 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:23.116 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:23.116 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:23.116 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:23.116 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:23.465 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:23.465 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:23.465 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:23.465 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:23.465 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:23.465 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:23.465 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:23.465 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:23.465 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:23.465 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.465 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.465 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.465 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:23.465 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.465 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.465 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.465 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:23.465 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:23.466 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.466 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.466 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.466 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:17:23.466 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.466 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.466 [2024-10-01 15:35:02.683954] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:23.466 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.466 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:23.466 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.466 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.466 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.466 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:23.466 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.466 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.466 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.466 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:24.848 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:24.848 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:24.848 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 
-- # local nvme_device_counter=1 nvme_devices=0 00:17:24.848 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:24.848 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:26.757 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:26.758 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:26.758 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:26.758 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:26.758 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:26.758 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:26.758 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:27.019 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:27.019 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:27.019 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:27.019 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:27.019 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:27.019 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:27.019 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:27.019 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # return 0 00:17:27.019 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:27.019 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.019 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.019 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.019 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:27.019 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.019 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.019 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.019 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:27.019 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:27.019 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.019 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.019 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.019 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:27.019 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.019 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.019 [2024-10-01 15:35:06.349920] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:27.019 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.019 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:27.019 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.019 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.019 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.019 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:27.019 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.019 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.019 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.019 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:28.932 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:28.932 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:28.932 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:28.932 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:28.932 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # 
sleep 2 00:17:30.847 15:35:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:30.847 15:35:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:30.847 15:35:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:30.847 15:35:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:30.847 15:35:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:30.847 15:35:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:30.847 15:35:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:30.847 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:30.847 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:30.847 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:30.847 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:30.847 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:30.847 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:30.847 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:30.847 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:30.847 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:30.847 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.847 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:30.847 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.847 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:30.847 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.847 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:30.847 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:30.848 [2024-10-01 15:35:10.119507] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 
-- # for i in $(seq 1 $loops) 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:30.848 [2024-10-01 15:35:10.187668] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:30.848 
15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- 
# set +x 00:17:30.848 [2024-10-01 15:35:10.255858] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.848 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:17:31.111 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.111 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:31.111 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:31.111 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.111 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.111 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.111 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:31.111 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.111 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.111 [2024-10-01 15:35:10.328061] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:31.111 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.111 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:31.111 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.111 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.111 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.111 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 
00:17:31.111 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.111 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.111 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.111 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:31.111 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.111 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.111 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.111 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:31.111 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.111 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.111 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.111 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:31.111 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:31.111 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.111 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.111 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.111 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:17:31.111 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.111 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.111 [2024-10-01 15:35:10.400285] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:31.111 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.111 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:31.111 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.111 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.111 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.111 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:31.111 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.111 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.111 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.111 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:31.111 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.111 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.111 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.111 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:31.111 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.111 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.111 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.111 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:17:31.111 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.111 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.111 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.111 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:17:31.111 "tick_rate": 2400000000, 00:17:31.111 "poll_groups": [ 00:17:31.111 { 00:17:31.111 "name": "nvmf_tgt_poll_group_000", 00:17:31.111 "admin_qpairs": 0, 00:17:31.111 "io_qpairs": 224, 00:17:31.111 "current_admin_qpairs": 0, 00:17:31.111 "current_io_qpairs": 0, 00:17:31.111 "pending_bdev_io": 0, 00:17:31.111 "completed_nvme_io": 244, 00:17:31.111 "transports": [ 00:17:31.111 { 00:17:31.111 "trtype": "TCP" 00:17:31.111 } 00:17:31.111 ] 00:17:31.111 }, 00:17:31.111 { 00:17:31.111 "name": "nvmf_tgt_poll_group_001", 00:17:31.111 "admin_qpairs": 1, 00:17:31.111 "io_qpairs": 223, 00:17:31.111 "current_admin_qpairs": 0, 00:17:31.111 "current_io_qpairs": 0, 00:17:31.111 "pending_bdev_io": 0, 00:17:31.111 "completed_nvme_io": 272, 00:17:31.111 "transports": [ 00:17:31.111 { 00:17:31.111 "trtype": "TCP" 00:17:31.111 } 00:17:31.111 ] 00:17:31.111 }, 00:17:31.111 { 00:17:31.111 "name": "nvmf_tgt_poll_group_002", 00:17:31.111 "admin_qpairs": 6, 00:17:31.111 "io_qpairs": 218, 00:17:31.111 "current_admin_qpairs": 0, 00:17:31.111 "current_io_qpairs": 0, 00:17:31.111 "pending_bdev_io": 0, 
00:17:31.111 "completed_nvme_io": 394, 00:17:31.111 "transports": [ 00:17:31.111 { 00:17:31.111 "trtype": "TCP" 00:17:31.111 } 00:17:31.111 ] 00:17:31.111 }, 00:17:31.111 { 00:17:31.111 "name": "nvmf_tgt_poll_group_003", 00:17:31.111 "admin_qpairs": 0, 00:17:31.111 "io_qpairs": 224, 00:17:31.111 "current_admin_qpairs": 0, 00:17:31.111 "current_io_qpairs": 0, 00:17:31.111 "pending_bdev_io": 0, 00:17:31.111 "completed_nvme_io": 329, 00:17:31.111 "transports": [ 00:17:31.111 { 00:17:31.111 "trtype": "TCP" 00:17:31.111 } 00:17:31.111 ] 00:17:31.111 } 00:17:31.111 ] 00:17:31.111 }' 00:17:31.111 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:17:31.111 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:31.112 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:31.112 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:31.112 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:17:31.112 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:17:31.112 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:31.112 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:31.112 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:31.373 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:17:31.373 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:17:31.373 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:17:31.373 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@123 -- # nvmftestfini 00:17:31.373 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # nvmfcleanup 00:17:31.373 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:17:31.373 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:31.373 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:17:31.373 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:31.373 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:31.373 rmmod nvme_tcp 00:17:31.373 rmmod nvme_fabrics 00:17:31.373 rmmod nvme_keyring 00:17:31.373 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:31.373 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:17:31.373 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:17:31.373 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@513 -- # '[' -n 3078907 ']' 00:17:31.373 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # killprocess 3078907 00:17:31.373 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 3078907 ']' 00:17:31.373 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 3078907 00:17:31.373 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:17:31.373 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:31.373 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3078907 00:17:31.373 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:31.373 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:31.373 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3078907' 00:17:31.373 killing process with pid 3078907 00:17:31.373 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 3078907 00:17:31.373 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 3078907 00:17:31.373 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:17:31.373 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:17:31.373 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:17:31.373 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:17:31.373 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@787 -- # iptables-save 00:17:31.373 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:17:31.373 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@787 -- # iptables-restore 00:17:31.634 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:31.634 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:31.634 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:31.634 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:31.634 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:33.551 15:35:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:33.551 00:17:33.551 real 0m38.278s 00:17:33.551 user 1m54.190s 00:17:33.551 sys 0m7.993s 00:17:33.551 15:35:12 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:33.551 15:35:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:33.551 ************************************ 00:17:33.551 END TEST nvmf_rpc 00:17:33.551 ************************************ 00:17:33.551 15:35:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:33.551 15:35:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:33.551 15:35:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:33.551 15:35:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:33.551 ************************************ 00:17:33.551 START TEST nvmf_invalid 00:17:33.551 ************************************ 00:17:33.551 15:35:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:33.813 * Looking for test storage... 
00:17:33.813 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:33.813 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:33.813 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lcov --version 00:17:33.813 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:33.813 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:33.813 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:33.813 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:33.813 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:33.813 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:17:33.813 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:17:33.813 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:17:33.813 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:33.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.814 --rc genhtml_branch_coverage=1 00:17:33.814 --rc 
genhtml_function_coverage=1 00:17:33.814 --rc genhtml_legend=1 00:17:33.814 --rc geninfo_all_blocks=1 00:17:33.814 --rc geninfo_unexecuted_blocks=1 00:17:33.814 00:17:33.814 ' 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:33.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.814 --rc genhtml_branch_coverage=1 00:17:33.814 --rc genhtml_function_coverage=1 00:17:33.814 --rc genhtml_legend=1 00:17:33.814 --rc geninfo_all_blocks=1 00:17:33.814 --rc geninfo_unexecuted_blocks=1 00:17:33.814 00:17:33.814 ' 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:33.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.814 --rc genhtml_branch_coverage=1 00:17:33.814 --rc genhtml_function_coverage=1 00:17:33.814 --rc genhtml_legend=1 00:17:33.814 --rc geninfo_all_blocks=1 00:17:33.814 --rc geninfo_unexecuted_blocks=1 00:17:33.814 00:17:33.814 ' 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:33.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.814 --rc genhtml_branch_coverage=1 00:17:33.814 --rc genhtml_function_coverage=1 00:17:33.814 --rc genhtml_legend=1 00:17:33.814 --rc geninfo_all_blocks=1 00:17:33.814 --rc geninfo_unexecuted_blocks=1 00:17:33.814 00:17:33.814 ' 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:33.814 15:35:13 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:33.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:17:33.814 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:17:33.815 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:17:33.815 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:33.815 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # prepare_net_devs 00:17:33.815 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@434 -- # local -g is_hw=no 00:17:33.815 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # remove_spdk_ns 00:17:33.815 15:35:13 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:33.815 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:33.815 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:33.815 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:17:33.815 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:17:33.815 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:17:33.815 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:17:41.967 15:35:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:17:41.967 15:35:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:41.967 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:41.967 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@374 -- # [[ 0x159b 
== \0\x\1\0\1\7 ]] 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:41.967 Found net devices under 0000:31:00.0: cvl_0_0 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:41.967 15:35:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:41.967 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:41.968 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:41.968 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:41.968 Found net devices under 0000:31:00.1: cvl_0_1 00:17:41.968 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:41.968 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:17:41.968 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # is_hw=yes 00:17:41.968 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:17:41.968 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:17:41.968 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:17:41.968 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:41.968 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:41.968 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:41.968 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:41.968 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:41.968 15:35:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:41.968 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:41.968 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:41.968 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:41.968 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:41.968 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:41.968 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:41.968 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:41.968 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:41.968 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:41.968 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:41.968 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:41.968 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:41.968 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:41.968 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:41.968 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp 
--dport 4420 -j ACCEPT 00:17:41.968 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:41.968 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:41.968 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:41.968 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.645 ms 00:17:41.968 00:17:41.968 --- 10.0.0.2 ping statistics --- 00:17:41.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.968 rtt min/avg/max/mdev = 0.645/0.645/0.645/0.000 ms 00:17:41.968 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:41.968 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:41.968 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:17:41.968 00:17:41.968 --- 10.0.0.1 ping statistics --- 00:17:41.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.968 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:17:41.968 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:41.968 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # return 0 00:17:41.968 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:17:41.968 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:41.968 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:17:41.968 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:17:41.968 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:41.968 15:35:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:17:41.968 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:17:41.968 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:17:41.968 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:41.968 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:41.968 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:41.968 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # nvmfpid=3088844 00:17:41.968 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # waitforlisten 3088844 00:17:41.968 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:41.968 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 3088844 ']' 00:17:41.968 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:41.968 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:41.968 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:41.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:41.968 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:41.968 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:41.968 [2024-10-01 15:35:21.019435] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:17:41.968 [2024-10-01 15:35:21.019499] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:41.968 [2024-10-01 15:35:21.062860] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:41.968 [2024-10-01 15:35:21.113103] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:41.968 [2024-10-01 15:35:21.160193] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:41.968 [2024-10-01 15:35:21.160245] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:41.968 [2024-10-01 15:35:21.160253] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:41.968 [2024-10-01 15:35:21.160261] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:41.968 [2024-10-01 15:35:21.160267] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:41.968 [2024-10-01 15:35:21.160452] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:41.968 [2024-10-01 15:35:21.160586] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:41.968 [2024-10-01 15:35:21.160742] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:41.968 [2024-10-01 15:35:21.160743] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:17:42.542 15:35:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:42.542 15:35:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:17:42.542 15:35:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:42.542 15:35:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:42.542 15:35:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:42.542 15:35:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:42.542 15:35:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:42.542 15:35:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode12089 00:17:42.803 [2024-10-01 15:35:22.064345] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:17:42.803 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:17:42.803 { 00:17:42.803 "nqn": "nqn.2016-06.io.spdk:cnode12089", 00:17:42.803 "tgt_name": "foobar", 00:17:42.803 "method": "nvmf_create_subsystem", 00:17:42.803 "req_id": 1 00:17:42.803 } 00:17:42.803 Got JSON-RPC error 
response 00:17:42.803 response: 00:17:42.803 { 00:17:42.803 "code": -32603, 00:17:42.803 "message": "Unable to find target foobar" 00:17:42.803 }' 00:17:42.803 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:17:42.803 { 00:17:42.803 "nqn": "nqn.2016-06.io.spdk:cnode12089", 00:17:42.803 "tgt_name": "foobar", 00:17:42.803 "method": "nvmf_create_subsystem", 00:17:42.803 "req_id": 1 00:17:42.803 } 00:17:42.803 Got JSON-RPC error response 00:17:42.803 response: 00:17:42.803 { 00:17:42.803 "code": -32603, 00:17:42.803 "message": "Unable to find target foobar" 00:17:42.803 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:17:42.803 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:17:42.803 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode23641 00:17:43.064 [2024-10-01 15:35:22.273234] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23641: invalid serial number 'SPDKISFASTANDAWESOME' 00:17:43.064 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:17:43.064 { 00:17:43.064 "nqn": "nqn.2016-06.io.spdk:cnode23641", 00:17:43.064 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:43.065 "method": "nvmf_create_subsystem", 00:17:43.065 "req_id": 1 00:17:43.065 } 00:17:43.065 Got JSON-RPC error response 00:17:43.065 response: 00:17:43.065 { 00:17:43.065 "code": -32602, 00:17:43.065 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:43.065 }' 00:17:43.065 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:17:43.065 { 00:17:43.065 "nqn": "nqn.2016-06.io.spdk:cnode23641", 00:17:43.065 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:43.065 "method": "nvmf_create_subsystem", 
00:17:43.065 "req_id": 1 00:17:43.065 } 00:17:43.065 Got JSON-RPC error response 00:17:43.065 response: 00:17:43.065 { 00:17:43.065 "code": -32602, 00:17:43.065 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:43.065 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:43.065 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:17:43.065 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode10983 00:17:43.065 [2024-10-01 15:35:22.481946] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10983: invalid model number 'SPDK_Controller' 00:17:43.065 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:17:43.065 { 00:17:43.065 "nqn": "nqn.2016-06.io.spdk:cnode10983", 00:17:43.065 "model_number": "SPDK_Controller\u001f", 00:17:43.065 "method": "nvmf_create_subsystem", 00:17:43.065 "req_id": 1 00:17:43.065 } 00:17:43.065 Got JSON-RPC error response 00:17:43.065 response: 00:17:43.065 { 00:17:43.065 "code": -32602, 00:17:43.065 "message": "Invalid MN SPDK_Controller\u001f" 00:17:43.065 }' 00:17:43.065 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:17:43.065 { 00:17:43.065 "nqn": "nqn.2016-06.io.spdk:cnode10983", 00:17:43.065 "model_number": "SPDK_Controller\u001f", 00:17:43.065 "method": "nvmf_create_subsystem", 00:17:43.065 "req_id": 1 00:17:43.065 } 00:17:43.065 Got JSON-RPC error response 00:17:43.065 response: 00:17:43.065 { 00:17:43.065 "code": -32602, 00:17:43.065 "message": "Invalid MN SPDK_Controller\u001f" 00:17:43.065 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local 
length=21 ll 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.327 15:35:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:17:43.327 15:35:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:17:43.327 15:35:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:17:43.327 15:35:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.327 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:17:43.328 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:17:43.328 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:17:43.328 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.328 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.328 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:17:43.328 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:17:43.328 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:17:43.328 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.328 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.328 15:35:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:17:43.328 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:17:43.328 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:17:43.328 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.328 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.328 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:17:43.328 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:17:43.328 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:17:43.328 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.328 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.328 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:17:43.328 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:17:43.328 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:17:43.328 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.328 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.328 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:17:43.328 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:17:43.328 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:17:43.328 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.328 15:35:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.328 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ U == \- ]] 00:17:43.328 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'U7nv~qt\H.F68(R,4lx&]' 00:17:43.328 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'U7nv~qt\H.F68(R,4lx&]' nqn.2016-06.io.spdk:cnode10971 00:17:43.591 [2024-10-01 15:35:22.863400] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10971: invalid serial number 'U7nv~qt\H.F68(R,4lx&]' 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:17:43.591 { 00:17:43.591 "nqn": "nqn.2016-06.io.spdk:cnode10971", 00:17:43.591 "serial_number": "U7nv~qt\\H.F68(R,4lx&]", 00:17:43.591 "method": "nvmf_create_subsystem", 00:17:43.591 "req_id": 1 00:17:43.591 } 00:17:43.591 Got JSON-RPC error response 00:17:43.591 response: 00:17:43.591 { 00:17:43.591 "code": -32602, 00:17:43.591 "message": "Invalid SN U7nv~qt\\H.F68(R,4lx&]" 00:17:43.591 }' 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:17:43.591 { 00:17:43.591 "nqn": "nqn.2016-06.io.spdk:cnode10971", 00:17:43.591 "serial_number": "U7nv~qt\\H.F68(R,4lx&]", 00:17:43.591 "method": "nvmf_create_subsystem", 00:17:43.591 "req_id": 1 00:17:43.591 } 00:17:43.591 Got JSON-RPC error response 00:17:43.591 response: 00:17:43.591 { 00:17:43.591 "code": -32602, 00:17:43.591 "message": "Invalid SN U7nv~qt\\H.F68(R,4lx&]" 00:17:43.591 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:17:43.591 15:35:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.591 15:35:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 
00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.591 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:17:43.591 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:17:43.591 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:17:43.591 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.591 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.591 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:17:43.591 
15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:17:43.591 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:17:43.591 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.592 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.592 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:17:43.592 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:17:43.592 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:17:43.592 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.592 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.592 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:17:43.592 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:17:43.592 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:17:43.592 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.592 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.592 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:17:43.592 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:17:43.592 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:17:43.592 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.592 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.592 15:35:23 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:17:43.592 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:17:43.592 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:17:43.592 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.592 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.855 15:35:23 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:17:43.855 15:35:23 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 
00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.855 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:17:43.856 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:17:43.856 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:17:43.856 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.856 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.856 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:17:43.856 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:17:43.856 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:17:43.856 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.856 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.856 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:17:43.856 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:17:43.856 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:17:43.856 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.856 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.856 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:17:43.856 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 
00:17:43.856 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:17:43.856 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.856 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.856 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:17:43.856 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:17:43.856 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:17:43.856 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.856 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.856 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:17:43.856 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:17:43.856 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:17:43.856 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.856 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.856 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:17:43.856 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:17:43.856 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:17:43.856 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.856 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.856 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:17:43.856 
15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:17:43.856 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:17:43.856 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.856 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.856 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:17:43.856 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:17:43.856 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:17:43.856 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.856 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.856 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ U == \- ]] 00:17:43.856 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'UU9n!Gr+8g8(ZP}FC}trx<2C<.kn{1S.p\m-OKx>K' 00:17:43.856 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'UU9n!Gr+8g8(ZP}FC}trx<2C<.kn{1S.p\m-OKx>K' nqn.2016-06.io.spdk:cnode3418 00:17:44.117 [2024-10-01 15:35:23.405526] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3418: invalid model number 'UU9n!Gr+8g8(ZP}FC}trx<2C<.kn{1S.p\m-OKx>K' 00:17:44.117 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:17:44.117 { 00:17:44.117 "nqn": "nqn.2016-06.io.spdk:cnode3418", 00:17:44.117 "model_number": "UU9n!Gr+8g8(ZP}FC}trx<2C<.kn{1S.p\\m-OKx>K", 00:17:44.117 "method": "nvmf_create_subsystem", 00:17:44.117 "req_id": 1 00:17:44.117 } 00:17:44.117 Got JSON-RPC error 
response 00:17:44.117 response: 00:17:44.117 { 00:17:44.117 "code": -32602, 00:17:44.117 "message": "Invalid MN UU9n!Gr+8g8(ZP}FC}trx<2C<.kn{1S.p\\m-OKx>K" 00:17:44.117 }' 00:17:44.117 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:17:44.117 { 00:17:44.117 "nqn": "nqn.2016-06.io.spdk:cnode3418", 00:17:44.117 "model_number": "UU9n!Gr+8g8(ZP}FC}trx<2C<.kn{1S.p\\m-OKx>K", 00:17:44.117 "method": "nvmf_create_subsystem", 00:17:44.117 "req_id": 1 00:17:44.117 } 00:17:44.118 Got JSON-RPC error response 00:17:44.118 response: 00:17:44.118 { 00:17:44.118 "code": -32602, 00:17:44.118 "message": "Invalid MN UU9n!Gr+8g8(ZP}FC}trx<2C<.kn{1S.p\\m-OKx>K" 00:17:44.118 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:44.118 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:17:44.379 [2024-10-01 15:35:23.606437] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:44.379 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:17:44.640 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:17:44.640 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:17:44.640 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:17:44.640 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:17:44.640 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:17:44.640 [2024-10-01 15:35:24.003733] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to 
remove listener, rc -2 00:17:44.640 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:17:44.640 { 00:17:44.640 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:44.640 "listen_address": { 00:17:44.640 "trtype": "tcp", 00:17:44.640 "traddr": "", 00:17:44.640 "trsvcid": "4421" 00:17:44.640 }, 00:17:44.640 "method": "nvmf_subsystem_remove_listener", 00:17:44.640 "req_id": 1 00:17:44.640 } 00:17:44.640 Got JSON-RPC error response 00:17:44.640 response: 00:17:44.640 { 00:17:44.640 "code": -32602, 00:17:44.640 "message": "Invalid parameters" 00:17:44.640 }' 00:17:44.640 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:17:44.640 { 00:17:44.640 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:44.640 "listen_address": { 00:17:44.640 "trtype": "tcp", 00:17:44.640 "traddr": "", 00:17:44.640 "trsvcid": "4421" 00:17:44.640 }, 00:17:44.640 "method": "nvmf_subsystem_remove_listener", 00:17:44.640 "req_id": 1 00:17:44.640 } 00:17:44.640 Got JSON-RPC error response 00:17:44.640 response: 00:17:44.640 { 00:17:44.640 "code": -32602, 00:17:44.640 "message": "Invalid parameters" 00:17:44.640 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:17:44.640 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21985 -i 0 00:17:44.902 [2024-10-01 15:35:24.188263] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21985: invalid cntlid range [0-65519] 00:17:44.902 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:17:44.902 { 00:17:44.902 "nqn": "nqn.2016-06.io.spdk:cnode21985", 00:17:44.902 "min_cntlid": 0, 00:17:44.902 "method": "nvmf_create_subsystem", 00:17:44.902 "req_id": 1 00:17:44.902 } 00:17:44.902 Got JSON-RPC error response 00:17:44.902 response: 00:17:44.902 { 00:17:44.902 "code": 
-32602, 00:17:44.902 "message": "Invalid cntlid range [0-65519]" 00:17:44.902 }' 00:17:44.902 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:17:44.902 { 00:17:44.902 "nqn": "nqn.2016-06.io.spdk:cnode21985", 00:17:44.902 "min_cntlid": 0, 00:17:44.902 "method": "nvmf_create_subsystem", 00:17:44.902 "req_id": 1 00:17:44.902 } 00:17:44.902 Got JSON-RPC error response 00:17:44.902 response: 00:17:44.902 { 00:17:44.902 "code": -32602, 00:17:44.902 "message": "Invalid cntlid range [0-65519]" 00:17:44.902 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:44.902 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24717 -i 65520 00:17:45.164 [2024-10-01 15:35:24.376829] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24717: invalid cntlid range [65520-65519] 00:17:45.164 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:17:45.164 { 00:17:45.164 "nqn": "nqn.2016-06.io.spdk:cnode24717", 00:17:45.164 "min_cntlid": 65520, 00:17:45.164 "method": "nvmf_create_subsystem", 00:17:45.164 "req_id": 1 00:17:45.164 } 00:17:45.164 Got JSON-RPC error response 00:17:45.164 response: 00:17:45.164 { 00:17:45.164 "code": -32602, 00:17:45.164 "message": "Invalid cntlid range [65520-65519]" 00:17:45.164 }' 00:17:45.164 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:17:45.164 { 00:17:45.164 "nqn": "nqn.2016-06.io.spdk:cnode24717", 00:17:45.164 "min_cntlid": 65520, 00:17:45.164 "method": "nvmf_create_subsystem", 00:17:45.164 "req_id": 1 00:17:45.164 } 00:17:45.164 Got JSON-RPC error response 00:17:45.164 response: 00:17:45.164 { 00:17:45.164 "code": -32602, 00:17:45.164 "message": "Invalid cntlid range [65520-65519]" 00:17:45.164 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 
00:17:45.164 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode736 -I 0 00:17:45.164 [2024-10-01 15:35:24.561425] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode736: invalid cntlid range [1-0] 00:17:45.164 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:17:45.164 { 00:17:45.164 "nqn": "nqn.2016-06.io.spdk:cnode736", 00:17:45.164 "max_cntlid": 0, 00:17:45.164 "method": "nvmf_create_subsystem", 00:17:45.164 "req_id": 1 00:17:45.164 } 00:17:45.164 Got JSON-RPC error response 00:17:45.164 response: 00:17:45.164 { 00:17:45.164 "code": -32602, 00:17:45.164 "message": "Invalid cntlid range [1-0]" 00:17:45.164 }' 00:17:45.164 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:17:45.164 { 00:17:45.164 "nqn": "nqn.2016-06.io.spdk:cnode736", 00:17:45.164 "max_cntlid": 0, 00:17:45.164 "method": "nvmf_create_subsystem", 00:17:45.164 "req_id": 1 00:17:45.164 } 00:17:45.164 Got JSON-RPC error response 00:17:45.164 response: 00:17:45.164 { 00:17:45.164 "code": -32602, 00:17:45.164 "message": "Invalid cntlid range [1-0]" 00:17:45.164 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:45.164 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7745 -I 65520 00:17:45.425 [2024-10-01 15:35:24.750030] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7745: invalid cntlid range [1-65520] 00:17:45.425 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:17:45.425 { 00:17:45.425 "nqn": "nqn.2016-06.io.spdk:cnode7745", 00:17:45.425 "max_cntlid": 65520, 00:17:45.425 "method": "nvmf_create_subsystem", 
00:17:45.425 "req_id": 1 00:17:45.425 } 00:17:45.425 Got JSON-RPC error response 00:17:45.425 response: 00:17:45.425 { 00:17:45.425 "code": -32602, 00:17:45.425 "message": "Invalid cntlid range [1-65520]" 00:17:45.425 }' 00:17:45.425 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:17:45.425 { 00:17:45.425 "nqn": "nqn.2016-06.io.spdk:cnode7745", 00:17:45.425 "max_cntlid": 65520, 00:17:45.425 "method": "nvmf_create_subsystem", 00:17:45.425 "req_id": 1 00:17:45.425 } 00:17:45.425 Got JSON-RPC error response 00:17:45.425 response: 00:17:45.425 { 00:17:45.425 "code": -32602, 00:17:45.425 "message": "Invalid cntlid range [1-65520]" 00:17:45.425 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:45.425 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14622 -i 6 -I 5 00:17:45.686 [2024-10-01 15:35:24.930600] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14622: invalid cntlid range [6-5] 00:17:45.686 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:17:45.686 { 00:17:45.686 "nqn": "nqn.2016-06.io.spdk:cnode14622", 00:17:45.686 "min_cntlid": 6, 00:17:45.686 "max_cntlid": 5, 00:17:45.686 "method": "nvmf_create_subsystem", 00:17:45.686 "req_id": 1 00:17:45.686 } 00:17:45.686 Got JSON-RPC error response 00:17:45.686 response: 00:17:45.686 { 00:17:45.686 "code": -32602, 00:17:45.686 "message": "Invalid cntlid range [6-5]" 00:17:45.686 }' 00:17:45.686 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:17:45.686 { 00:17:45.686 "nqn": "nqn.2016-06.io.spdk:cnode14622", 00:17:45.686 "min_cntlid": 6, 00:17:45.686 "max_cntlid": 5, 00:17:45.686 "method": "nvmf_create_subsystem", 00:17:45.686 "req_id": 1 00:17:45.686 } 00:17:45.686 Got JSON-RPC error response 00:17:45.686 
response: 00:17:45.686 { 00:17:45.686 "code": -32602, 00:17:45.686 "message": "Invalid cntlid range [6-5]" 00:17:45.686 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:45.686 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:17:45.686 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:17:45.686 { 00:17:45.686 "name": "foobar", 00:17:45.686 "method": "nvmf_delete_target", 00:17:45.686 "req_id": 1 00:17:45.686 } 00:17:45.686 Got JSON-RPC error response 00:17:45.686 response: 00:17:45.686 { 00:17:45.686 "code": -32602, 00:17:45.686 "message": "The specified target doesn'\''t exist, cannot delete it." 00:17:45.686 }' 00:17:45.686 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:17:45.686 { 00:17:45.686 "name": "foobar", 00:17:45.686 "method": "nvmf_delete_target", 00:17:45.686 "req_id": 1 00:17:45.686 } 00:17:45.686 Got JSON-RPC error response 00:17:45.686 response: 00:17:45.686 { 00:17:45.686 "code": -32602, 00:17:45.686 "message": "The specified target doesn't exist, cannot delete it." 
00:17:45.686 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:17:45.686 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:17:45.686 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:17:45.686 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # nvmfcleanup 00:17:45.686 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:17:45.686 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:45.686 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:17:45.686 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:45.686 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:45.686 rmmod nvme_tcp 00:17:45.686 rmmod nvme_fabrics 00:17:45.686 rmmod nvme_keyring 00:17:45.686 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:45.686 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:17:45.686 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:17:45.686 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@513 -- # '[' -n 3088844 ']' 00:17:45.686 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@514 -- # killprocess 3088844 00:17:45.686 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 3088844 ']' 00:17:45.686 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 3088844 00:17:45.948 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:17:45.948 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:45.948 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3088844 00:17:45.948 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:45.948 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:45.948 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3088844' 00:17:45.948 killing process with pid 3088844 00:17:45.948 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 3088844 00:17:45.948 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 3088844 00:17:45.948 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:17:45.948 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:17:45.948 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:17:45.948 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:17:45.948 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@787 -- # iptables-save 00:17:45.948 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:17:45.948 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@787 -- # iptables-restore 00:17:45.948 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:45.948 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:45.948 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:45.948 15:35:25 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:45.948 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:48.499 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:48.499 00:17:48.499 real 0m14.420s 00:17:48.499 user 0m21.322s 00:17:48.499 sys 0m6.860s 00:17:48.499 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:48.499 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:48.499 ************************************ 00:17:48.499 END TEST nvmf_invalid 00:17:48.499 ************************************ 00:17:48.499 15:35:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:48.499 15:35:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:48.499 15:35:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:48.499 15:35:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:48.499 ************************************ 00:17:48.499 START TEST nvmf_connect_stress 00:17:48.499 ************************************ 00:17:48.499 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:48.499 * Looking for test storage... 
00:17:48.499 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:48.499 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:48.499 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:17:48.499 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:48.499 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:48.499 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:48.499 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:48.499 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:48.499 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:17:48.499 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:17:48.499 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:17:48.499 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:17:48.499 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:17:48.499 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:17:48.499 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:17:48.499 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:48.499 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:17:48.499 15:35:27 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:17:48.499 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:48.499 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:48.499 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:17:48.499 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:17:48.499 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:48.499 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:17:48.499 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:17:48.499 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:17:48.499 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:17:48.499 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:48.499 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:17:48.499 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:17:48.499 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:48.499 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:48.499 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:17:48.499 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:48.499 15:35:27 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:48.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.499 --rc genhtml_branch_coverage=1 00:17:48.499 --rc genhtml_function_coverage=1 00:17:48.499 --rc genhtml_legend=1 00:17:48.499 --rc geninfo_all_blocks=1 00:17:48.499 --rc geninfo_unexecuted_blocks=1 00:17:48.499 00:17:48.499 ' 00:17:48.499 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:48.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.499 --rc genhtml_branch_coverage=1 00:17:48.499 --rc genhtml_function_coverage=1 00:17:48.499 --rc genhtml_legend=1 00:17:48.499 --rc geninfo_all_blocks=1 00:17:48.499 --rc geninfo_unexecuted_blocks=1 00:17:48.499 00:17:48.499 ' 00:17:48.499 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:48.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.499 --rc genhtml_branch_coverage=1 00:17:48.499 --rc genhtml_function_coverage=1 00:17:48.499 --rc genhtml_legend=1 00:17:48.499 --rc geninfo_all_blocks=1 00:17:48.499 --rc geninfo_unexecuted_blocks=1 00:17:48.499 00:17:48.499 ' 00:17:48.499 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:48.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.499 --rc genhtml_branch_coverage=1 00:17:48.499 --rc genhtml_function_coverage=1 00:17:48.499 --rc genhtml_legend=1 00:17:48.499 --rc geninfo_all_blocks=1 00:17:48.499 --rc geninfo_unexecuted_blocks=1 00:17:48.499 00:17:48.499 ' 00:17:48.499 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:48.499 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 
00:17:48.499 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:48.499 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:48.499 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:48.499 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:48.499 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:48.499 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:48.499 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:48.499 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:48.499 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:48.499 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:48.500 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:48.500 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:48.500 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:48.500 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:48.500 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:48.500 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:48.500 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:48.500 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:17:48.500 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:48.500 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:48.500 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:48.500 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.500 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.500 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.500 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:17:48.500 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.500 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:17:48.500 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:48.500 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:48.500 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:48.500 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:48.500 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:48.500 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:48.500 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:48.500 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:48.500 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:48.500 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:48.500 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
00:17:48.500 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:17:48.500 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:48.500 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # prepare_net_devs 00:17:48.500 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@434 -- # local -g is_hw=no 00:17:48.500 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # remove_spdk_ns 00:17:48.500 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:48.500 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:48.500 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:48.500 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:17:48.500 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:17:48.500 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:17:48.500 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:56.639 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:56.639 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:17:56.639 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:56.639 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:56.639 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:56.639 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:56.639 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:56.639 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:17:56.639 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:56.639 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:17:56.639 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:17:56.639 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:17:56.639 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:17:56.639 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:17:56.639 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:17:56.639 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:56.639 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:56.639 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:56.639 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:56.639 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:56.639 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:56.639 15:35:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:56.639 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:56.639 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:56.639 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:56.639 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:56.639 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:17:56.639 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:17:56.639 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:17:56.639 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:17:56.639 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:17:56.639 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:17:56.639 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:56.639 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:56.639 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:56.639 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:56.639 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:56.639 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:56.639 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:56.639 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:56.639 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:56.639 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:56.639 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:56.639 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:56.639 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:56.639 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:56.639 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:56.639 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:56.639 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:17:56.639 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:17:56.639 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:17:56.639 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:56.639 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:56.639 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:56.640 Found net devices under 0000:31:00.0: cvl_0_0 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:56.640 Found net devices under 0000:31:00.1: cvl_0_1 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:56.640 15:35:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # is_hw=yes 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:56.640 15:35:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:56.640 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:56.640 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.601 ms 00:17:56.640 00:17:56.640 --- 10.0.0.2 ping statistics --- 00:17:56.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:56.640 rtt min/avg/max/mdev = 0.601/0.601/0.601/0.000 ms 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:56.640 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:56.640 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:17:56.640 00:17:56.640 --- 10.0.0.1 ping statistics --- 00:17:56.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:56.640 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # return 0 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:17:56.640 15:35:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # nvmfpid=3094095 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # waitforlisten 3094095 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 3094095 ']' 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:56.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:56.640 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:56.640 [2024-10-01 15:35:35.527974] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
00:17:56.640 [2024-10-01 15:35:35.528046] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:56.640 [2024-10-01 15:35:35.569770] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:56.640 [2024-10-01 15:35:35.618318] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:56.640 [2024-10-01 15:35:35.665121] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:56.640 [2024-10-01 15:35:35.665173] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:56.640 [2024-10-01 15:35:35.665182] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:56.640 [2024-10-01 15:35:35.665189] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:56.640 [2024-10-01 15:35:35.665195] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:56.640 [2024-10-01 15:35:35.665351] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:56.640 [2024-10-01 15:35:35.665507] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:56.640 [2024-10-01 15:35:35.665509] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:17:56.901 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:56.901 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:17:56.901 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:56.901 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:56.901 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:57.161 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:57.161 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:57.161 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.161 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:57.161 [2024-10-01 15:35:36.377092] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:57.161 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.161 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:57.161 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 
-- # xtrace_disable 00:17:57.161 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:57.161 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.161 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:57.161 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.161 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:57.161 [2024-10-01 15:35:36.418510] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:57.161 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.161 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:57.161 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.161 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:57.161 NULL1 00:17:57.161 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.161 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3094292 00:17:57.161 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:57.161 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:57.161 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:57.161 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:17:57.161 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:57.161 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:57.161 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:57.161 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:57.161 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:57.161 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:57.161 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:57.161 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:57.161 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:57.161 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:57.161 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:57.161 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:57.161 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:57.161 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:17:57.161 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:57.161 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:57.161 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:57.161 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:57.161 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:57.161 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:57.161 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:57.161 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:57.161 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:57.161 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:57.161 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:57.161 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:57.161 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:57.161 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:57.161 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:57.161 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:57.161 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:57.161 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:57.162 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:57.162 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:57.162 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:57.162 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:57.162 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:57.162 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:57.162 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:57.162 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:57.162 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3094292 00:17:57.162 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:57.162 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.162 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:57.733 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.733 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3094292 00:17:57.733 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:57.733 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.733 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:57.995 15:35:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.995 15:35:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3094292 00:17:57.995 15:35:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:57.995 15:35:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.995 15:35:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:58.255 15:35:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.255 15:35:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3094292 00:17:58.255 15:35:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:58.255 15:35:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.255 15:35:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:58.516 15:35:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.516 15:35:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3094292 00:17:58.516 15:35:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:58.516 15:35:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.516 15:35:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:58.776 15:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.776 15:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3094292 00:17:58.776 15:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:58.776 15:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.776 15:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:59.352 15:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.352 15:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3094292 00:17:59.352 15:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:59.352 15:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.352 15:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:59.656 15:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.656 15:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3094292 00:17:59.656 15:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:59.656 15:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.656 15:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:59.916 15:35:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.916 15:35:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3094292 00:17:59.916 15:35:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:59.916 15:35:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.916 15:35:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:00.177 15:35:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.177 15:35:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3094292 00:18:00.177 15:35:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:00.177 15:35:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.177 15:35:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:00.438 15:35:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.438 15:35:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3094292 00:18:00.438 15:35:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:00.438 15:35:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.438 15:35:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:00.698 15:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.698 15:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3094292 00:18:00.698 15:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:00.698 15:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.698 
15:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:01.269 15:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.269 15:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3094292 00:18:01.269 15:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:01.269 15:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.269 15:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:01.530 15:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.530 15:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3094292 00:18:01.530 15:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:01.530 15:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.530 15:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:01.791 15:35:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.791 15:35:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3094292 00:18:01.791 15:35:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:01.791 15:35:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.791 15:35:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:02.052 15:35:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.052 
15:35:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3094292 00:18:02.052 15:35:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:02.052 15:35:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.052 15:35:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:02.313 15:35:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.313 15:35:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3094292 00:18:02.313 15:35:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:02.313 15:35:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.313 15:35:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:02.883 15:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.883 15:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3094292 00:18:02.883 15:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:02.883 15:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.883 15:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:03.143 15:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.143 15:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3094292 00:18:03.143 15:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:18:03.143 15:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.143 15:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:03.405 15:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.405 15:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3094292 00:18:03.405 15:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:03.405 15:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.405 15:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:03.665 15:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.665 15:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3094292 00:18:03.665 15:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:03.665 15:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.665 15:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:04.237 15:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.237 15:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3094292 00:18:04.237 15:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:04.237 15:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.237 15:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:18:04.498 15:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.498 15:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3094292 00:18:04.498 15:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:04.498 15:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.498 15:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:04.759 15:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.759 15:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3094292 00:18:04.759 15:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:04.759 15:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.759 15:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:05.020 15:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.020 15:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3094292 00:18:05.020 15:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:05.020 15:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.020 15:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:05.280 15:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.280 15:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 3094292 00:18:05.280 15:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:05.280 15:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.280 15:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:05.852 15:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.852 15:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3094292 00:18:05.852 15:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:05.852 15:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.852 15:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:06.113 15:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.113 15:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3094292 00:18:06.113 15:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:06.113 15:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.113 15:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:06.374 15:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.374 15:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3094292 00:18:06.374 15:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:06.374 15:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:06.374 15:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:06.635 15:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.635 15:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3094292 00:18:06.635 15:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:06.635 15:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.635 15:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:06.897 15:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.897 15:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3094292 00:18:06.897 15:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:06.897 15:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.897 15:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:07.157 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:07.419 15:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.419 15:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3094292 00:18:07.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3094292) - No such process 00:18:07.419 15:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3094292 00:18:07.419 15:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
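The long run of `kill -0 3094292` / `rpc_cmd` lines above, ending in "No such process" followed by `wait`, is a spawn-and-poll loop: connect_stress.sh launches background workers, then keeps issuing RPCs while the stress process is still alive. A minimal sketch of that pattern (not the SPDK code itself; the worker here is a stand-in `sleep`):

```shell
#!/usr/bin/env bash
# Hedged sketch of the pattern traced above: launch a background worker,
# poll it with `kill -0` (signal 0 checks liveness without delivering a
# signal), then `wait` to collect its status once the poll fails.
sleep 2 &                 # stand-in for the stress worker process
worker=$!

polls=0
while kill -0 "$worker" 2>/dev/null; do
    polls=$((polls + 1))  # stand-in for the rpc_cmd issued each round
    sleep 0.5
done

wait "$worker"            # collect the exit status, as the script's wait does
echo "worker exited after $polls polls"
```

Once the worker exits and is reaped, `kill -0` fails with "No such process", which is exactly the message the log records at connect_stress.sh line 34.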
target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:07.419 15:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:07.419 15:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:18:07.419 15:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:07.419 15:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:18:07.419 15:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:07.419 15:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:18:07.419 15:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:07.419 15:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:07.419 rmmod nvme_tcp 00:18:07.419 rmmod nvme_fabrics 00:18:07.419 rmmod nvme_keyring 00:18:07.419 15:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:07.419 15:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:18:07.419 15:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:18:07.419 15:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@513 -- # '[' -n 3094095 ']' 00:18:07.419 15:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # killprocess 3094095 00:18:07.419 15:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 3094095 ']' 00:18:07.419 15:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 3094095 00:18:07.419 15:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@955 -- # uname 00:18:07.419 15:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:07.419 15:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3094095 00:18:07.419 15:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:07.419 15:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:07.419 15:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3094095' 00:18:07.419 killing process with pid 3094095 00:18:07.420 15:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 3094095 00:18:07.420 15:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 3094095 00:18:07.680 15:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:07.680 15:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:18:07.680 15:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:18:07.680 15:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:18:07.680 15:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:18:07.680 15:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@787 -- # iptables-save 00:18:07.680 15:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@787 -- # iptables-restore 00:18:07.681 15:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:07.681 15:35:46 
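The teardown above (`uname` check, `ps --no-headers -o comm=`, a refusal branch for `sudo`, then `kill` and `wait`) is a guarded-kill pattern: verify the PID is alive and that its command name is safe to signal before sending SIGTERM. A hypothetical re-sketch of that logic, with `killprocess_sketch` as an illustrative name (the portable `ps -p PID -o comm=` form is used here):

```shell
#!/usr/bin/env bash
# Hedged sketch of the guarded-kill teardown traced above: check
# liveness, check the command name, refuse to signal "sudo", then TERM.
killprocess_sketch() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1      # already gone
    local name
    name=$(ps -p "$pid" -o comm=)                # command name only
    [ "$name" = "sudo" ] && return 1             # never signal sudo itself
    echo "killing process with pid $pid"
    kill "$pid"                                  # SIGTERM by default
}

sleep 30 &
killprocess_sketch "$!"
```

The name check matters because the test harness may run helpers under sudo; TERM-ing the sudo wrapper instead of the target would leave the real process orphaned.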
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:07.681 15:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:07.681 15:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:07.681 15:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.595 15:35:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:09.595 00:18:09.595 real 0m21.488s 00:18:09.595 user 0m42.200s 00:18:09.595 sys 0m9.473s 00:18:09.595 15:35:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:09.595 15:35:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:09.595 ************************************ 00:18:09.595 END TEST nvmf_connect_stress 00:18:09.595 ************************************ 00:18:09.595 15:35:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:09.595 15:35:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:09.595 15:35:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:09.595 15:35:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:09.855 ************************************ 00:18:09.855 START TEST nvmf_fused_ordering 00:18:09.855 ************************************ 00:18:09.855 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:09.855 * Looking for test storage... 
00:18:09.855 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:09.855 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:09.855 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lcov --version 00:18:09.855 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:09.855 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:09.855 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:09.855 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:09.855 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:09.855 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:18:09.856 15:35:49 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:09.856 15:35:49 
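The scripts/common.sh trace above (`lt 1.15 2` → `cmp_versions 1.15 '<' 2`, splitting on `IFS=.-:` and walking the fields) is a field-wise numeric version comparison. A condensed sketch of that algorithm under an illustrative name, `lt_sketch` (missing fields compare as 0, mirroring the traced `ver1`/`ver2` loop):

```shell
#!/usr/bin/env bash
# Hedged sketch of the version comparison traced above: split both
# versions on "." / "-" / ":" and compare field by field, numerically.
lt_sketch() {                       # lt_sketch 1.15 2 -> success if 1.15 < 2
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local n=${#ver1[@]}
    (( ${#ver2[@]} > n )) && n=${#ver2[@]}
    local v
    for (( v = 0; v < n; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # absent field counts as 0
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1                        # equal versions are not less-than
}

lt_sketch 1.15 2 && echo "1.15 < 2"
```

This is why the trace ends with `return 0` for `lt 1.15 2`: the first fields already decide the comparison (1 < 2), so the lcov version qualifies.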
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:09.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.856 --rc genhtml_branch_coverage=1 00:18:09.856 --rc genhtml_function_coverage=1 00:18:09.856 --rc genhtml_legend=1 00:18:09.856 --rc geninfo_all_blocks=1 00:18:09.856 --rc geninfo_unexecuted_blocks=1 00:18:09.856 00:18:09.856 ' 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:09.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.856 --rc genhtml_branch_coverage=1 00:18:09.856 --rc genhtml_function_coverage=1 00:18:09.856 --rc genhtml_legend=1 00:18:09.856 --rc geninfo_all_blocks=1 00:18:09.856 --rc geninfo_unexecuted_blocks=1 00:18:09.856 00:18:09.856 ' 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:09.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.856 --rc genhtml_branch_coverage=1 00:18:09.856 --rc genhtml_function_coverage=1 00:18:09.856 --rc genhtml_legend=1 00:18:09.856 --rc geninfo_all_blocks=1 00:18:09.856 --rc geninfo_unexecuted_blocks=1 00:18:09.856 00:18:09.856 ' 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:09.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.856 --rc genhtml_branch_coverage=1 00:18:09.856 --rc genhtml_function_coverage=1 00:18:09.856 --rc genhtml_legend=1 00:18:09.856 --rc geninfo_all_blocks=1 00:18:09.856 --rc geninfo_unexecuted_blocks=1 00:18:09.856 00:18:09.856 ' 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 
00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:09.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:18:09.856 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:18.001 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:18.001 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:18:18.001 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:18.001 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:18.001 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:18.001 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:18.001 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:18.001 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:18:18.001 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:18.001 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:18:18.001 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:18.002 15:35:56 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:18.002 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:18.002 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ up == up ]] 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:18.002 Found net devices under 0000:31:00.0: cvl_0_0 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ up == up ]] 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:18.002 Found net devices under 0000:31:00.1: cvl_0_1 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:18.002 15:35:56 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # is_hw=yes 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:18.002 15:35:56 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:18.002 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:18.002 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.431 ms 00:18:18.002 00:18:18.002 --- 10.0.0.2 ping statistics --- 00:18:18.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:18.002 rtt min/avg/max/mdev = 0.431/0.431/0.431/0.000 ms 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:18.002 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:18.002 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:18:18.002 00:18:18.002 --- 10.0.0.1 ping statistics --- 00:18:18.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:18.002 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # return 0 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:18:18.002 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:18:18.003 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:18:18.003 15:35:56 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:18:18.003 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:18.003 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:18.003 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # nvmfpid=3100549 00:18:18.003 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # waitforlisten 3100549 00:18:18.003 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:18.003 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 3100549 ']' 00:18:18.003 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:18.003 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:18.003 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:18.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:18.003 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:18.003 15:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:18.003 [2024-10-01 15:35:57.042370] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
00:18:18.003 [2024-10-01 15:35:57.042459] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:18.003 [2024-10-01 15:35:57.084046] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:18.003 [2024-10-01 15:35:57.131853] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.003 [2024-10-01 15:35:57.177504] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:18.003 [2024-10-01 15:35:57.177557] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:18.003 [2024-10-01 15:35:57.177569] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:18.003 [2024-10-01 15:35:57.177576] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:18.003 [2024-10-01 15:35:57.177582] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:18.003 [2024-10-01 15:35:57.177603] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:18.576 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:18.576 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:18:18.576 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:18.576 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:18.576 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:18.576 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:18.576 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:18.576 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.576 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:18.576 [2024-10-01 15:35:57.900488] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:18.576 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.576 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:18.576 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.576 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:18.576 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.576 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:18.576 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.576 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:18.576 [2024-10-01 15:35:57.924764] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:18.576 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.576 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:18.576 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.576 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:18.577 NULL1 00:18:18.577 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.577 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:18:18.577 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.577 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:18.577 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.577 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:18:18.577 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.577 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:18.577 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.577 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:18.577 [2024-10-01 15:35:57.993411] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:18:18.577 [2024-10-01 15:35:57.993460] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3100895 ] 00:18:18.577 [2024-10-01 15:35:58.028349] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:18:19.151 Attached to nqn.2016-06.io.spdk:cnode1 00:18:19.151 Namespace ID: 1 size: 1GB
00:18:19.151 fused_ordering(0) … 00:18:21.133 fused_ordering(1023) [1024 fused-ordering iterations logged between 00:18:19.151 and 00:18:21.133; repetitive per-iteration counter lines collapsed]
15:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM
EXIT 00:18:21.133 15:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:18:21.133 15:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:21.133 15:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:18:21.133 15:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:21.133 15:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:18:21.133 15:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:21.133 15:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:21.133 rmmod nvme_tcp 00:18:21.133 rmmod nvme_fabrics 00:18:21.133 rmmod nvme_keyring 00:18:21.133 15:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:21.133 15:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:18:21.133 15:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:18:21.133 15:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@513 -- # '[' -n 3100549 ']' 00:18:21.133 15:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # killprocess 3100549 00:18:21.133 15:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 3100549 ']' 00:18:21.133 15:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 3100549 00:18:21.133 15:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:18:21.133 15:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:21.133 15:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3100549 00:18:21.395 15:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:21.395 15:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:21.395 15:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3100549' 00:18:21.395 killing process with pid 3100549 00:18:21.395 15:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 3100549 00:18:21.395 15:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 3100549 00:18:21.395 15:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:21.395 15:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:18:21.395 15:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:18:21.395 15:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:18:21.395 15:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@787 -- # iptables-save 00:18:21.395 15:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:18:21.395 15:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@787 -- # iptables-restore 00:18:21.395 15:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:21.395 15:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:21.395 15:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:21.395 15:36:00 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:21.395 15:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:23.942 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:23.942 00:18:23.942 real 0m13.778s 00:18:23.942 user 0m7.118s 00:18:23.942 sys 0m7.534s 00:18:23.942 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:23.942 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:23.942 ************************************ 00:18:23.942 END TEST nvmf_fused_ordering 00:18:23.942 ************************************ 00:18:23.942 15:36:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:18:23.942 15:36:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:23.942 15:36:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:23.942 15:36:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:23.942 ************************************ 00:18:23.942 START TEST nvmf_ns_masking 00:18:23.942 ************************************ 00:18:23.942 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:18:23.942 * Looking for test storage... 
00:18:23.942 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lcov --version 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:18:23.942 
15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:23.942 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:18:23.942 --rc genhtml_branch_coverage=1 00:18:23.942 --rc genhtml_function_coverage=1 00:18:23.942 --rc genhtml_legend=1 00:18:23.942 --rc geninfo_all_blocks=1 00:18:23.942 --rc geninfo_unexecuted_blocks=1 00:18:23.942 00:18:23.942 ' 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:23.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:23.942 --rc genhtml_branch_coverage=1 00:18:23.942 --rc genhtml_function_coverage=1 00:18:23.942 --rc genhtml_legend=1 00:18:23.942 --rc geninfo_all_blocks=1 00:18:23.942 --rc geninfo_unexecuted_blocks=1 00:18:23.942 00:18:23.942 ' 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:23.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:23.942 --rc genhtml_branch_coverage=1 00:18:23.942 --rc genhtml_function_coverage=1 00:18:23.942 --rc genhtml_legend=1 00:18:23.942 --rc geninfo_all_blocks=1 00:18:23.942 --rc geninfo_unexecuted_blocks=1 00:18:23.942 00:18:23.942 ' 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:23.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:23.942 --rc genhtml_branch_coverage=1 00:18:23.942 --rc genhtml_function_coverage=1 00:18:23.942 --rc genhtml_legend=1 00:18:23.942 --rc geninfo_all_blocks=1 00:18:23.942 --rc geninfo_unexecuted_blocks=1 00:18:23.942 00:18:23.942 ' 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:23.942 15:36:03 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.942 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.943 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:18:23.943 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.943 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:18:23.943 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:23.943 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:23.943 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:23.943 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:23.943 15:36:03 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:23.943 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:23.943 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:23.943 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:23.943 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:23.943 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:23.943 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:23.943 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:18:23.943 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:18:23.943 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:18:23.943 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=14c5b112-3d16-4d06-af09-895b0bb0298e 00:18:23.943 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:18:23.943 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=dd4f9dfc-2c2c-4614-9062-0e46c732d358 00:18:23.943 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:18:23.943 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:18:23.943 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:18:23.943 15:36:03 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:18:23.943 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=73f7cc58-597b-4cd2-b54d-5dc3145d3463 00:18:23.943 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:18:23.943 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:18:23.943 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:23.943 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:23.943 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:23.943 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:23.943 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:23.943 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:23.943 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:23.943 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:18:23.943 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:18:23.943 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:18:23.943 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:18:32.086 15:36:10 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:32.086 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:18:32.086 15:36:10 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:32.086 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 
-- # [[ tcp == tcp ]] 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ up == up ]] 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:32.086 Found net devices under 0000:31:00.0: cvl_0_0 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ up == up ]] 00:18:32.086 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:32.087 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:32.087 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:32.087 Found net devices under 0000:31:00.1: cvl_0_1 00:18:32.087 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 
00:18:32.087 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:18:32.087 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # is_hw=yes 00:18:32.087 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:18:32.087 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:18:32.087 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:18:32.087 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:32.087 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:32.087 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:32.087 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:32.087 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:32.087 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:32.087 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:32.087 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:32.087 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:32.087 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:32.087 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:32.087 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr 
flush cvl_0_0 00:18:32.087 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:32.087 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:32.087 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:32.087 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:32.087 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:32.087 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:32.087 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:32.087 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:32.087 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:32.087 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:32.087 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:32.087 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:32.087 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:18:32.087 00:18:32.087 --- 10.0.0.2 ping statistics --- 00:18:32.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.087 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:18:32.087 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:32.087 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:32.087 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:18:32.087 00:18:32.087 --- 10.0.0.1 ping statistics --- 00:18:32.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.087 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:18:32.087 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:32.087 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # return 0 00:18:32.087 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:18:32.087 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:32.087 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:18:32.087 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:18:32.087 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:32.087 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:18:32.087 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:18:32.087 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:18:32.087 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@503 -- # timing_enter 
start_nvmf_tgt 00:18:32.087 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:32.087 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:32.087 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- # nvmfpid=3106173 00:18:32.087 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # waitforlisten 3106173 00:18:32.087 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:32.087 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 3106173 ']' 00:18:32.087 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.087 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:32.087 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:32.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:32.087 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:32.087 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:32.087 [2024-10-01 15:36:11.037958] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
00:18:32.087 [2024-10-01 15:36:11.038022] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:32.087 [2024-10-01 15:36:11.080665] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:32.087 [2024-10-01 15:36:11.130588] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.087 [2024-10-01 15:36:11.175945] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:32.087 [2024-10-01 15:36:11.176000] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:32.087 [2024-10-01 15:36:11.176010] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:32.087 [2024-10-01 15:36:11.176019] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:32.087 [2024-10-01 15:36:11.176027] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:32.087 [2024-10-01 15:36:11.176054] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.660 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:32.660 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:18:32.660 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:32.660 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:32.660 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:32.660 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:32.660 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:32.660 [2024-10-01 15:36:12.076093] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:32.660 15:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:18:32.660 15:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:18:32.660 15:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:32.921 Malloc1 00:18:32.921 15:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:33.182 Malloc2 00:18:33.182 15:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:33.442 15:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:18:33.703 15:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:33.703 [2024-10-01 15:36:13.097476] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:33.703 15:36:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:18:33.703 15:36:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 73f7cc58-597b-4cd2-b54d-5dc3145d3463 -a 10.0.0.2 -s 4420 -i 4 00:18:33.963 15:36:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:18:33.963 15:36:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:18:33.963 15:36:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:33.963 15:36:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:33.963 15:36:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:18:36.509 15:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:36.509 15:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:36.509 15:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # 
grep -c SPDKISFASTANDAWESOME 00:18:36.509 15:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:36.509 15:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:36.509 15:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:18:36.509 15:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:36.509 15:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:36.509 15:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:36.509 15:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:36.509 15:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:18:36.509 15:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:36.509 15:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:36.509 [ 0]:0x1 00:18:36.509 15:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:36.509 15:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:36.509 15:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a8ced7d1215e4e6fb1d365aaf0f450d0 00:18:36.509 15:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a8ced7d1215e4e6fb1d365aaf0f450d0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:36.509 15:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:18:36.509 15:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:18:36.509 15:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:36.509 15:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:36.509 [ 0]:0x1 00:18:36.509 15:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:36.509 15:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:36.509 15:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a8ced7d1215e4e6fb1d365aaf0f450d0 00:18:36.509 15:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a8ced7d1215e4e6fb1d365aaf0f450d0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:36.509 15:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:18:36.509 15:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:36.509 15:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:36.509 [ 1]:0x2 00:18:36.509 15:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:36.509 15:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:36.509 15:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=eb8d9257ea3c4cd99db75a2833bb90b9 00:18:36.509 15:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ eb8d9257ea3c4cd99db75a2833bb90b9 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:36.509 15:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:18:36.509 15:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:36.509 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:36.509 15:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:36.769 15:36:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:18:37.030 15:36:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:18:37.030 15:36:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 73f7cc58-597b-4cd2-b54d-5dc3145d3463 -a 10.0.0.2 -s 4420 -i 4 00:18:37.030 15:36:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:18:37.030 15:36:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:18:37.030 15:36:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:37.030 15:36:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:18:37.030 15:36:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:18:37.030 15:36:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:18:39.577 15:36:18 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:39.577 15:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:39.577 15:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:39.577 15:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:39.577 15:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:39.577 15:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:18:39.577 15:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:39.577 15:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:39.577 15:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:39.577 15:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:39.577 15:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:18:39.577 15:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:39.577 15:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:39.577 15:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:39.577 15:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:39.577 15:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # 
type -t ns_is_visible 00:18:39.577 15:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:39.577 15:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:39.577 15:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:39.577 15:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:39.577 15:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:39.577 15:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:39.577 15:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:39.577 15:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:39.577 15:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:39.577 15:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:39.577 15:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:39.577 15:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:39.577 15:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:18:39.577 15:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:39.577 15:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:39.577 [ 0]:0x2 00:18:39.577 15:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 
-- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:39.577 15:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:39.577 15:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=eb8d9257ea3c4cd99db75a2833bb90b9 00:18:39.577 15:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ eb8d9257ea3c4cd99db75a2833bb90b9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:39.577 15:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:39.577 15:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:18:39.577 15:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:39.577 15:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:39.577 [ 0]:0x1 00:18:39.577 15:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:39.577 15:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:39.577 15:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a8ced7d1215e4e6fb1d365aaf0f450d0 00:18:39.577 15:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a8ced7d1215e4e6fb1d365aaf0f450d0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:39.577 15:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:18:39.577 15:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:39.577 15:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@43 -- # grep 0x2 00:18:39.577 [ 1]:0x2 00:18:39.577 15:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:39.578 15:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:39.578 15:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=eb8d9257ea3c4cd99db75a2833bb90b9 00:18:39.578 15:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ eb8d9257ea3c4cd99db75a2833bb90b9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:39.578 15:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:39.837 15:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:18:39.837 15:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:39.837 15:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:39.837 15:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:39.837 15:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:39.837 15:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:39.837 15:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:39.837 15:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:39.837 15:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 
00:18:39.837 15:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:39.837 15:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:39.837 15:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:39.837 15:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:39.837 15:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:39.837 15:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:39.837 15:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:39.837 15:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:39.837 15:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:39.837 15:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:18:39.837 15:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:39.837 15:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:39.837 [ 0]:0x2 00:18:39.837 15:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:39.838 15:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:39.838 15:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=eb8d9257ea3c4cd99db75a2833bb90b9 00:18:39.838 15:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 
-- # [[ eb8d9257ea3c4cd99db75a2833bb90b9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:39.838 15:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:18:39.838 15:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:40.098 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:40.098 15:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:40.098 15:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:18:40.098 15:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 73f7cc58-597b-4cd2-b54d-5dc3145d3463 -a 10.0.0.2 -s 4420 -i 4 00:18:40.360 15:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:40.360 15:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:18:40.360 15:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:40.360 15:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:18:40.360 15:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:18:40.361 15:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:18:42.273 15:36:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:42.273 15:36:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk 
-l -o NAME,SERIAL 00:18:42.273 15:36:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:42.273 15:36:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:18:42.273 15:36:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:42.273 15:36:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:18:42.273 15:36:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:42.273 15:36:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:42.534 15:36:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:42.534 15:36:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:42.534 15:36:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:18:42.534 15:36:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:42.534 15:36:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:42.534 [ 0]:0x1 00:18:42.534 15:36:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:42.534 15:36:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:42.534 15:36:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a8ced7d1215e4e6fb1d365aaf0f450d0 00:18:42.534 15:36:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a8ced7d1215e4e6fb1d365aaf0f450d0 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:42.534 15:36:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:18:42.534 15:36:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:42.534 15:36:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:42.534 [ 1]:0x2 00:18:42.534 15:36:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:42.534 15:36:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:42.795 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=eb8d9257ea3c4cd99db75a2833bb90b9 00:18:42.795 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ eb8d9257ea3c4cd99db75a2833bb90b9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:42.795 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:42.795 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:18:42.795 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:42.795 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:42.795 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:42.795 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:42.795 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t 
ns_is_visible 00:18:42.795 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:42.795 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:42.795 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:42.795 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:42.795 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:42.795 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:43.056 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:43.056 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:43.056 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:43.056 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:43.056 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:43.056 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:43.056 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:18:43.056 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:43.056 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:43.056 [ 0]:0x2 00:18:43.056 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:43.056 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:43.056 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=eb8d9257ea3c4cd99db75a2833bb90b9 00:18:43.056 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ eb8d9257ea3c4cd99db75a2833bb90b9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:43.056 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:43.056 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:43.056 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:43.056 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:43.056 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:43.056 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:43.056 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:43.056 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:43.056 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # 
case "$(type -t "$arg")" in 00:18:43.056 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:43.056 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:43.056 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:43.056 [2024-10-01 15:36:22.483011] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:18:43.056 request: 00:18:43.056 { 00:18:43.056 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:43.056 "nsid": 2, 00:18:43.056 "host": "nqn.2016-06.io.spdk:host1", 00:18:43.056 "method": "nvmf_ns_remove_host", 00:18:43.056 "req_id": 1 00:18:43.056 } 00:18:43.056 Got JSON-RPC error response 00:18:43.056 response: 00:18:43.056 { 00:18:43.056 "code": -32602, 00:18:43.056 "message": "Invalid parameters" 00:18:43.056 } 00:18:43.318 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:43.318 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:43.318 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:43.318 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:43.318 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:18:43.318 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:43.318 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # 
valid_exec_arg ns_is_visible 0x1 00:18:43.318 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:43.318 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:43.318 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:43.318 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:43.318 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:43.318 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:43.318 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:43.318 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:43.318 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:43.318 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:43.318 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:43.318 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:43.318 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:43.319 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:43.319 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:43.319 15:36:22 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:18:43.319 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:43.319 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:43.319 [ 0]:0x2 00:18:43.319 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:43.319 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:43.319 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=eb8d9257ea3c4cd99db75a2833bb90b9 00:18:43.319 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ eb8d9257ea3c4cd99db75a2833bb90b9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:43.319 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:18:43.319 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:43.319 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:43.319 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3108598 00:18:43.319 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:18:43.319 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:18:43.319 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3108598 /var/tmp/host.sock 00:18:43.319 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 3108598 ']' 00:18:43.319 
15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:18:43.319 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:43.319 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:43.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:43.319 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:43.319 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:43.319 [2024-10-01 15:36:22.751805] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:18:43.319 [2024-10-01 15:36:22.751859] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3108598 ] 00:18:43.580 [2024-10-01 15:36:22.782794] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:18:43.580 [2024-10-01 15:36:22.833353] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.580 [2024-10-01 15:36:22.863942] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:44.152 15:36:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:44.152 15:36:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:18:44.152 15:36:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:44.413 15:36:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:44.675 15:36:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 14c5b112-3d16-4d06-af09-895b0bb0298e 00:18:44.675 15:36:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@783 -- # tr -d - 00:18:44.675 15:36:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 14C5B1123D164D06AF09895B0BB0298E -i 00:18:44.675 15:36:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid dd4f9dfc-2c2c-4614-9062-0e46c732d358 00:18:44.675 15:36:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@783 -- # tr -d - 00:18:44.675 15:36:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g DD4F9DFC2C2C461490620E46C732D358 -i 00:18:44.937 15:36:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:45.199 15:36:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:18:45.199 15:36:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:45.199 15:36:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:45.460 nvme0n1 00:18:45.721 15:36:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:45.721 15:36:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:45.982 nvme1n2 00:18:45.982 15:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:18:45.982 15:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:45.982 15:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:18:45.982 15:36:25 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:18:45.982 15:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:18:46.243 15:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:18:46.243 15:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:18:46.243 15:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:18:46.243 15:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:18:46.243 15:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 14c5b112-3d16-4d06-af09-895b0bb0298e == \1\4\c\5\b\1\1\2\-\3\d\1\6\-\4\d\0\6\-\a\f\0\9\-\8\9\5\b\0\b\b\0\2\9\8\e ]] 00:18:46.505 15:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:18:46.505 15:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:18:46.505 15:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:18:46.505 15:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ dd4f9dfc-2c2c-4614-9062-0e46c732d358 == \d\d\4\f\9\d\f\c\-\2\c\2\c\-\4\6\1\4\-\9\0\6\2\-\0\e\4\6\c\7\3\2\d\3\5\8 ]] 00:18:46.505 15:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 3108598 00:18:46.505 15:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 3108598 ']' 00:18:46.505 15:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@954 -- # kill -0 3108598 00:18:46.505 15:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:18:46.505 15:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:46.505 15:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3108598 00:18:46.505 15:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:46.505 15:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:46.505 15:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3108598' 00:18:46.505 killing process with pid 3108598 00:18:46.505 15:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 3108598 00:18:46.505 15:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 3108598 00:18:46.766 15:36:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:47.027 15:36:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:18:47.027 15:36:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:18:47.027 15:36:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:47.027 15:36:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:18:47.027 15:36:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:47.027 15:36:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:18:47.027 15:36:26 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:47.027 15:36:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:47.027 rmmod nvme_tcp 00:18:47.027 rmmod nvme_fabrics 00:18:47.027 rmmod nvme_keyring 00:18:47.027 15:36:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:47.027 15:36:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:18:47.027 15:36:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:18:47.027 15:36:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@513 -- # '[' -n 3106173 ']' 00:18:47.027 15:36:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # killprocess 3106173 00:18:47.027 15:36:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 3106173 ']' 00:18:47.027 15:36:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 3106173 00:18:47.027 15:36:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:18:47.027 15:36:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:47.027 15:36:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3106173 00:18:47.027 15:36:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:47.027 15:36:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:47.027 15:36:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3106173' 00:18:47.027 killing process with pid 3106173 00:18:47.027 15:36:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 
3106173 00:18:47.028 15:36:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 3106173 00:18:47.288 15:36:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:47.288 15:36:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:18:47.288 15:36:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:18:47.288 15:36:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:18:47.288 15:36:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # iptables-save 00:18:47.288 15:36:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:18:47.288 15:36:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # iptables-restore 00:18:47.288 15:36:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:47.288 15:36:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:47.288 15:36:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:47.288 15:36:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:47.288 15:36:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:49.838 00:18:49.838 real 0m25.764s 00:18:49.838 user 0m26.170s 00:18:49.838 sys 0m8.079s 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:49.838 
************************************ 00:18:49.838 END TEST nvmf_ns_masking 00:18:49.838 ************************************ 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:49.838 ************************************ 00:18:49.838 START TEST nvmf_nvme_cli 00:18:49.838 ************************************ 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:49.838 * Looking for test storage... 
00:18:49.838 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lcov --version 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:18:49.838 15:36:28 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:49.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.838 --rc 
genhtml_branch_coverage=1 00:18:49.838 --rc genhtml_function_coverage=1 00:18:49.838 --rc genhtml_legend=1 00:18:49.838 --rc geninfo_all_blocks=1 00:18:49.838 --rc geninfo_unexecuted_blocks=1 00:18:49.838 00:18:49.838 ' 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:49.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.838 --rc genhtml_branch_coverage=1 00:18:49.838 --rc genhtml_function_coverage=1 00:18:49.838 --rc genhtml_legend=1 00:18:49.838 --rc geninfo_all_blocks=1 00:18:49.838 --rc geninfo_unexecuted_blocks=1 00:18:49.838 00:18:49.838 ' 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:49.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.838 --rc genhtml_branch_coverage=1 00:18:49.838 --rc genhtml_function_coverage=1 00:18:49.838 --rc genhtml_legend=1 00:18:49.838 --rc geninfo_all_blocks=1 00:18:49.838 --rc geninfo_unexecuted_blocks=1 00:18:49.838 00:18:49.838 ' 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:49.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.838 --rc genhtml_branch_coverage=1 00:18:49.838 --rc genhtml_function_coverage=1 00:18:49.838 --rc genhtml_legend=1 00:18:49.838 --rc geninfo_all_blocks=1 00:18:49.838 --rc geninfo_unexecuted_blocks=1 00:18:49.838 00:18:49.838 ' 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:49.838 15:36:28 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:18:49.838 15:36:28 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.838 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:18:49.839 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.839 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:18:49.839 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:49.839 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:49.839 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:49.839 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:49.839 15:36:28 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:49.839 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:49.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:49.839 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:49.839 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:49.839 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:49.839 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:49.839 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:49.839 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:18:49.839 15:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:18:49.839 15:36:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:18:49.839 15:36:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:49.839 15:36:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:49.839 15:36:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:49.839 15:36:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:49.839 15:36:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:49.839 15:36:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:49.839 15:36:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:18:49.839 15:36:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:18:49.839 15:36:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:18:49.839 15:36:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:18:49.839 15:36:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:18:58.030 15:36:36 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 
00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:58.030 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:58.030 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:18:58.030 15:36:36 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ up == up ]] 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:58.030 Found net devices under 0000:31:00.0: cvl_0_0 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:58.030 15:36:36 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ up == up ]] 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:58.030 Found net devices under 0000:31:00.1: cvl_0_1 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # is_hw=yes 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:58.030 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:58.031 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:58.031 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:58.031 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:58.031 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:58.031 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 
00:18:58.031 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:58.031 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:58.031 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:58.031 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:58.031 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:58.031 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:58.031 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:58.031 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:58.031 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:58.031 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:58.031 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:58.031 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:58.031 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:58.031 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:58.031 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:58.031 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:58.031 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:58.031 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms 00:18:58.031 00:18:58.031 --- 10.0.0.2 ping statistics --- 00:18:58.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:58.031 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:18:58.031 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:58.031 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:58.031 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:18:58.031 00:18:58.031 --- 10.0.0.1 ping statistics --- 00:18:58.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:58.031 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:18:58.031 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:58.031 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # return 0 00:18:58.031 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:18:58.031 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:58.031 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:18:58.031 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:18:58.031 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:58.031 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:18:58.031 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:18:58.031 15:36:36 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:18:58.031 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:18:58.031 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:58.031 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:58.031 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@505 -- # nvmfpid=3113580 00:18:58.031 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@506 -- # waitforlisten 3113580 00:18:58.031 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:58.031 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 3113580 ']' 00:18:58.031 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:58.031 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:58.031 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:58.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:58.031 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:58.031 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:58.031 [2024-10-01 15:36:36.499434] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
00:18:58.031 [2024-10-01 15:36:36.499500] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:58.031 [2024-10-01 15:36:36.540752] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:58.031 [2024-10-01 15:36:36.565011] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:58.031 [2024-10-01 15:36:36.595812] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:58.031 [2024-10-01 15:36:36.595844] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:58.031 [2024-10-01 15:36:36.595850] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:58.031 [2024-10-01 15:36:36.595854] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:58.031 [2024-10-01 15:36:36.595859] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:58.031 [2024-10-01 15:36:36.598917] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:58.031 [2024-10-01 15:36:36.599190] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:58.031 [2024-10-01 15:36:36.599343] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:58.031 [2024-10-01 15:36:36.599344] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:18:58.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:58.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:18:58.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:58.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:58.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:58.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:58.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:58.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:58.031 [2024-10-01 15:36:37.338644] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:58.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:58.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:58.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:58.031 Malloc0 00:18:58.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:58.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:58.031 Malloc1 00:18:58.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:18:58.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:58.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:58.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:58.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:58.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:58.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:58.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:58.031 [2024-10-01 15:36:37.426416] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:58.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:58.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:58.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.032 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:18:58.292 00:18:58.292 Discovery Log Number of Records 2, Generation counter 2 00:18:58.292 =====Discovery Log Entry 0====== 00:18:58.292 trtype: tcp 00:18:58.292 adrfam: ipv4 00:18:58.292 subtype: current discovery subsystem 00:18:58.292 treq: not required 00:18:58.292 portid: 0 00:18:58.292 trsvcid: 4420 
00:18:58.292 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:58.292 traddr: 10.0.0.2 00:18:58.292 eflags: explicit discovery connections, duplicate discovery information 00:18:58.292 sectype: none 00:18:58.292 =====Discovery Log Entry 1====== 00:18:58.292 trtype: tcp 00:18:58.292 adrfam: ipv4 00:18:58.292 subtype: nvme subsystem 00:18:58.292 treq: not required 00:18:58.292 portid: 0 00:18:58.292 trsvcid: 4420 00:18:58.292 subnqn: nqn.2016-06.io.spdk:cnode1 00:18:58.292 traddr: 10.0.0.2 00:18:58.292 eflags: none 00:18:58.292 sectype: none 00:18:58.292 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:18:58.292 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:18:58.292 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@546 -- # local dev _ 00:18:58.292 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:58.292 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@545 -- # nvme list 00:18:58.292 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ Node == /dev/nvme* ]] 00:18:58.292 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:58.292 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ --------------------- == /dev/nvme* ]] 00:18:58.292 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:58.292 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:18:58.292 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:00.204 15:36:39 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:19:00.204 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:19:00.204 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:00.204 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:19:00.204 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:19:00.204 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:19:02.116 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:02.116 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:02.116 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:02.116 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:19:02.116 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:02.116 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:19:02.116 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:19:02.116 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@546 -- # local dev _ 00:19:02.116 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:19:02.116 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@545 -- # nvme list 00:19:02.116 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ Node == /dev/nvme* ]] 00:19:02.116 
15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:19:02.116 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ --------------------- == /dev/nvme* ]] 00:19:02.116 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:19:02.116 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:19:02.116 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n1 00:19:02.116 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:19:02.116 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:19:02.116 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n2 00:19:02.116 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:19:02.116 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:19:02.116 /dev/nvme0n2 ]] 00:19:02.116 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:19:02.116 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:19:02.116 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@546 -- # local dev _ 00:19:02.116 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:19:02.116 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@545 -- # nvme list 00:19:02.116 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ Node == /dev/nvme* ]] 00:19:02.116 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:19:02.116 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ 
--------------------- == /dev/nvme* ]] 00:19:02.116 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:19:02.116 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:19:02.116 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n1 00:19:02.116 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:19:02.116 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:19:02.116 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n2 00:19:02.116 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:19:02.116 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:19:02.116 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:02.376 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:02.376 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:02.376 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:19:02.376 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:02.376 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:02.376 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:02.376 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:02.376 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # 
return 0 00:19:02.376 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:19:02.376 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:02.376 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.376 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:02.376 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.376 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:19:02.376 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:19:02.376 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # nvmfcleanup 00:19:02.376 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:19:02.376 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:02.376 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:19:02.376 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:02.376 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:02.376 rmmod nvme_tcp 00:19:02.376 rmmod nvme_fabrics 00:19:02.637 rmmod nvme_keyring 00:19:02.637 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:02.637 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:19:02.637 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:19:02.637 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@513 -- # '[' -n 3113580 ']' 
00:19:02.637 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@514 -- # killprocess 3113580 00:19:02.637 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 3113580 ']' 00:19:02.637 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 3113580 00:19:02.637 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:19:02.637 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:02.637 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3113580 00:19:02.637 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:02.637 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:02.637 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3113580' 00:19:02.637 killing process with pid 3113580 00:19:02.637 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 3113580 00:19:02.637 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 3113580 00:19:02.637 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:19:02.637 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:19:02.637 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:19:02.637 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:19:02.637 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@787 -- # iptables-save 00:19:02.637 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@787 -- # grep -v 
SPDK_NVMF 00:19:02.637 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@787 -- # iptables-restore 00:19:02.637 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:02.637 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:02.637 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:02.637 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:02.637 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:05.179 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:05.179 00:19:05.179 real 0m15.388s 00:19:05.179 user 0m24.089s 00:19:05.179 sys 0m6.173s 00:19:05.179 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:05.179 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:05.179 ************************************ 00:19:05.179 END TEST nvmf_nvme_cli 00:19:05.180 ************************************ 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:05.180 ************************************ 00:19:05.180 
START TEST nvmf_vfio_user 00:19:05.180 ************************************ 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:19:05.180 * Looking for test storage... 00:19:05.180 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # lcov --version 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:19:05.180 15:36:44 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:19:05.180 15:36:44 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:05.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.180 --rc genhtml_branch_coverage=1 00:19:05.180 --rc genhtml_function_coverage=1 00:19:05.180 --rc genhtml_legend=1 00:19:05.180 --rc geninfo_all_blocks=1 00:19:05.180 --rc geninfo_unexecuted_blocks=1 00:19:05.180 00:19:05.180 ' 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:05.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.180 --rc genhtml_branch_coverage=1 00:19:05.180 --rc genhtml_function_coverage=1 00:19:05.180 --rc genhtml_legend=1 00:19:05.180 --rc geninfo_all_blocks=1 00:19:05.180 --rc geninfo_unexecuted_blocks=1 00:19:05.180 00:19:05.180 ' 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:05.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.180 --rc genhtml_branch_coverage=1 00:19:05.180 --rc genhtml_function_coverage=1 00:19:05.180 --rc genhtml_legend=1 00:19:05.180 --rc geninfo_all_blocks=1 00:19:05.180 --rc geninfo_unexecuted_blocks=1 00:19:05.180 00:19:05.180 ' 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:05.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.180 --rc genhtml_branch_coverage=1 00:19:05.180 --rc genhtml_function_coverage=1 00:19:05.180 --rc genhtml_legend=1 00:19:05.180 --rc geninfo_all_blocks=1 00:19:05.180 --rc geninfo_unexecuted_blocks=1 00:19:05.180 00:19:05.180 ' 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:05.180 
15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:05.180 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:05.181 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:05.181 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:05.181 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:05.181 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:05.181 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:05.181 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:05.181 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:05.181 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:05.181 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:05.181 15:36:44 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:05.181 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:19:05.181 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:05.181 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:19:05.181 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:19:05.181 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:19:05.181 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:19:05.181 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:19:05.181 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:19:05.181 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3115283 00:19:05.181 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3115283' 00:19:05.181 Process pid: 3115283 00:19:05.181 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:05.181 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3115283 00:19:05.181 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 3115283 ']' 00:19:05.181 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
-m '[0,1,2,3]' 00:19:05.181 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:05.181 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:05.181 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:05.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:05.181 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:05.181 15:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:05.181 [2024-10-01 15:36:44.531525] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:19:05.181 [2024-10-01 15:36:44.531596] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:05.181 [2024-10-01 15:36:44.566754] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:05.181 [2024-10-01 15:36:44.611760] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:05.441 [2024-10-01 15:36:44.646081] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:05.441 [2024-10-01 15:36:44.646120] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:05.441 [2024-10-01 15:36:44.646126] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:05.441 [2024-10-01 15:36:44.646131] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:05.441 [2024-10-01 15:36:44.646135] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:05.441 [2024-10-01 15:36:44.646276] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:05.441 [2024-10-01 15:36:44.646437] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:19:05.441 [2024-10-01 15:36:44.646594] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:05.441 [2024-10-01 15:36:44.646596] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:19:06.011 15:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:06.011 15:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:19:06.011 15:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:19:06.949 15:36:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:19:07.208 15:36:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:19:07.208 15:36:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:19:07.208 15:36:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:07.208 15:36:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:19:07.208 15:36:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:07.468 Malloc1 00:19:07.468 15:36:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:19:07.468 15:36:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:19:07.728 15:36:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:19:07.987 15:36:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:07.987 15:36:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:19:07.987 15:36:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:07.987 Malloc2 00:19:07.987 15:36:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:19:08.247 15:36:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:19:08.507 15:36:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:19:08.770 15:36:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:19:08.770 15:36:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:19:08.770 15:36:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:08.770 15:36:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:19:08.770 15:36:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:19:08.770 15:36:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:19:08.770 [2024-10-01 15:36:48.010353] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:19:08.770 [2024-10-01 15:36:48.010395] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3115975 ] 00:19:08.770 [2024-10-01 15:36:48.021517] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:19:08.770 [2024-10-01 15:36:48.037298] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:19:08.770 [2024-10-01 15:36:48.050119] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:08.770 [2024-10-01 15:36:48.050140] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f80937ee000 00:19:08.770 [2024-10-01 15:36:48.051116] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:08.770 [2024-10-01 15:36:48.052117] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:08.770 [2024-10-01 15:36:48.053119] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:08.770 [2024-10-01 15:36:48.054124] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:08.770 [2024-10-01 15:36:48.055128] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:08.770 [2024-10-01 15:36:48.056131] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:08.770 [2024-10-01 15:36:48.057146] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:08.770 [2024-10-01 15:36:48.058148] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:08.770 [2024-10-01 15:36:48.059156] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 
00:19:08.770 [2024-10-01 15:36:48.059164] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f80924f7000 00:19:08.770 [2024-10-01 15:36:48.060078] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:08.770 [2024-10-01 15:36:48.069535] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:19:08.770 [2024-10-01 15:36:48.069560] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:19:08.770 [2024-10-01 15:36:48.074244] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:19:08.770 [2024-10-01 15:36:48.074278] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:19:08.770 [2024-10-01 15:36:48.074342] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:19:08.770 [2024-10-01 15:36:48.074358] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:19:08.770 [2024-10-01 15:36:48.074362] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:19:08.770 [2024-10-01 15:36:48.075249] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:19:08.770 [2024-10-01 15:36:48.075256] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:19:08.770 [2024-10-01 15:36:48.075260] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:19:08.770 [2024-10-01 15:36:48.076252] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:19:08.770 [2024-10-01 15:36:48.076258] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:19:08.770 [2024-10-01 15:36:48.076264] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:19:08.770 [2024-10-01 15:36:48.077257] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:19:08.770 [2024-10-01 15:36:48.077263] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:08.770 [2024-10-01 15:36:48.078266] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:19:08.770 [2024-10-01 15:36:48.078272] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:19:08.770 [2024-10-01 15:36:48.078276] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:19:08.770 [2024-10-01 15:36:48.078280] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:08.770 [2024-10-01 15:36:48.078385] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:19:08.770 [2024-10-01 15:36:48.078388] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:08.770 [2024-10-01 15:36:48.078394] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:19:08.770 [2024-10-01 15:36:48.079270] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:19:08.770 [2024-10-01 15:36:48.080273] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:19:08.770 [2024-10-01 15:36:48.081280] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:19:08.770 [2024-10-01 15:36:48.082282] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:08.770 [2024-10-01 15:36:48.082334] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:08.770 [2024-10-01 15:36:48.083290] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:19:08.770 [2024-10-01 15:36:48.083295] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:08.770 [2024-10-01 15:36:48.083299] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:19:08.770 [2024-10-01 15:36:48.083313] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:19:08.770 [2024-10-01 15:36:48.083319] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:19:08.770 [2024-10-01 15:36:48.083330] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:08.770 [2024-10-01 15:36:48.083334] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:08.770 [2024-10-01 15:36:48.083337] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:08.771 [2024-10-01 15:36:48.083348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:08.771 [2024-10-01 15:36:48.083381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:19:08.771 [2024-10-01 15:36:48.083389] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:19:08.771 [2024-10-01 15:36:48.083392] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:19:08.771 [2024-10-01 15:36:48.083396] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:19:08.771 [2024-10-01 15:36:48.083399] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:19:08.771 [2024-10-01 15:36:48.083403] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:19:08.771 [2024-10-01 15:36:48.083406] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:19:08.771 [2024-10-01 15:36:48.083409] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting 
state to configure AER (timeout 30000 ms) 00:19:08.771 [2024-10-01 15:36:48.083416] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:19:08.771 [2024-10-01 15:36:48.083423] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:19:08.771 [2024-10-01 15:36:48.083434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:19:08.771 [2024-10-01 15:36:48.083443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:19:08.771 [2024-10-01 15:36:48.083449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:19:08.771 [2024-10-01 15:36:48.083457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:19:08.771 [2024-10-01 15:36:48.083463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:19:08.771 [2024-10-01 15:36:48.083467] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:19:08.771 [2024-10-01 15:36:48.083474] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:08.771 [2024-10-01 15:36:48.083481] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:19:08.771 [2024-10-01 15:36:48.083487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 
p:1 m:0 dnr:0 00:19:08.771 [2024-10-01 15:36:48.083492] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:19:08.771 [2024-10-01 15:36:48.083496] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:19:08.771 [2024-10-01 15:36:48.083500] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:19:08.771 [2024-10-01 15:36:48.083506] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:19:08.771 [2024-10-01 15:36:48.083513] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:08.771 [2024-10-01 15:36:48.083523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:19:08.771 [2024-10-01 15:36:48.083567] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:19:08.771 [2024-10-01 15:36:48.083573] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:19:08.771 [2024-10-01 15:36:48.083578] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:19:08.771 [2024-10-01 15:36:48.083581] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:19:08.771 [2024-10-01 15:36:48.083584] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:08.771 [2024-10-01 15:36:48.083588] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:19:08.771 [2024-10-01 15:36:48.083601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:19:08.771 [2024-10-01 15:36:48.083607] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:19:08.771 [2024-10-01 15:36:48.083614] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:19:08.771 [2024-10-01 15:36:48.083619] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:19:08.771 [2024-10-01 15:36:48.083625] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:08.771 [2024-10-01 15:36:48.083628] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:08.771 [2024-10-01 15:36:48.083631] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:08.771 [2024-10-01 15:36:48.083635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:08.771 [2024-10-01 15:36:48.083651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:19:08.771 [2024-10-01 15:36:48.083660] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:08.771 [2024-10-01 15:36:48.083666] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 
30000 ms) 00:19:08.771 [2024-10-01 15:36:48.083671] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:08.771 [2024-10-01 15:36:48.083674] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:08.771 [2024-10-01 15:36:48.083676] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:08.771 [2024-10-01 15:36:48.083680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:08.771 [2024-10-01 15:36:48.083690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:19:08.771 [2024-10-01 15:36:48.083696] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:08.771 [2024-10-01 15:36:48.083700] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:19:08.771 [2024-10-01 15:36:48.083706] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:19:08.771 [2024-10-01 15:36:48.083710] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:19:08.771 [2024-10-01 15:36:48.083714] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:08.771 [2024-10-01 15:36:48.083718] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:19:08.771 [2024-10-01 15:36:48.083722] 
nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:19:08.771 [2024-10-01 15:36:48.083725] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:19:08.771 [2024-10-01 15:36:48.083729] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:19:08.771 [2024-10-01 15:36:48.083744] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:19:08.771 [2024-10-01 15:36:48.083753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:19:08.771 [2024-10-01 15:36:48.083761] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:19:08.771 [2024-10-01 15:36:48.083770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:19:08.771 [2024-10-01 15:36:48.083778] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:19:08.771 [2024-10-01 15:36:48.083787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:19:08.771 [2024-10-01 15:36:48.083796] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:08.771 [2024-10-01 15:36:48.083806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:19:08.771 [2024-10-01 15:36:48.083816] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:19:08.771 
[2024-10-01 15:36:48.083819] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:19:08.771 [2024-10-01 15:36:48.083822] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:19:08.771 [2024-10-01 15:36:48.083824] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:19:08.771 [2024-10-01 15:36:48.083826] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:19:08.771 [2024-10-01 15:36:48.083831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:19:08.771 [2024-10-01 15:36:48.083836] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:19:08.771 [2024-10-01 15:36:48.083839] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:19:08.771 [2024-10-01 15:36:48.083842] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:08.771 [2024-10-01 15:36:48.083846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:19:08.771 [2024-10-01 15:36:48.083851] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:19:08.771 [2024-10-01 15:36:48.083854] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:08.771 [2024-10-01 15:36:48.083856] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:08.771 [2024-10-01 15:36:48.083860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:08.771 [2024-10-01 15:36:48.083866] 
nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:19:08.771 [2024-10-01 15:36:48.083869] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:19:08.771 [2024-10-01 15:36:48.083871] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:08.771 [2024-10-01 15:36:48.083875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:19:08.772 [2024-10-01 15:36:48.083880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:19:08.772 [2024-10-01 15:36:48.083888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:19:08.772 [2024-10-01 15:36:48.083900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:19:08.772 [2024-10-01 15:36:48.083905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:19:08.772 ===================================================== 00:19:08.772 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:08.772 ===================================================== 00:19:08.772 Controller Capabilities/Features 00:19:08.772 ================================ 00:19:08.772 Vendor ID: 4e58 00:19:08.772 Subsystem Vendor ID: 4e58 00:19:08.772 Serial Number: SPDK1 00:19:08.772 Model Number: SPDK bdev Controller 00:19:08.772 Firmware Version: 25.01 00:19:08.772 Recommended Arb Burst: 6 00:19:08.772 IEEE OUI Identifier: 8d 6b 50 00:19:08.772 Multi-path I/O 00:19:08.772 May have multiple subsystem ports: Yes 00:19:08.772 May have multiple controllers: Yes 00:19:08.772 Associated with SR-IOV VF: No 
00:19:08.772 Max Data Transfer Size: 131072 00:19:08.772 Max Number of Namespaces: 32 00:19:08.772 Max Number of I/O Queues: 127 00:19:08.772 NVMe Specification Version (VS): 1.3 00:19:08.772 NVMe Specification Version (Identify): 1.3 00:19:08.772 Maximum Queue Entries: 256 00:19:08.772 Contiguous Queues Required: Yes 00:19:08.772 Arbitration Mechanisms Supported 00:19:08.772 Weighted Round Robin: Not Supported 00:19:08.772 Vendor Specific: Not Supported 00:19:08.772 Reset Timeout: 15000 ms 00:19:08.772 Doorbell Stride: 4 bytes 00:19:08.772 NVM Subsystem Reset: Not Supported 00:19:08.772 Command Sets Supported 00:19:08.772 NVM Command Set: Supported 00:19:08.772 Boot Partition: Not Supported 00:19:08.772 Memory Page Size Minimum: 4096 bytes 00:19:08.772 Memory Page Size Maximum: 4096 bytes 00:19:08.772 Persistent Memory Region: Not Supported 00:19:08.772 Optional Asynchronous Events Supported 00:19:08.772 Namespace Attribute Notices: Supported 00:19:08.772 Firmware Activation Notices: Not Supported 00:19:08.772 ANA Change Notices: Not Supported 00:19:08.772 PLE Aggregate Log Change Notices: Not Supported 00:19:08.772 LBA Status Info Alert Notices: Not Supported 00:19:08.772 EGE Aggregate Log Change Notices: Not Supported 00:19:08.772 Normal NVM Subsystem Shutdown event: Not Supported 00:19:08.772 Zone Descriptor Change Notices: Not Supported 00:19:08.772 Discovery Log Change Notices: Not Supported 00:19:08.772 Controller Attributes 00:19:08.772 128-bit Host Identifier: Supported 00:19:08.772 Non-Operational Permissive Mode: Not Supported 00:19:08.772 NVM Sets: Not Supported 00:19:08.772 Read Recovery Levels: Not Supported 00:19:08.772 Endurance Groups: Not Supported 00:19:08.772 Predictable Latency Mode: Not Supported 00:19:08.772 Traffic Based Keep ALive: Not Supported 00:19:08.772 Namespace Granularity: Not Supported 00:19:08.772 SQ Associations: Not Supported 00:19:08.772 UUID List: Not Supported 00:19:08.772 Multi-Domain Subsystem: Not Supported 00:19:08.772 
Fixed Capacity Management: Not Supported 00:19:08.772 Variable Capacity Management: Not Supported 00:19:08.772 Delete Endurance Group: Not Supported 00:19:08.772 Delete NVM Set: Not Supported 00:19:08.772 Extended LBA Formats Supported: Not Supported 00:19:08.772 Flexible Data Placement Supported: Not Supported 00:19:08.772 00:19:08.772 Controller Memory Buffer Support 00:19:08.772 ================================ 00:19:08.772 Supported: No 00:19:08.772 00:19:08.772 Persistent Memory Region Support 00:19:08.772 ================================ 00:19:08.772 Supported: No 00:19:08.772 00:19:08.772 Admin Command Set Attributes 00:19:08.772 ============================ 00:19:08.772 Security Send/Receive: Not Supported 00:19:08.772 Format NVM: Not Supported 00:19:08.772 Firmware Activate/Download: Not Supported 00:19:08.772 Namespace Management: Not Supported 00:19:08.772 Device Self-Test: Not Supported 00:19:08.772 Directives: Not Supported 00:19:08.772 NVMe-MI: Not Supported 00:19:08.772 Virtualization Management: Not Supported 00:19:08.772 Doorbell Buffer Config: Not Supported 00:19:08.772 Get LBA Status Capability: Not Supported 00:19:08.772 Command & Feature Lockdown Capability: Not Supported 00:19:08.772 Abort Command Limit: 4 00:19:08.772 Async Event Request Limit: 4 00:19:08.772 Number of Firmware Slots: N/A 00:19:08.772 Firmware Slot 1 Read-Only: N/A 00:19:08.772 Firmware Activation Without Reset: N/A 00:19:08.772 Multiple Update Detection Support: N/A 00:19:08.772 Firmware Update Granularity: No Information Provided 00:19:08.772 Per-Namespace SMART Log: No 00:19:08.772 Asymmetric Namespace Access Log Page: Not Supported 00:19:08.772 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:19:08.772 Command Effects Log Page: Supported 00:19:08.772 Get Log Page Extended Data: Supported 00:19:08.772 Telemetry Log Pages: Not Supported 00:19:08.772 Persistent Event Log Pages: Not Supported 00:19:08.772 Supported Log Pages Log Page: May Support 00:19:08.772 Commands Supported & 
Effects Log Page: Not Supported 00:19:08.772 Feature Identifiers & Effects Log Page:May Support 00:19:08.772 NVMe-MI Commands & Effects Log Page: May Support 00:19:08.772 Data Area 4 for Telemetry Log: Not Supported 00:19:08.772 Error Log Page Entries Supported: 128 00:19:08.772 Keep Alive: Supported 00:19:08.772 Keep Alive Granularity: 10000 ms 00:19:08.772 00:19:08.772 NVM Command Set Attributes 00:19:08.772 ========================== 00:19:08.772 Submission Queue Entry Size 00:19:08.772 Max: 64 00:19:08.772 Min: 64 00:19:08.772 Completion Queue Entry Size 00:19:08.772 Max: 16 00:19:08.772 Min: 16 00:19:08.772 Number of Namespaces: 32 00:19:08.772 Compare Command: Supported 00:19:08.772 Write Uncorrectable Command: Not Supported 00:19:08.772 Dataset Management Command: Supported 00:19:08.772 Write Zeroes Command: Supported 00:19:08.772 Set Features Save Field: Not Supported 00:19:08.772 Reservations: Not Supported 00:19:08.772 Timestamp: Not Supported 00:19:08.772 Copy: Supported 00:19:08.772 Volatile Write Cache: Present 00:19:08.772 Atomic Write Unit (Normal): 1 00:19:08.772 Atomic Write Unit (PFail): 1 00:19:08.772 Atomic Compare & Write Unit: 1 00:19:08.772 Fused Compare & Write: Supported 00:19:08.772 Scatter-Gather List 00:19:08.772 SGL Command Set: Supported (Dword aligned) 00:19:08.772 SGL Keyed: Not Supported 00:19:08.772 SGL Bit Bucket Descriptor: Not Supported 00:19:08.772 SGL Metadata Pointer: Not Supported 00:19:08.772 Oversized SGL: Not Supported 00:19:08.772 SGL Metadata Address: Not Supported 00:19:08.772 SGL Offset: Not Supported 00:19:08.772 Transport SGL Data Block: Not Supported 00:19:08.772 Replay Protected Memory Block: Not Supported 00:19:08.772 00:19:08.772 Firmware Slot Information 00:19:08.772 ========================= 00:19:08.772 Active slot: 1 00:19:08.772 Slot 1 Firmware Revision: 25.01 00:19:08.772 00:19:08.772 00:19:08.772 Commands Supported and Effects 00:19:08.772 ============================== 00:19:08.772 Admin Commands 
00:19:08.772 -------------- 00:19:08.772 Get Log Page (02h): Supported 00:19:08.772 Identify (06h): Supported 00:19:08.772 Abort (08h): Supported 00:19:08.772 Set Features (09h): Supported 00:19:08.772 Get Features (0Ah): Supported 00:19:08.772 Asynchronous Event Request (0Ch): Supported 00:19:08.772 Keep Alive (18h): Supported 00:19:08.772 I/O Commands 00:19:08.772 ------------ 00:19:08.772 Flush (00h): Supported LBA-Change 00:19:08.772 Write (01h): Supported LBA-Change 00:19:08.772 Read (02h): Supported 00:19:08.772 Compare (05h): Supported 00:19:08.772 Write Zeroes (08h): Supported LBA-Change 00:19:08.772 Dataset Management (09h): Supported LBA-Change 00:19:08.772 Copy (19h): Supported LBA-Change 00:19:08.772 00:19:08.772 Error Log 00:19:08.772 ========= 00:19:08.772 00:19:08.772 Arbitration 00:19:08.772 =========== 00:19:08.772 Arbitration Burst: 1 00:19:08.772 00:19:08.772 Power Management 00:19:08.772 ================ 00:19:08.772 Number of Power States: 1 00:19:08.772 Current Power State: Power State #0 00:19:08.772 Power State #0: 00:19:08.772 Max Power: 0.00 W 00:19:08.772 Non-Operational State: Operational 00:19:08.772 Entry Latency: Not Reported 00:19:08.772 Exit Latency: Not Reported 00:19:08.772 Relative Read Throughput: 0 00:19:08.772 Relative Read Latency: 0 00:19:08.772 Relative Write Throughput: 0 00:19:08.772 Relative Write Latency: 0 00:19:08.772 Idle Power: Not Reported 00:19:08.772 Active Power: Not Reported 00:19:08.772 Non-Operational Permissive Mode: Not Supported 00:19:08.772 00:19:08.772 Health Information 00:19:08.772 ================== 00:19:08.772 Critical Warnings: 00:19:08.772 Available Spare Space: OK 00:19:08.772 Temperature: OK 00:19:08.772 Device Reliability: OK 00:19:08.772 Read Only: No 00:19:08.772 Volatile Memory Backup: OK 00:19:08.773 Current Temperature: 0 Kelvin (-273 Celsius) 00:19:08.773 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:19:08.773 Available Spare: 0% 00:19:08.773 Available Sp[2024-10-01 15:36:48.083976] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:19:08.773 [2024-10-01 15:36:48.083988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:19:08.773 [2024-10-01 15:36:48.084007] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:19:08.773 [2024-10-01 15:36:48.084014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.773 [2024-10-01 15:36:48.084020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.773 [2024-10-01 15:36:48.084025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.773 [2024-10-01 15:36:48.084029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.773 [2024-10-01 15:36:48.087900] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:19:08.773 [2024-10-01 15:36:48.087908] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:19:08.773 [2024-10-01 15:36:48.088319] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:08.773 [2024-10-01 15:36:48.088357] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:19:08.773 [2024-10-01 15:36:48.088361] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:19:08.773 [2024-10-01 
15:36:48.089323] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:19:08.773 [2024-10-01 15:36:48.089330] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:19:08.773 [2024-10-01 15:36:48.089385] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:19:08.773 [2024-10-01 15:36:48.090346] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:08.773 are Threshold: 0% 00:19:08.773 Life Percentage Used: 0% 00:19:08.773 Data Units Read: 0 00:19:08.773 Data Units Written: 0 00:19:08.773 Host Read Commands: 0 00:19:08.773 Host Write Commands: 0 00:19:08.773 Controller Busy Time: 0 minutes 00:19:08.773 Power Cycles: 0 00:19:08.773 Power On Hours: 0 hours 00:19:08.773 Unsafe Shutdowns: 0 00:19:08.773 Unrecoverable Media Errors: 0 00:19:08.773 Lifetime Error Log Entries: 0 00:19:08.773 Warning Temperature Time: 0 minutes 00:19:08.773 Critical Temperature Time: 0 minutes 00:19:08.773 00:19:08.773 Number of Queues 00:19:08.773 ================ 00:19:08.773 Number of I/O Submission Queues: 127 00:19:08.773 Number of I/O Completion Queues: 127 00:19:08.773 00:19:08.773 Active Namespaces 00:19:08.773 ================= 00:19:08.773 Namespace ID:1 00:19:08.773 Error Recovery Timeout: Unlimited 00:19:08.773 Command Set Identifier: NVM (00h) 00:19:08.773 Deallocate: Supported 00:19:08.773 Deallocated/Unwritten Error: Not Supported 00:19:08.773 Deallocated Read Value: Unknown 00:19:08.773 Deallocate in Write Zeroes: Not Supported 00:19:08.773 Deallocated Guard Field: 0xFFFF 00:19:08.773 Flush: Supported 00:19:08.773 Reservation: Supported 00:19:08.773 Namespace Sharing Capabilities: Multiple Controllers 00:19:08.773 Size (in LBAs): 131072 (0GiB) 00:19:08.773 Capacity (in LBAs): 
131072 (0GiB) 00:19:08.773 Utilization (in LBAs): 131072 (0GiB) 00:19:08.773 NGUID: A835700349D14B498478BF81F44A1EB3 00:19:08.773 UUID: a8357003-49d1-4b49-8478-bf81f44a1eb3 00:19:08.773 Thin Provisioning: Not Supported 00:19:08.773 Per-NS Atomic Units: Yes 00:19:08.773 Atomic Boundary Size (Normal): 0 00:19:08.773 Atomic Boundary Size (PFail): 0 00:19:08.773 Atomic Boundary Offset: 0 00:19:08.773 Maximum Single Source Range Length: 65535 00:19:08.773 Maximum Copy Length: 65535 00:19:08.773 Maximum Source Range Count: 1 00:19:08.773 NGUID/EUI64 Never Reused: No 00:19:08.773 Namespace Write Protected: No 00:19:08.773 Number of LBA Formats: 1 00:19:08.773 Current LBA Format: LBA Format #00 00:19:08.773 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:08.773 00:19:08.773 15:36:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:19:09.033 [2024-10-01 15:36:48.265096] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:14.318 Initializing NVMe Controllers 00:19:14.318 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:14.318 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:19:14.318 Initialization complete. Launching workers. 
00:19:14.318 ======================================================== 00:19:14.319 Latency(us) 00:19:14.319 Device Information : IOPS MiB/s Average min max 00:19:14.319 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39977.26 156.16 3201.69 836.95 7792.29 00:19:14.319 ======================================================== 00:19:14.319 Total : 39977.26 156.16 3201.69 836.95 7792.29 00:19:14.319 00:19:14.319 [2024-10-01 15:36:53.284032] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:14.319 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:19:14.319 [2024-10-01 15:36:53.466879] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:19.607 Initializing NVMe Controllers 00:19:19.607 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:19.607 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:19:19.607 Initialization complete. Launching workers. 
00:19:19.607 ======================================================== 00:19:19.607 Latency(us) 00:19:19.607 Device Information : IOPS MiB/s Average min max 00:19:19.607 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16030.68 62.62 7990.25 5730.45 14965.85 00:19:19.607 ======================================================== 00:19:19.607 Total : 16030.68 62.62 7990.25 5730.45 14965.85 00:19:19.607 00:19:19.607 [2024-10-01 15:36:58.506738] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:19.607 15:36:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:19:19.607 [2024-10-01 15:36:58.692564] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:24.905 [2024-10-01 15:37:03.770098] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:24.905 Initializing NVMe Controllers 00:19:24.905 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:24.905 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:24.905 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:19:24.905 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:19:24.905 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:19:24.905 Initialization complete. Launching workers. 
00:19:24.905 Starting thread on core 2 00:19:24.905 Starting thread on core 3 00:19:24.905 Starting thread on core 1 00:19:24.905 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:19:24.905 [2024-10-01 15:37:04.011261] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:28.203 [2024-10-01 15:37:07.075291] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:28.204 Initializing NVMe Controllers 00:19:28.204 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:28.204 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:28.204 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:19:28.204 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:19:28.204 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:19:28.204 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:19:28.204 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:19:28.204 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:19:28.204 Initialization complete. Launching workers. 
00:19:28.204 Starting thread on core 1 with urgent priority queue 00:19:28.204 Starting thread on core 2 with urgent priority queue 00:19:28.204 Starting thread on core 3 with urgent priority queue 00:19:28.204 Starting thread on core 0 with urgent priority queue 00:19:28.204 SPDK bdev Controller (SPDK1 ) core 0: 13276.33 IO/s 7.53 secs/100000 ios 00:19:28.204 SPDK bdev Controller (SPDK1 ) core 1: 9377.33 IO/s 10.66 secs/100000 ios 00:19:28.204 SPDK bdev Controller (SPDK1 ) core 2: 11371.33 IO/s 8.79 secs/100000 ios 00:19:28.204 SPDK bdev Controller (SPDK1 ) core 3: 9557.67 IO/s 10.46 secs/100000 ios 00:19:28.204 ======================================================== 00:19:28.204 00:19:28.204 15:37:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:19:28.204 [2024-10-01 15:37:07.300586] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:28.204 Initializing NVMe Controllers 00:19:28.204 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:28.204 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:28.204 Namespace ID: 1 size: 0GB 00:19:28.204 Initialization complete. 00:19:28.204 INFO: using host memory buffer for IO 00:19:28.204 Hello world! 
00:19:28.204 [2024-10-01 15:37:07.333753] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:28.204 15:37:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:19:28.204 [2024-10-01 15:37:07.558354] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:29.145 Initializing NVMe Controllers 00:19:29.145 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:29.145 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:29.145 Initialization complete. Launching workers. 00:19:29.145 submit (in ns) avg, min, max = 6288.1, 2845.0, 3998416.7 00:19:29.145 complete (in ns) avg, min, max = 15749.4, 1640.0, 3998895.8 00:19:29.145 00:19:29.145 Submit histogram 00:19:29.145 ================ 00:19:29.145 Range in us Cumulative Count 00:19:29.145 2.840 - 2.853: 0.7115% ( 146) 00:19:29.145 2.853 - 2.867: 3.5086% ( 574) 00:19:29.145 2.867 - 2.880: 7.0903% ( 735) 00:19:29.145 2.880 - 2.893: 11.9682% ( 1001) 00:19:29.145 2.893 - 2.907: 17.3773% ( 1110) 00:19:29.145 2.907 - 2.920: 23.5028% ( 1257) 00:19:29.145 2.920 - 2.933: 29.8329% ( 1299) 00:19:29.145 2.933 - 2.947: 35.7585% ( 1216) 00:19:29.145 2.947 - 2.960: 41.2114% ( 1119) 00:19:29.145 2.960 - 2.973: 47.4002% ( 1270) 00:19:29.145 2.973 - 2.987: 55.0997% ( 1580) 00:19:29.145 2.987 - 3.000: 64.6947% ( 1969) 00:19:29.145 3.000 - 3.013: 75.3228% ( 2181) 00:19:29.145 3.013 - 3.027: 83.1636% ( 1609) 00:19:29.145 3.027 - 3.040: 88.7774% ( 1152) 00:19:29.145 3.040 - 3.053: 93.1826% ( 904) 00:19:29.145 3.053 - 3.067: 96.0821% ( 595) 00:19:29.145 3.067 - 3.080: 97.4075% ( 272) 00:19:29.145 3.080 - 3.093: 98.5088% ( 226) 00:19:29.145 3.093 - 3.107: 99.1570% ( 133) 00:19:29.145 3.107 - 3.120: 
99.4250% ( 55) 00:19:29.145 3.120 - 3.133: 99.5029% ( 16) 00:19:29.145 3.133 - 3.147: 99.5614% ( 12) 00:19:29.145 3.147 - 3.160: 99.5760% ( 3) 00:19:29.145 3.160 - 3.173: 99.5858% ( 2) 00:19:29.145 3.173 - 3.187: 99.5907% ( 1) 00:19:29.145 3.307 - 3.320: 99.5955% ( 1) 00:19:29.145 3.440 - 3.467: 99.6102% ( 3) 00:19:29.145 3.493 - 3.520: 99.6150% ( 1) 00:19:29.145 3.573 - 3.600: 99.6199% ( 1) 00:19:29.145 3.600 - 3.627: 99.6248% ( 1) 00:19:29.145 3.627 - 3.653: 99.6296% ( 1) 00:19:29.145 3.680 - 3.707: 99.6345% ( 1) 00:19:29.145 3.707 - 3.733: 99.6443% ( 2) 00:19:29.145 3.733 - 3.760: 99.6540% ( 2) 00:19:29.145 3.813 - 3.840: 99.6589% ( 1) 00:19:29.145 3.947 - 3.973: 99.6638% ( 1) 00:19:29.145 4.027 - 4.053: 99.6686% ( 1) 00:19:29.145 4.160 - 4.187: 99.6735% ( 1) 00:19:29.145 4.693 - 4.720: 99.6784% ( 1) 00:19:29.145 4.747 - 4.773: 99.6833% ( 1) 00:19:29.145 4.987 - 5.013: 99.6881% ( 1) 00:19:29.145 5.013 - 5.040: 99.6930% ( 1) 00:19:29.145 5.200 - 5.227: 99.6979% ( 1) 00:19:29.145 5.520 - 5.547: 99.7076% ( 2) 00:19:29.145 5.627 - 5.653: 99.7125% ( 1) 00:19:29.145 5.707 - 5.733: 99.7174% ( 1) 00:19:29.145 5.760 - 5.787: 99.7222% ( 1) 00:19:29.145 5.840 - 5.867: 99.7320% ( 2) 00:19:29.145 5.893 - 5.920: 99.7369% ( 1) 00:19:29.145 6.000 - 6.027: 99.7515% ( 3) 00:19:29.145 6.053 - 6.080: 99.7563% ( 1) 00:19:29.145 6.133 - 6.160: 99.7612% ( 1) 00:19:29.145 6.213 - 6.240: 99.7661% ( 1) 00:19:29.145 6.240 - 6.267: 99.7710% ( 1) 00:19:29.145 6.267 - 6.293: 99.7905% ( 4) 00:19:29.145 6.347 - 6.373: 99.8002% ( 2) 00:19:29.145 6.373 - 6.400: 99.8100% ( 2) 00:19:29.145 6.400 - 6.427: 99.8197% ( 2) 00:19:29.145 6.427 - 6.453: 99.8246% ( 1) 00:19:29.145 6.453 - 6.480: 99.8294% ( 1) 00:19:29.145 6.480 - 6.507: 99.8343% ( 1) 00:19:29.145 6.507 - 6.533: 99.8392% ( 1) 00:19:29.145 6.533 - 6.560: 99.8441% ( 1) 00:19:29.145 6.613 - 6.640: 99.8489% ( 1) 00:19:29.145 6.720 - 6.747: 99.8538% ( 1) 00:19:29.145 6.747 - 6.773: 99.8587% ( 1) 00:19:29.145 6.773 - 6.800: 99.8636% ( 1) 
00:19:29.145 6.827 - 6.880: 99.8684% ( 1) 00:19:29.145 6.880 - 6.933: 99.8782% ( 2) 00:19:29.145 7.040 - 7.093: 99.8879% ( 2) 00:19:29.145 7.253 - 7.307: 99.8928% ( 1) 00:19:29.145 7.307 - 7.360: 99.9025% ( 2) 00:19:29.145 7.573 - 7.627: 99.9074% ( 1) 00:19:29.145 7.840 - 7.893: 99.9123% ( 1) 00:19:29.145 7.893 - 7.947: 99.9172% ( 1) 00:19:29.145 3986.773 - 4014.080: 100.0000% ( 17) 00:19:29.145 00:19:29.145 Complete histogram 00:19:29.145 ================== 00:19:29.145 Range in us Cumulative Count 00:19:29.145 1.640 - [2024-10-01 15:37:08.578974] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:29.406 1.647: 0.0049% ( 1) 00:19:29.406 1.647 - 1.653: 0.5848% ( 119) 00:19:29.406 1.653 - 1.660: 1.2767% ( 142) 00:19:29.406 1.660 - 1.667: 1.3401% ( 13) 00:19:29.406 1.667 - 1.673: 1.4668% ( 26) 00:19:29.406 1.673 - 1.680: 1.5155% ( 10) 00:19:29.406 1.680 - 1.687: 1.5691% ( 11) 00:19:29.406 1.687 - 1.693: 1.5935% ( 5) 00:19:29.406 1.693 - 1.700: 2.7240% ( 232) 00:19:29.406 1.700 - 1.707: 29.6672% ( 5529) 00:19:29.406 1.707 - 1.720: 58.4669% ( 5910) 00:19:29.406 1.720 - 1.733: 74.7186% ( 3335) 00:19:29.406 1.733 - 1.747: 82.5496% ( 1607) 00:19:29.406 1.747 - 1.760: 84.0846% ( 315) 00:19:29.406 1.760 - 1.773: 87.5835% ( 718) 00:19:29.406 1.773 - 1.787: 92.8317% ( 1077) 00:19:29.406 1.787 - 1.800: 96.8520% ( 825) 00:19:29.406 1.800 - 1.813: 98.6794% ( 375) 00:19:29.406 1.813 - 1.827: 99.2203% ( 111) 00:19:29.406 1.827 - 1.840: 99.3665% ( 30) 00:19:29.406 1.840 - 1.853: 99.3957% ( 6) 00:19:29.406 1.867 - 1.880: 99.4006% ( 1) 00:19:29.406 1.893 - 1.907: 99.4055% ( 1) 00:19:29.406 1.947 - 1.960: 99.4104% ( 1) 00:19:29.406 1.960 - 1.973: 99.4152% ( 1) 00:19:29.406 1.987 - 2.000: 99.4250% ( 2) 00:19:29.406 2.000 - 2.013: 99.4299% ( 1) 00:19:29.406 2.013 - 2.027: 99.4347% ( 1) 00:19:29.406 2.040 - 2.053: 99.4396% ( 1) 00:19:29.406 2.067 - 2.080: 99.4445% ( 1) 00:19:29.406 2.120 - 2.133: 99.4493% ( 1) 00:19:29.406 2.200 - 
2.213: 99.4542% ( 1) 00:19:29.406 2.240 - 2.253: 99.4591% ( 1) 00:19:29.406 4.187 - 4.213: 99.4640% ( 1) 00:19:29.406 4.427 - 4.453: 99.4688% ( 1) 00:19:29.406 4.533 - 4.560: 99.4786% ( 2) 00:19:29.406 4.560 - 4.587: 99.4835% ( 1) 00:19:29.406 4.613 - 4.640: 99.4883% ( 1) 00:19:29.406 4.640 - 4.667: 99.4932% ( 1) 00:19:29.406 4.720 - 4.747: 99.4981% ( 1) 00:19:29.406 4.747 - 4.773: 99.5029% ( 1) 00:19:29.406 4.800 - 4.827: 99.5078% ( 1) 00:19:29.406 4.880 - 4.907: 99.5176% ( 2) 00:19:29.406 4.933 - 4.960: 99.5224% ( 1) 00:19:29.406 4.960 - 4.987: 99.5273% ( 1) 00:19:29.406 4.987 - 5.013: 99.5322% ( 1) 00:19:29.406 5.013 - 5.040: 99.5371% ( 1) 00:19:29.406 5.147 - 5.173: 99.5419% ( 1) 00:19:29.406 5.173 - 5.200: 99.5468% ( 1) 00:19:29.406 5.200 - 5.227: 99.5517% ( 1) 00:19:29.406 5.253 - 5.280: 99.5663% ( 3) 00:19:29.406 5.307 - 5.333: 99.5712% ( 1) 00:19:29.406 5.333 - 5.360: 99.5809% ( 2) 00:19:29.406 5.360 - 5.387: 99.5858% ( 1) 00:19:29.406 5.387 - 5.413: 99.5907% ( 1) 00:19:29.406 5.573 - 5.600: 99.5955% ( 1) 00:19:29.406 5.600 - 5.627: 99.6004% ( 1) 00:19:29.406 5.627 - 5.653: 99.6053% ( 1) 00:19:29.406 5.733 - 5.760: 99.6102% ( 1) 00:19:29.406 5.840 - 5.867: 99.6150% ( 1) 00:19:29.406 6.453 - 6.480: 99.6199% ( 1) 00:19:29.406 6.693 - 6.720: 99.6248% ( 1) 00:19:29.406 8.907 - 8.960: 99.6296% ( 1) 00:19:29.406 9.387 - 9.440: 99.6345% ( 1) 00:19:29.406 11.840 - 11.893: 99.6394% ( 1) 00:19:29.406 12.107 - 12.160: 99.6443% ( 1) 00:19:29.407 12.480 - 12.533: 99.6491% ( 1) 00:19:29.407 3986.773 - 4014.080: 100.0000% ( 72) 00:19:29.407 00:19:29.407 15:37:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:19:29.407 15:37:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:19:29.407 15:37:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local 
subnqn=nqn.2019-07.io.spdk:cnode1 00:19:29.407 15:37:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:19:29.407 15:37:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:29.407 [ 00:19:29.407 { 00:19:29.407 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:29.407 "subtype": "Discovery", 00:19:29.407 "listen_addresses": [], 00:19:29.407 "allow_any_host": true, 00:19:29.407 "hosts": [] 00:19:29.407 }, 00:19:29.407 { 00:19:29.407 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:29.407 "subtype": "NVMe", 00:19:29.407 "listen_addresses": [ 00:19:29.407 { 00:19:29.407 "trtype": "VFIOUSER", 00:19:29.407 "adrfam": "IPv4", 00:19:29.407 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:29.407 "trsvcid": "0" 00:19:29.407 } 00:19:29.407 ], 00:19:29.407 "allow_any_host": true, 00:19:29.407 "hosts": [], 00:19:29.407 "serial_number": "SPDK1", 00:19:29.407 "model_number": "SPDK bdev Controller", 00:19:29.407 "max_namespaces": 32, 00:19:29.407 "min_cntlid": 1, 00:19:29.407 "max_cntlid": 65519, 00:19:29.407 "namespaces": [ 00:19:29.407 { 00:19:29.407 "nsid": 1, 00:19:29.407 "bdev_name": "Malloc1", 00:19:29.407 "name": "Malloc1", 00:19:29.407 "nguid": "A835700349D14B498478BF81F44A1EB3", 00:19:29.407 "uuid": "a8357003-49d1-4b49-8478-bf81f44a1eb3" 00:19:29.407 } 00:19:29.407 ] 00:19:29.407 }, 00:19:29.407 { 00:19:29.407 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:29.407 "subtype": "NVMe", 00:19:29.407 "listen_addresses": [ 00:19:29.407 { 00:19:29.407 "trtype": "VFIOUSER", 00:19:29.407 "adrfam": "IPv4", 00:19:29.407 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:29.407 "trsvcid": "0" 00:19:29.407 } 00:19:29.407 ], 00:19:29.407 "allow_any_host": true, 00:19:29.407 "hosts": [], 00:19:29.407 "serial_number": "SPDK2", 00:19:29.407 "model_number": "SPDK bdev Controller", 00:19:29.407 
"max_namespaces": 32, 00:19:29.407 "min_cntlid": 1, 00:19:29.407 "max_cntlid": 65519, 00:19:29.407 "namespaces": [ 00:19:29.407 { 00:19:29.407 "nsid": 1, 00:19:29.407 "bdev_name": "Malloc2", 00:19:29.407 "name": "Malloc2", 00:19:29.407 "nguid": "DA7C86E21285448C8B9DB8A6D0C00F79", 00:19:29.407 "uuid": "da7c86e2-1285-448c-8b9d-b8a6d0c00f79" 00:19:29.407 } 00:19:29.407 ] 00:19:29.407 } 00:19:29.407 ] 00:19:29.407 15:37:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:29.407 15:37:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:19:29.407 15:37:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3119996 00:19:29.407 15:37:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:19:29.407 15:37:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:19:29.407 15:37:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:29.407 15:37:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:19:29.407 15:37:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:19:29.407 15:37:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:19:29.407 15:37:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:19:29.667 [2024-10-01 15:37:08.933246] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:29.667 Malloc3 00:19:29.667 15:37:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:19:29.928 [2024-10-01 15:37:09.151758] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:29.928 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:29.928 Asynchronous Event Request test 00:19:29.928 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:29.928 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:29.928 Registering asynchronous event callbacks... 00:19:29.928 Starting namespace attribute notice tests for all controllers... 00:19:29.928 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:29.928 aer_cb - Changed Namespace 00:19:29.928 Cleaning up... 
00:19:29.928 [ 00:19:29.928 { 00:19:29.928 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:29.928 "subtype": "Discovery", 00:19:29.928 "listen_addresses": [], 00:19:29.928 "allow_any_host": true, 00:19:29.928 "hosts": [] 00:19:29.928 }, 00:19:29.928 { 00:19:29.928 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:29.928 "subtype": "NVMe", 00:19:29.928 "listen_addresses": [ 00:19:29.928 { 00:19:29.928 "trtype": "VFIOUSER", 00:19:29.928 "adrfam": "IPv4", 00:19:29.928 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:29.928 "trsvcid": "0" 00:19:29.928 } 00:19:29.928 ], 00:19:29.928 "allow_any_host": true, 00:19:29.928 "hosts": [], 00:19:29.928 "serial_number": "SPDK1", 00:19:29.928 "model_number": "SPDK bdev Controller", 00:19:29.928 "max_namespaces": 32, 00:19:29.928 "min_cntlid": 1, 00:19:29.928 "max_cntlid": 65519, 00:19:29.928 "namespaces": [ 00:19:29.928 { 00:19:29.928 "nsid": 1, 00:19:29.928 "bdev_name": "Malloc1", 00:19:29.928 "name": "Malloc1", 00:19:29.928 "nguid": "A835700349D14B498478BF81F44A1EB3", 00:19:29.928 "uuid": "a8357003-49d1-4b49-8478-bf81f44a1eb3" 00:19:29.928 }, 00:19:29.928 { 00:19:29.928 "nsid": 2, 00:19:29.928 "bdev_name": "Malloc3", 00:19:29.928 "name": "Malloc3", 00:19:29.928 "nguid": "A3A206C6AE3A4D35B7C600FBAA5C0267", 00:19:29.928 "uuid": "a3a206c6-ae3a-4d35-b7c6-00fbaa5c0267" 00:19:29.928 } 00:19:29.928 ] 00:19:29.928 }, 00:19:29.928 { 00:19:29.928 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:29.928 "subtype": "NVMe", 00:19:29.928 "listen_addresses": [ 00:19:29.928 { 00:19:29.928 "trtype": "VFIOUSER", 00:19:29.928 "adrfam": "IPv4", 00:19:29.928 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:29.928 "trsvcid": "0" 00:19:29.928 } 00:19:29.928 ], 00:19:29.928 "allow_any_host": true, 00:19:29.928 "hosts": [], 00:19:29.928 "serial_number": "SPDK2", 00:19:29.928 "model_number": "SPDK bdev Controller", 00:19:29.928 "max_namespaces": 32, 00:19:29.928 "min_cntlid": 1, 00:19:29.928 "max_cntlid": 65519, 00:19:29.928 "namespaces": [ 
00:19:29.928 { 00:19:29.928 "nsid": 1, 00:19:29.928 "bdev_name": "Malloc2", 00:19:29.928 "name": "Malloc2", 00:19:29.928 "nguid": "DA7C86E21285448C8B9DB8A6D0C00F79", 00:19:29.928 "uuid": "da7c86e2-1285-448c-8b9d-b8a6d0c00f79" 00:19:29.928 } 00:19:29.928 ] 00:19:29.928 } 00:19:29.928 ] 00:19:29.928 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3119996 00:19:29.928 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:29.928 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:19:29.928 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:19:29.928 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:19:29.928 [2024-10-01 15:37:09.381274] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:19:29.928 [2024-10-01 15:37:09.381320] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3120032 ] 00:19:30.193 [2024-10-01 15:37:09.395192] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:19:30.193 [2024-10-01 15:37:09.410939] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:19:30.193 [2024-10-01 15:37:09.421094] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:30.193 [2024-10-01 15:37:09.421110] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f3dadd80000 00:19:30.193 [2024-10-01 15:37:09.422099] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:30.193 [2024-10-01 15:37:09.423109] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:30.193 [2024-10-01 15:37:09.424112] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:30.193 [2024-10-01 15:37:09.425119] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:30.193 [2024-10-01 15:37:09.426131] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:30.193 [2024-10-01 15:37:09.427136] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:30.193 [2024-10-01 15:37:09.428140] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:30.193 [2024-10-01 15:37:09.429145] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:30.193 [2024-10-01 15:37:09.430153] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 
00:19:30.193 [2024-10-01 15:37:09.430160] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f3daca89000 00:19:30.193 [2024-10-01 15:37:09.431081] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:30.193 [2024-10-01 15:37:09.443451] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:19:30.193 [2024-10-01 15:37:09.443474] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:19:30.193 [2024-10-01 15:37:09.445521] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:19:30.193 [2024-10-01 15:37:09.445553] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:19:30.193 [2024-10-01 15:37:09.445610] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:19:30.193 [2024-10-01 15:37:09.445622] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:19:30.193 [2024-10-01 15:37:09.445626] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:19:30.193 [2024-10-01 15:37:09.446522] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:19:30.193 [2024-10-01 15:37:09.446529] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:19:30.193 [2024-10-01 15:37:09.446534] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:19:30.193 [2024-10-01 15:37:09.447529] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:19:30.193 [2024-10-01 15:37:09.447536] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:19:30.193 [2024-10-01 15:37:09.447542] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:19:30.193 [2024-10-01 15:37:09.448534] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:19:30.193 [2024-10-01 15:37:09.448541] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:30.193 [2024-10-01 15:37:09.449537] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:19:30.193 [2024-10-01 15:37:09.449544] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:19:30.193 [2024-10-01 15:37:09.449548] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:19:30.193 [2024-10-01 15:37:09.449553] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:30.193 [2024-10-01 15:37:09.449657] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:19:30.193 [2024-10-01 15:37:09.449660] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:30.193 [2024-10-01 15:37:09.449664] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:19:30.193 [2024-10-01 15:37:09.450546] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:19:30.193 [2024-10-01 15:37:09.451548] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:19:30.193 [2024-10-01 15:37:09.452558] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:19:30.193 [2024-10-01 15:37:09.453559] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:30.193 [2024-10-01 15:37:09.453593] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:30.193 [2024-10-01 15:37:09.454565] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:19:30.193 [2024-10-01 15:37:09.454572] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:30.193 [2024-10-01 15:37:09.454575] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:19:30.193 [2024-10-01 15:37:09.454590] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:19:30.193 [2024-10-01 15:37:09.454598] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:19:30.193 [2024-10-01 15:37:09.454607] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:30.193 [2024-10-01 15:37:09.454611] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:30.193 [2024-10-01 15:37:09.454613] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:30.193 [2024-10-01 15:37:09.454623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:30.193 [2024-10-01 15:37:09.460901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:19:30.193 [2024-10-01 15:37:09.460910] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:19:30.193 [2024-10-01 15:37:09.460914] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:19:30.193 [2024-10-01 15:37:09.460917] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:19:30.193 [2024-10-01 15:37:09.460921] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:19:30.193 [2024-10-01 15:37:09.460924] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:19:30.193 [2024-10-01 15:37:09.460928] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:19:30.193 [2024-10-01 15:37:09.460931] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting 
state to configure AER (timeout 30000 ms) 00:19:30.194 [2024-10-01 15:37:09.460937] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:19:30.194 [2024-10-01 15:37:09.460945] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:19:30.194 [2024-10-01 15:37:09.468899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:19:30.194 [2024-10-01 15:37:09.468909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.194 [2024-10-01 15:37:09.468915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.194 [2024-10-01 15:37:09.468921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.194 [2024-10-01 15:37:09.468927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.194 [2024-10-01 15:37:09.468931] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:19:30.194 [2024-10-01 15:37:09.468940] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:30.194 [2024-10-01 15:37:09.468946] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:19:30.194 [2024-10-01 15:37:09.476900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 
p:1 m:0 dnr:0 00:19:30.194 [2024-10-01 15:37:09.476907] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:19:30.194 [2024-10-01 15:37:09.476911] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:19:30.194 [2024-10-01 15:37:09.476916] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:19:30.194 [2024-10-01 15:37:09.476922] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:19:30.194 [2024-10-01 15:37:09.476928] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:30.194 [2024-10-01 15:37:09.484900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:19:30.194 [2024-10-01 15:37:09.484947] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:19:30.194 [2024-10-01 15:37:09.484953] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:19:30.194 [2024-10-01 15:37:09.484958] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:19:30.194 [2024-10-01 15:37:09.484961] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:19:30.194 [2024-10-01 15:37:09.484964] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:30.194 [2024-10-01 15:37:09.484969] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:19:30.194 [2024-10-01 15:37:09.492900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:19:30.194 [2024-10-01 15:37:09.492910] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:19:30.194 [2024-10-01 15:37:09.492917] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:19:30.194 [2024-10-01 15:37:09.492923] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:19:30.194 [2024-10-01 15:37:09.492928] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:30.194 [2024-10-01 15:37:09.492931] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:30.194 [2024-10-01 15:37:09.492934] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:30.194 [2024-10-01 15:37:09.492938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:30.194 [2024-10-01 15:37:09.500900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:19:30.194 [2024-10-01 15:37:09.500911] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:30.194 [2024-10-01 15:37:09.500919] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 
30000 ms) 00:19:30.194 [2024-10-01 15:37:09.500924] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:30.194 [2024-10-01 15:37:09.500927] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:30.194 [2024-10-01 15:37:09.500930] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:30.194 [2024-10-01 15:37:09.500935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:30.194 [2024-10-01 15:37:09.508899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:19:30.194 [2024-10-01 15:37:09.508907] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:30.194 [2024-10-01 15:37:09.508911] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:19:30.194 [2024-10-01 15:37:09.508918] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:19:30.194 [2024-10-01 15:37:09.508922] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:19:30.194 [2024-10-01 15:37:09.508925] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:30.194 [2024-10-01 15:37:09.508929] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:19:30.194 [2024-10-01 15:37:09.508933] 
nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:19:30.194 [2024-10-01 15:37:09.508936] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:19:30.194 [2024-10-01 15:37:09.508940] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:19:30.194 [2024-10-01 15:37:09.508953] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:19:30.194 [2024-10-01 15:37:09.516898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:19:30.194 [2024-10-01 15:37:09.516908] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:19:30.194 [2024-10-01 15:37:09.524897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:19:30.194 [2024-10-01 15:37:09.524908] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:19:30.194 [2024-10-01 15:37:09.532899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:19:30.194 [2024-10-01 15:37:09.532910] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:30.194 [2024-10-01 15:37:09.540899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:19:30.194 [2024-10-01 15:37:09.540914] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:19:30.194 
[2024-10-01 15:37:09.540917] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:19:30.194 [2024-10-01 15:37:09.540920] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:19:30.194 [2024-10-01 15:37:09.540924] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:19:30.194 [2024-10-01 15:37:09.540927] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:19:30.194 [2024-10-01 15:37:09.540932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:19:30.194 [2024-10-01 15:37:09.540937] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:19:30.194 [2024-10-01 15:37:09.540940] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:19:30.194 [2024-10-01 15:37:09.540943] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:30.194 [2024-10-01 15:37:09.540947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:19:30.194 [2024-10-01 15:37:09.540953] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:19:30.194 [2024-10-01 15:37:09.540956] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:30.194 [2024-10-01 15:37:09.540958] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:30.194 [2024-10-01 15:37:09.540962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:30.194 [2024-10-01 15:37:09.540968] 
nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:19:30.194 [2024-10-01 15:37:09.540971] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:19:30.194 [2024-10-01 15:37:09.540973] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:30.195 [2024-10-01 15:37:09.540977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:19:30.195 [2024-10-01 15:37:09.548898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:19:30.195 [2024-10-01 15:37:09.548909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:19:30.195 [2024-10-01 15:37:09.548917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:19:30.195 [2024-10-01 15:37:09.548922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:19:30.195 ===================================================== 00:19:30.195 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:30.195 ===================================================== 00:19:30.195 Controller Capabilities/Features 00:19:30.195 ================================ 00:19:30.195 Vendor ID: 4e58 00:19:30.195 Subsystem Vendor ID: 4e58 00:19:30.195 Serial Number: SPDK2 00:19:30.195 Model Number: SPDK bdev Controller 00:19:30.195 Firmware Version: 25.01 00:19:30.195 Recommended Arb Burst: 6 00:19:30.195 IEEE OUI Identifier: 8d 6b 50 00:19:30.195 Multi-path I/O 00:19:30.195 May have multiple subsystem ports: Yes 00:19:30.195 May have multiple controllers: Yes 00:19:30.195 Associated with SR-IOV VF: No 
00:19:30.195 Max Data Transfer Size: 131072 00:19:30.195 Max Number of Namespaces: 32 00:19:30.195 Max Number of I/O Queues: 127 00:19:30.195 NVMe Specification Version (VS): 1.3 00:19:30.195 NVMe Specification Version (Identify): 1.3 00:19:30.195 Maximum Queue Entries: 256 00:19:30.195 Contiguous Queues Required: Yes 00:19:30.195 Arbitration Mechanisms Supported 00:19:30.195 Weighted Round Robin: Not Supported 00:19:30.195 Vendor Specific: Not Supported 00:19:30.195 Reset Timeout: 15000 ms 00:19:30.195 Doorbell Stride: 4 bytes 00:19:30.195 NVM Subsystem Reset: Not Supported 00:19:30.195 Command Sets Supported 00:19:30.195 NVM Command Set: Supported 00:19:30.195 Boot Partition: Not Supported 00:19:30.195 Memory Page Size Minimum: 4096 bytes 00:19:30.195 Memory Page Size Maximum: 4096 bytes 00:19:30.195 Persistent Memory Region: Not Supported 00:19:30.195 Optional Asynchronous Events Supported 00:19:30.195 Namespace Attribute Notices: Supported 00:19:30.195 Firmware Activation Notices: Not Supported 00:19:30.195 ANA Change Notices: Not Supported 00:19:30.195 PLE Aggregate Log Change Notices: Not Supported 00:19:30.195 LBA Status Info Alert Notices: Not Supported 00:19:30.195 EGE Aggregate Log Change Notices: Not Supported 00:19:30.195 Normal NVM Subsystem Shutdown event: Not Supported 00:19:30.195 Zone Descriptor Change Notices: Not Supported 00:19:30.195 Discovery Log Change Notices: Not Supported 00:19:30.195 Controller Attributes 00:19:30.195 128-bit Host Identifier: Supported 00:19:30.195 Non-Operational Permissive Mode: Not Supported 00:19:30.195 NVM Sets: Not Supported 00:19:30.195 Read Recovery Levels: Not Supported 00:19:30.195 Endurance Groups: Not Supported 00:19:30.195 Predictable Latency Mode: Not Supported 00:19:30.195 Traffic Based Keep ALive: Not Supported 00:19:30.195 Namespace Granularity: Not Supported 00:19:30.195 SQ Associations: Not Supported 00:19:30.195 UUID List: Not Supported 00:19:30.195 Multi-Domain Subsystem: Not Supported 00:19:30.195 
Fixed Capacity Management: Not Supported 00:19:30.195 Variable Capacity Management: Not Supported 00:19:30.195 Delete Endurance Group: Not Supported 00:19:30.195 Delete NVM Set: Not Supported 00:19:30.195 Extended LBA Formats Supported: Not Supported 00:19:30.195 Flexible Data Placement Supported: Not Supported 00:19:30.195 00:19:30.195 Controller Memory Buffer Support 00:19:30.195 ================================ 00:19:30.195 Supported: No 00:19:30.195 00:19:30.195 Persistent Memory Region Support 00:19:30.195 ================================ 00:19:30.195 Supported: No 00:19:30.195 00:19:30.195 Admin Command Set Attributes 00:19:30.195 ============================ 00:19:30.195 Security Send/Receive: Not Supported 00:19:30.195 Format NVM: Not Supported 00:19:30.195 Firmware Activate/Download: Not Supported 00:19:30.195 Namespace Management: Not Supported 00:19:30.195 Device Self-Test: Not Supported 00:19:30.195 Directives: Not Supported 00:19:30.195 NVMe-MI: Not Supported 00:19:30.195 Virtualization Management: Not Supported 00:19:30.195 Doorbell Buffer Config: Not Supported 00:19:30.195 Get LBA Status Capability: Not Supported 00:19:30.195 Command & Feature Lockdown Capability: Not Supported 00:19:30.195 Abort Command Limit: 4 00:19:30.195 Async Event Request Limit: 4 00:19:30.195 Number of Firmware Slots: N/A 00:19:30.195 Firmware Slot 1 Read-Only: N/A 00:19:30.195 Firmware Activation Without Reset: N/A 00:19:30.195 Multiple Update Detection Support: N/A 00:19:30.195 Firmware Update Granularity: No Information Provided 00:19:30.195 Per-Namespace SMART Log: No 00:19:30.195 Asymmetric Namespace Access Log Page: Not Supported 00:19:30.195 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:19:30.195 Command Effects Log Page: Supported 00:19:30.195 Get Log Page Extended Data: Supported 00:19:30.195 Telemetry Log Pages: Not Supported 00:19:30.195 Persistent Event Log Pages: Not Supported 00:19:30.195 Supported Log Pages Log Page: May Support 00:19:30.195 Commands Supported & 
Effects Log Page: Not Supported 00:19:30.195 Feature Identifiers & Effects Log Page:May Support 00:19:30.195 NVMe-MI Commands & Effects Log Page: May Support 00:19:30.195 Data Area 4 for Telemetry Log: Not Supported 00:19:30.195 Error Log Page Entries Supported: 128 00:19:30.195 Keep Alive: Supported 00:19:30.195 Keep Alive Granularity: 10000 ms 00:19:30.195 00:19:30.195 NVM Command Set Attributes 00:19:30.195 ========================== 00:19:30.195 Submission Queue Entry Size 00:19:30.195 Max: 64 00:19:30.195 Min: 64 00:19:30.195 Completion Queue Entry Size 00:19:30.195 Max: 16 00:19:30.195 Min: 16 00:19:30.195 Number of Namespaces: 32 00:19:30.195 Compare Command: Supported 00:19:30.195 Write Uncorrectable Command: Not Supported 00:19:30.195 Dataset Management Command: Supported 00:19:30.195 Write Zeroes Command: Supported 00:19:30.195 Set Features Save Field: Not Supported 00:19:30.195 Reservations: Not Supported 00:19:30.195 Timestamp: Not Supported 00:19:30.195 Copy: Supported 00:19:30.195 Volatile Write Cache: Present 00:19:30.195 Atomic Write Unit (Normal): 1 00:19:30.195 Atomic Write Unit (PFail): 1 00:19:30.195 Atomic Compare & Write Unit: 1 00:19:30.195 Fused Compare & Write: Supported 00:19:30.195 Scatter-Gather List 00:19:30.195 SGL Command Set: Supported (Dword aligned) 00:19:30.195 SGL Keyed: Not Supported 00:19:30.195 SGL Bit Bucket Descriptor: Not Supported 00:19:30.195 SGL Metadata Pointer: Not Supported 00:19:30.195 Oversized SGL: Not Supported 00:19:30.195 SGL Metadata Address: Not Supported 00:19:30.195 SGL Offset: Not Supported 00:19:30.195 Transport SGL Data Block: Not Supported 00:19:30.195 Replay Protected Memory Block: Not Supported 00:19:30.195 00:19:30.195 Firmware Slot Information 00:19:30.195 ========================= 00:19:30.195 Active slot: 1 00:19:30.195 Slot 1 Firmware Revision: 25.01 00:19:30.195 00:19:30.195 00:19:30.195 Commands Supported and Effects 00:19:30.195 ============================== 00:19:30.195 Admin Commands 
00:19:30.195 -------------- 00:19:30.195 Get Log Page (02h): Supported 00:19:30.195 Identify (06h): Supported 00:19:30.195 Abort (08h): Supported 00:19:30.195 Set Features (09h): Supported 00:19:30.195 Get Features (0Ah): Supported 00:19:30.195 Asynchronous Event Request (0Ch): Supported 00:19:30.195 Keep Alive (18h): Supported 00:19:30.195 I/O Commands 00:19:30.195 ------------ 00:19:30.195 Flush (00h): Supported LBA-Change 00:19:30.195 Write (01h): Supported LBA-Change 00:19:30.195 Read (02h): Supported 00:19:30.195 Compare (05h): Supported 00:19:30.195 Write Zeroes (08h): Supported LBA-Change 00:19:30.195 Dataset Management (09h): Supported LBA-Change 00:19:30.195 Copy (19h): Supported LBA-Change 00:19:30.195 00:19:30.195 Error Log 00:19:30.195 ========= 00:19:30.195 00:19:30.195 Arbitration 00:19:30.195 =========== 00:19:30.195 Arbitration Burst: 1 00:19:30.195 00:19:30.195 Power Management 00:19:30.195 ================ 00:19:30.195 Number of Power States: 1 00:19:30.195 Current Power State: Power State #0 00:19:30.195 Power State #0: 00:19:30.195 Max Power: 0.00 W 00:19:30.195 Non-Operational State: Operational 00:19:30.195 Entry Latency: Not Reported 00:19:30.195 Exit Latency: Not Reported 00:19:30.195 Relative Read Throughput: 0 00:19:30.195 Relative Read Latency: 0 00:19:30.195 Relative Write Throughput: 0 00:19:30.196 Relative Write Latency: 0 00:19:30.196 Idle Power: Not Reported 00:19:30.196 Active Power: Not Reported 00:19:30.196 Non-Operational Permissive Mode: Not Supported 00:19:30.196 00:19:30.196 Health Information 00:19:30.196 ================== 00:19:30.196 Critical Warnings: 00:19:30.196 Available Spare Space: OK 00:19:30.196 Temperature: OK 00:19:30.196 Device Reliability: OK 00:19:30.196 Read Only: No 00:19:30.196 Volatile Memory Backup: OK 00:19:30.196 Current Temperature: 0 Kelvin (-273 Celsius) 00:19:30.196 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:19:30.196 Available Spare: 0% 00:19:30.196 Available Sp[2024-10-01 15:37:09.548990] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:19:30.196 [2024-10-01 15:37:09.556900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:19:30.196 [2024-10-01 15:37:09.556923] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:19:30.196 [2024-10-01 15:37:09.556929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.196 [2024-10-01 15:37:09.556934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.196 [2024-10-01 15:37:09.556939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.196 [2024-10-01 15:37:09.556943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.196 [2024-10-01 15:37:09.556978] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:19:30.196 [2024-10-01 15:37:09.556986] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:19:30.196 [2024-10-01 15:37:09.557984] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:30.196 [2024-10-01 15:37:09.558021] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:19:30.196 [2024-10-01 15:37:09.558026] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:19:30.196 [2024-10-01 
15:37:09.558989] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:19:30.196 [2024-10-01 15:37:09.558998] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:19:30.196 [2024-10-01 15:37:09.559044] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:19:30.196 [2024-10-01 15:37:09.561902] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:30.196 are Threshold: 0% 00:19:30.196 Life Percentage Used: 0% 00:19:30.196 Data Units Read: 0 00:19:30.196 Data Units Written: 0 00:19:30.196 Host Read Commands: 0 00:19:30.196 Host Write Commands: 0 00:19:30.196 Controller Busy Time: 0 minutes 00:19:30.196 Power Cycles: 0 00:19:30.196 Power On Hours: 0 hours 00:19:30.196 Unsafe Shutdowns: 0 00:19:30.196 Unrecoverable Media Errors: 0 00:19:30.196 Lifetime Error Log Entries: 0 00:19:30.196 Warning Temperature Time: 0 minutes 00:19:30.196 Critical Temperature Time: 0 minutes 00:19:30.196 00:19:30.196 Number of Queues 00:19:30.196 ================ 00:19:30.196 Number of I/O Submission Queues: 127 00:19:30.196 Number of I/O Completion Queues: 127 00:19:30.196 00:19:30.196 Active Namespaces 00:19:30.196 ================= 00:19:30.196 Namespace ID:1 00:19:30.196 Error Recovery Timeout: Unlimited 00:19:30.196 Command Set Identifier: NVM (00h) 00:19:30.196 Deallocate: Supported 00:19:30.196 Deallocated/Unwritten Error: Not Supported 00:19:30.196 Deallocated Read Value: Unknown 00:19:30.196 Deallocate in Write Zeroes: Not Supported 00:19:30.196 Deallocated Guard Field: 0xFFFF 00:19:30.196 Flush: Supported 00:19:30.196 Reservation: Supported 00:19:30.196 Namespace Sharing Capabilities: Multiple Controllers 00:19:30.196 Size (in LBAs): 131072 (0GiB) 00:19:30.196 Capacity (in LBAs): 
131072 (0GiB) 00:19:30.196 Utilization (in LBAs): 131072 (0GiB) 00:19:30.196 NGUID: DA7C86E21285448C8B9DB8A6D0C00F79 00:19:30.196 UUID: da7c86e2-1285-448c-8b9d-b8a6d0c00f79 00:19:30.196 Thin Provisioning: Not Supported 00:19:30.196 Per-NS Atomic Units: Yes 00:19:30.196 Atomic Boundary Size (Normal): 0 00:19:30.196 Atomic Boundary Size (PFail): 0 00:19:30.196 Atomic Boundary Offset: 0 00:19:30.196 Maximum Single Source Range Length: 65535 00:19:30.196 Maximum Copy Length: 65535 00:19:30.196 Maximum Source Range Count: 1 00:19:30.196 NGUID/EUI64 Never Reused: No 00:19:30.196 Namespace Write Protected: No 00:19:30.196 Number of LBA Formats: 1 00:19:30.196 Current LBA Format: LBA Format #00 00:19:30.196 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:30.196 00:19:30.196 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:19:30.457 [2024-10-01 15:37:09.730267] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:35.742 Initializing NVMe Controllers 00:19:35.742 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:35.742 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:19:35.742 Initialization complete. Launching workers. 
00:19:35.742 ======================================================== 00:19:35.742 Latency(us) 00:19:35.742 Device Information : IOPS MiB/s Average min max 00:19:35.742 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39970.57 156.14 3203.15 835.88 8782.31 00:19:35.742 ======================================================== 00:19:35.742 Total : 39970.57 156.14 3203.15 835.88 8782.31 00:19:35.742 00:19:35.742 [2024-10-01 15:37:14.838094] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:35.742 15:37:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:19:35.742 [2024-10-01 15:37:15.017632] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:41.023 Initializing NVMe Controllers 00:19:41.023 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:41.023 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:19:41.023 Initialization complete. Launching workers. 
00:19:41.023 ======================================================== 00:19:41.023 Latency(us) 00:19:41.023 Device Information : IOPS MiB/s Average min max 00:19:41.023 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39962.00 156.10 3202.92 847.04 6827.54 00:19:41.023 ======================================================== 00:19:41.023 Total : 39962.00 156.10 3202.92 847.04 6827.54 00:19:41.023 00:19:41.023 [2024-10-01 15:37:20.036452] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:41.023 15:37:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:19:41.023 [2024-10-01 15:37:20.229663] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:46.304 [2024-10-01 15:37:25.366972] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:46.304 Initializing NVMe Controllers 00:19:46.304 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:46.304 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:46.304 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:19:46.304 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:19:46.304 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:19:46.304 Initialization complete. Launching workers. 
00:19:46.304 Starting thread on core 2 00:19:46.304 Starting thread on core 3 00:19:46.304 Starting thread on core 1 00:19:46.304 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:19:46.304 [2024-10-01 15:37:25.604304] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:49.602 [2024-10-01 15:37:28.689846] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:49.602 Initializing NVMe Controllers 00:19:49.602 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:49.602 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:49.602 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:19:49.602 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:19:49.602 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:19:49.602 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:19:49.602 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:19:49.602 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:19:49.602 Initialization complete. Launching workers. 
00:19:49.602 Starting thread on core 1 with urgent priority queue 00:19:49.602 Starting thread on core 2 with urgent priority queue 00:19:49.602 Starting thread on core 3 with urgent priority queue 00:19:49.602 Starting thread on core 0 with urgent priority queue 00:19:49.602 SPDK bdev Controller (SPDK2 ) core 0: 11389.67 IO/s 8.78 secs/100000 ios 00:19:49.602 SPDK bdev Controller (SPDK2 ) core 1: 7479.00 IO/s 13.37 secs/100000 ios 00:19:49.602 SPDK bdev Controller (SPDK2 ) core 2: 10481.33 IO/s 9.54 secs/100000 ios 00:19:49.602 SPDK bdev Controller (SPDK2 ) core 3: 10501.67 IO/s 9.52 secs/100000 ios 00:19:49.602 ======================================================== 00:19:49.602 00:19:49.602 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:19:49.602 [2024-10-01 15:37:28.918315] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:49.602 Initializing NVMe Controllers 00:19:49.602 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:49.602 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:49.602 Namespace ID: 1 size: 0GB 00:19:49.602 Initialization complete. 00:19:49.602 INFO: using host memory buffer for IO 00:19:49.602 Hello world! 
00:19:49.602 [2024-10-01 15:37:28.930404] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:49.602 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:19:49.862 [2024-10-01 15:37:29.151790] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:50.808 Initializing NVMe Controllers 00:19:50.808 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:50.808 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:50.808 Initialization complete. Launching workers. 00:19:50.808 submit (in ns) avg, min, max = 6447.4, 2837.5, 3999945.8 00:19:50.808 complete (in ns) avg, min, max = 15338.4, 1630.8, 3999158.3 00:19:50.808 00:19:50.808 Submit histogram 00:19:50.808 ================ 00:19:50.808 Range in us Cumulative Count 00:19:50.808 2.827 - 2.840: 0.0048% ( 1) 00:19:50.808 2.840 - 2.853: 0.8214% ( 170) 00:19:50.808 2.853 - 2.867: 2.8388% ( 420) 00:19:50.808 2.867 - 2.880: 5.2500% ( 502) 00:19:50.808 2.880 - 2.893: 7.6373% ( 497) 00:19:50.808 2.893 - 2.907: 10.9996% ( 700) 00:19:50.808 2.907 - 2.920: 16.1487% ( 1072) 00:19:50.808 2.920 - 2.933: 22.6380% ( 1351) 00:19:50.808 2.933 - 2.947: 29.4827% ( 1425) 00:19:50.808 2.947 - 2.960: 36.0344% ( 1364) 00:19:50.808 2.960 - 2.973: 42.4324% ( 1332) 00:19:50.808 2.973 - 2.987: 48.3885% ( 1240) 00:19:50.808 2.987 - 3.000: 55.0651% ( 1390) 00:19:50.808 3.000 - 3.013: 63.7062% ( 1799) 00:19:50.808 3.013 - 3.027: 72.9718% ( 1929) 00:19:50.808 3.027 - 3.040: 80.8780% ( 1646) 00:19:50.808 3.040 - 3.053: 87.4922% ( 1377) 00:19:50.808 3.053 - 3.067: 92.4732% ( 1037) 00:19:50.808 3.067 - 3.080: 95.3888% ( 607) 00:19:50.808 3.080 - 3.093: 96.9163% ( 318) 00:19:50.808 3.093 - 3.107: 
98.1555% ( 258) 00:19:50.808 3.107 - 3.120: 98.8760% ( 150) 00:19:50.808 3.120 - 3.133: 99.2507% ( 78) 00:19:50.808 3.133 - 3.147: 99.3948% ( 30) 00:19:50.808 3.147 - 3.160: 99.4476% ( 11) 00:19:50.808 3.160 - 3.173: 99.4572% ( 2) 00:19:50.808 3.173 - 3.187: 99.4716% ( 3) 00:19:50.808 3.187 - 3.200: 99.4764% ( 1) 00:19:50.808 3.240 - 3.253: 99.4860% ( 2) 00:19:50.808 3.253 - 3.267: 99.4957% ( 2) 00:19:50.808 3.267 - 3.280: 99.5053% ( 2) 00:19:50.808 3.307 - 3.320: 99.5101% ( 1) 00:19:50.808 3.320 - 3.333: 99.5149% ( 1) 00:19:50.808 3.347 - 3.360: 99.5245% ( 2) 00:19:50.808 3.373 - 3.387: 99.5389% ( 3) 00:19:50.808 3.400 - 3.413: 99.5437% ( 1) 00:19:50.808 3.413 - 3.440: 99.5485% ( 1) 00:19:50.808 3.467 - 3.493: 99.5533% ( 1) 00:19:50.808 3.493 - 3.520: 99.5629% ( 2) 00:19:50.808 3.520 - 3.547: 99.5725% ( 2) 00:19:50.808 3.547 - 3.573: 99.5869% ( 3) 00:19:50.808 3.573 - 3.600: 99.5917% ( 1) 00:19:50.808 3.600 - 3.627: 99.6013% ( 2) 00:19:50.808 3.653 - 3.680: 99.6061% ( 1) 00:19:50.808 3.760 - 3.787: 99.6109% ( 1) 00:19:50.808 3.787 - 3.813: 99.6157% ( 1) 00:19:50.808 3.813 - 3.840: 99.6205% ( 1) 00:19:50.808 4.187 - 4.213: 99.6253% ( 1) 00:19:50.808 4.213 - 4.240: 99.6301% ( 1) 00:19:50.808 4.507 - 4.533: 99.6349% ( 1) 00:19:50.808 4.720 - 4.747: 99.6446% ( 2) 00:19:50.808 4.773 - 4.800: 99.6494% ( 1) 00:19:50.808 4.800 - 4.827: 99.6590% ( 2) 00:19:50.808 4.853 - 4.880: 99.6638% ( 1) 00:19:50.808 4.960 - 4.987: 99.6734% ( 2) 00:19:50.808 4.987 - 5.013: 99.6830% ( 2) 00:19:50.808 5.093 - 5.120: 99.6878% ( 1) 00:19:50.808 5.120 - 5.147: 99.6926% ( 1) 00:19:50.808 5.147 - 5.173: 99.6974% ( 1) 00:19:50.808 5.227 - 5.253: 99.7022% ( 1) 00:19:50.808 5.307 - 5.333: 99.7118% ( 2) 00:19:50.808 5.467 - 5.493: 99.7166% ( 1) 00:19:50.808 5.520 - 5.547: 99.7214% ( 1) 00:19:50.808 5.547 - 5.573: 99.7262% ( 1) 00:19:50.808 5.600 - 5.627: 99.7310% ( 1) 00:19:50.808 5.733 - 5.760: 99.7358% ( 1) 00:19:50.808 5.813 - 5.840: 99.7406% ( 1) 00:19:50.808 5.893 - 5.920: 99.7454% ( 1) 
00:19:50.808 5.947 - 5.973: 99.7550% ( 2) 00:19:50.808 6.000 - 6.027: 99.7598% ( 1) 00:19:50.808 6.133 - 6.160: 99.7646% ( 1) 00:19:50.808 6.160 - 6.187: 99.7742% ( 2) 00:19:50.808 6.213 - 6.240: 99.7790% ( 1) 00:19:50.809 6.240 - 6.267: 99.7839% ( 1) 00:19:50.809 6.320 - 6.347: 99.7887% ( 1) 00:19:50.809 6.373 - 6.400: 99.7935% ( 1) 00:19:50.809 6.400 - 6.427: 99.7983% ( 1) 00:19:50.809 6.453 - 6.480: 99.8031% ( 1) 00:19:50.809 6.480 - 6.507: 99.8079% ( 1) 00:19:50.809 [2024-10-01 15:37:30.243443] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:51.086 6.533 - 6.560: 99.8127% ( 1) 00:19:51.086 6.560 - 6.587: 99.8175% ( 1) 00:19:51.086 6.587 - 6.613: 99.8223% ( 1) 00:19:51.086 6.613 - 6.640: 99.8271% ( 1) 00:19:51.086 6.667 - 6.693: 99.8319% ( 1) 00:19:51.086 6.933 - 6.987: 99.8415% ( 2) 00:19:51.086 6.987 - 7.040: 99.8511% ( 2) 00:19:51.086 7.040 - 7.093: 99.8607% ( 2) 00:19:51.086 7.147 - 7.200: 99.8655% ( 1) 00:19:51.086 7.253 - 7.307: 99.8703% ( 1) 00:19:51.086 7.413 - 7.467: 99.8751% ( 1) 00:19:51.086 7.467 - 7.520: 99.8895% ( 3) 00:19:51.086 7.520 - 7.573: 99.8943% ( 1) 00:19:51.086 7.573 - 7.627: 99.8991% ( 1) 00:19:51.086 8.053 - 8.107: 99.9039% ( 1) 00:19:51.086 8.480 - 8.533: 99.9087% ( 1) 00:19:51.086 9.280 - 9.333: 99.9135% ( 1) 00:19:51.086 3986.773 - 4014.080: 100.0000% ( 18) 00:19:51.086 00:19:51.086 Complete histogram 00:19:51.086 ================== 00:19:51.086 Range in us Cumulative Count 00:19:51.086 1.627 - 1.633: 0.0144% ( 3) 00:19:51.086 1.633 - 1.640: 0.0192% ( 1) 00:19:51.086 1.640 - 1.647: 0.5332% ( 107) 00:19:51.086 1.647 - 1.653: 1.1000% ( 118) 00:19:51.086 1.653 - 1.660: 1.1720% ( 15) 00:19:51.086 1.660 - 1.667: 1.2825% ( 23) 00:19:51.086 1.667 - 1.673: 1.3689% ( 18) 00:19:51.086 1.673 - 1.680: 1.4026% ( 7) 00:19:51.086 1.680 - 1.687: 32.2398% ( 6420) 00:19:51.086 1.687 - 1.693: 50.3915% ( 3779) 00:19:51.086 1.693 - 1.700: 55.6943% ( 1104) 00:19:51.086 1.700 - 1.707: 69.5182% ( 
2878) 00:19:51.086 1.707 - 1.720: 80.0375% ( 2190) 00:19:51.086 1.720 - 1.733: 82.7321% ( 561) 00:19:51.086 1.733 - 1.747: 84.2644% ( 319) 00:19:51.086 1.747 - 1.760: 88.9764% ( 981) 00:19:51.086 1.760 - 1.773: 94.3129% ( 1111) 00:19:51.086 1.773 - 1.787: 97.5167% ( 667) 00:19:51.086 1.787 - 1.800: 98.9193% ( 292) 00:19:51.086 1.800 - 1.813: 99.3323% ( 86) 00:19:51.086 1.813 - 1.827: 99.3996% ( 14) 00:19:51.086 1.827 - 1.840: 99.4284% ( 6) 00:19:51.086 1.840 - 1.853: 99.4380% ( 2) 00:19:51.086 1.853 - 1.867: 99.4428% ( 1) 00:19:51.086 1.867 - 1.880: 99.4476% ( 1) 00:19:51.086 1.880 - 1.893: 99.4524% ( 1) 00:19:51.086 1.893 - 1.907: 99.4572% ( 1) 00:19:51.086 1.933 - 1.947: 99.4620% ( 1) 00:19:51.086 1.947 - 1.960: 99.4668% ( 1) 00:19:51.086 1.960 - 1.973: 99.4812% ( 3) 00:19:51.086 1.973 - 1.987: 99.4860% ( 1) 00:19:51.086 1.987 - 2.000: 99.4908% ( 1) 00:19:51.086 2.000 - 2.013: 99.4957% ( 1) 00:19:51.086 2.067 - 2.080: 99.5005% ( 1) 00:19:51.086 2.093 - 2.107: 99.5053% ( 1) 00:19:51.086 2.160 - 2.173: 99.5101% ( 1) 00:19:51.086 2.173 - 2.187: 99.5149% ( 1) 00:19:51.086 2.267 - 2.280: 99.5197% ( 1) 00:19:51.086 3.573 - 3.600: 99.5245% ( 1) 00:19:51.086 4.400 - 4.427: 99.5293% ( 1) 00:19:51.086 4.507 - 4.533: 99.5341% ( 1) 00:19:51.086 4.533 - 4.560: 99.5389% ( 1) 00:19:51.086 4.667 - 4.693: 99.5437% ( 1) 00:19:51.086 4.800 - 4.827: 99.5485% ( 1) 00:19:51.086 4.853 - 4.880: 99.5533% ( 1) 00:19:51.086 4.907 - 4.933: 99.5629% ( 2) 00:19:51.086 4.987 - 5.013: 99.5677% ( 1) 00:19:51.086 5.067 - 5.093: 99.5725% ( 1) 00:19:51.086 5.147 - 5.173: 99.5773% ( 1) 00:19:51.086 5.653 - 5.680: 99.5821% ( 1) 00:19:51.086 5.707 - 5.733: 99.5869% ( 1) 00:19:51.086 5.760 - 5.787: 99.5917% ( 1) 00:19:51.086 5.920 - 5.947: 99.5965% ( 1) 00:19:51.086 6.000 - 6.027: 99.6013% ( 1) 00:19:51.086 6.427 - 6.453: 99.6109% ( 2) 00:19:51.086 6.453 - 6.480: 99.6157% ( 1) 00:19:51.086 6.480 - 6.507: 99.6205% ( 1) 00:19:51.086 6.533 - 6.560: 99.6253% ( 1) 00:19:51.086 6.640 - 6.667: 99.6301% ( 1) 
00:19:51.086 8.693 - 8.747: 99.6349% ( 1) 00:19:51.086 10.507 - 10.560: 99.6398% ( 1) 00:19:51.086 11.840 - 11.893: 99.6446% ( 1) 00:19:51.086 13.120 - 13.173: 99.6494% ( 1) 00:19:51.086 13.653 - 13.760: 99.6542% ( 1) 00:19:51.086 33.920 - 34.133: 99.6590% ( 1) 00:19:51.086 3986.773 - 4014.080: 100.0000% ( 71) 00:19:51.086 00:19:51.086 15:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:19:51.086 15:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:19:51.086 15:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:19:51.086 15:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:19:51.086 15:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:51.086 [ 00:19:51.086 { 00:19:51.086 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:51.086 "subtype": "Discovery", 00:19:51.086 "listen_addresses": [], 00:19:51.086 "allow_any_host": true, 00:19:51.086 "hosts": [] 00:19:51.086 }, 00:19:51.086 { 00:19:51.086 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:51.086 "subtype": "NVMe", 00:19:51.086 "listen_addresses": [ 00:19:51.086 { 00:19:51.086 "trtype": "VFIOUSER", 00:19:51.086 "adrfam": "IPv4", 00:19:51.086 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:51.086 "trsvcid": "0" 00:19:51.086 } 00:19:51.086 ], 00:19:51.086 "allow_any_host": true, 00:19:51.086 "hosts": [], 00:19:51.086 "serial_number": "SPDK1", 00:19:51.086 "model_number": "SPDK bdev Controller", 00:19:51.086 "max_namespaces": 32, 00:19:51.086 "min_cntlid": 1, 00:19:51.086 "max_cntlid": 65519, 00:19:51.086 "namespaces": [ 00:19:51.086 { 00:19:51.086 
"nsid": 1, 00:19:51.086 "bdev_name": "Malloc1", 00:19:51.086 "name": "Malloc1", 00:19:51.086 "nguid": "A835700349D14B498478BF81F44A1EB3", 00:19:51.086 "uuid": "a8357003-49d1-4b49-8478-bf81f44a1eb3" 00:19:51.086 }, 00:19:51.086 { 00:19:51.086 "nsid": 2, 00:19:51.086 "bdev_name": "Malloc3", 00:19:51.086 "name": "Malloc3", 00:19:51.086 "nguid": "A3A206C6AE3A4D35B7C600FBAA5C0267", 00:19:51.086 "uuid": "a3a206c6-ae3a-4d35-b7c6-00fbaa5c0267" 00:19:51.086 } 00:19:51.086 ] 00:19:51.086 }, 00:19:51.086 { 00:19:51.086 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:51.086 "subtype": "NVMe", 00:19:51.086 "listen_addresses": [ 00:19:51.086 { 00:19:51.086 "trtype": "VFIOUSER", 00:19:51.086 "adrfam": "IPv4", 00:19:51.086 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:51.086 "trsvcid": "0" 00:19:51.086 } 00:19:51.086 ], 00:19:51.086 "allow_any_host": true, 00:19:51.086 "hosts": [], 00:19:51.086 "serial_number": "SPDK2", 00:19:51.086 "model_number": "SPDK bdev Controller", 00:19:51.086 "max_namespaces": 32, 00:19:51.086 "min_cntlid": 1, 00:19:51.086 "max_cntlid": 65519, 00:19:51.086 "namespaces": [ 00:19:51.086 { 00:19:51.086 "nsid": 1, 00:19:51.086 "bdev_name": "Malloc2", 00:19:51.086 "name": "Malloc2", 00:19:51.086 "nguid": "DA7C86E21285448C8B9DB8A6D0C00F79", 00:19:51.086 "uuid": "da7c86e2-1285-448c-8b9d-b8a6d0c00f79" 00:19:51.086 } 00:19:51.086 ] 00:19:51.086 } 00:19:51.086 ] 00:19:51.086 15:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:51.087 15:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3124309 00:19:51.087 15:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:19:51.087 15:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:19:51.087 15:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:19:51.087 15:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:51.087 15:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:51.087 15:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:19:51.087 15:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:19:51.087 15:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:19:51.347 [2024-10-01 15:37:30.610271] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:51.347 Malloc4 00:19:51.347 15:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:19:51.607 [2024-10-01 15:37:30.820742] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:51.607 15:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:51.607 Asynchronous Event Request test 00:19:51.607 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:51.607 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:51.607 Registering asynchronous event callbacks... 00:19:51.607 Starting namespace attribute notice tests for all controllers... 
00:19:51.607 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:51.607 aer_cb - Changed Namespace 00:19:51.607 Cleaning up... 00:19:51.607 [ 00:19:51.607 { 00:19:51.607 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:51.607 "subtype": "Discovery", 00:19:51.607 "listen_addresses": [], 00:19:51.607 "allow_any_host": true, 00:19:51.607 "hosts": [] 00:19:51.607 }, 00:19:51.607 { 00:19:51.607 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:51.607 "subtype": "NVMe", 00:19:51.607 "listen_addresses": [ 00:19:51.607 { 00:19:51.607 "trtype": "VFIOUSER", 00:19:51.607 "adrfam": "IPv4", 00:19:51.607 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:51.607 "trsvcid": "0" 00:19:51.607 } 00:19:51.607 ], 00:19:51.607 "allow_any_host": true, 00:19:51.607 "hosts": [], 00:19:51.607 "serial_number": "SPDK1", 00:19:51.608 "model_number": "SPDK bdev Controller", 00:19:51.608 "max_namespaces": 32, 00:19:51.608 "min_cntlid": 1, 00:19:51.608 "max_cntlid": 65519, 00:19:51.608 "namespaces": [ 00:19:51.608 { 00:19:51.608 "nsid": 1, 00:19:51.608 "bdev_name": "Malloc1", 00:19:51.608 "name": "Malloc1", 00:19:51.608 "nguid": "A835700349D14B498478BF81F44A1EB3", 00:19:51.608 "uuid": "a8357003-49d1-4b49-8478-bf81f44a1eb3" 00:19:51.608 }, 00:19:51.608 { 00:19:51.608 "nsid": 2, 00:19:51.608 "bdev_name": "Malloc3", 00:19:51.608 "name": "Malloc3", 00:19:51.608 "nguid": "A3A206C6AE3A4D35B7C600FBAA5C0267", 00:19:51.608 "uuid": "a3a206c6-ae3a-4d35-b7c6-00fbaa5c0267" 00:19:51.608 } 00:19:51.608 ] 00:19:51.608 }, 00:19:51.608 { 00:19:51.608 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:51.608 "subtype": "NVMe", 00:19:51.608 "listen_addresses": [ 00:19:51.608 { 00:19:51.608 "trtype": "VFIOUSER", 00:19:51.608 "adrfam": "IPv4", 00:19:51.608 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:51.608 "trsvcid": "0" 00:19:51.608 } 00:19:51.608 ], 00:19:51.608 "allow_any_host": true, 00:19:51.608 "hosts": [], 00:19:51.608 "serial_number": 
"SPDK2", 00:19:51.608 "model_number": "SPDK bdev Controller", 00:19:51.608 "max_namespaces": 32, 00:19:51.608 "min_cntlid": 1, 00:19:51.608 "max_cntlid": 65519, 00:19:51.608 "namespaces": [ 00:19:51.608 { 00:19:51.608 "nsid": 1, 00:19:51.608 "bdev_name": "Malloc2", 00:19:51.608 "name": "Malloc2", 00:19:51.608 "nguid": "DA7C86E21285448C8B9DB8A6D0C00F79", 00:19:51.608 "uuid": "da7c86e2-1285-448c-8b9d-b8a6d0c00f79" 00:19:51.608 }, 00:19:51.608 { 00:19:51.608 "nsid": 2, 00:19:51.608 "bdev_name": "Malloc4", 00:19:51.608 "name": "Malloc4", 00:19:51.608 "nguid": "F3EA71B487BB4BFCB819E2B944E3092E", 00:19:51.608 "uuid": "f3ea71b4-87bb-4bfc-b819-e2b944e3092e" 00:19:51.608 } 00:19:51.608 ] 00:19:51.608 } 00:19:51.608 ] 00:19:51.608 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3124309 00:19:51.608 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:19:51.608 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3115283 00:19:51.608 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 3115283 ']' 00:19:51.608 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 3115283 00:19:51.608 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:19:51.608 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:51.608 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3115283 00:19:51.869 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:51.869 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:51.869 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 3115283' 00:19:51.869 killing process with pid 3115283 00:19:51.869 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 3115283 00:19:51.869 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 3115283 00:19:51.869 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:19:51.869 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:19:51.869 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:19:51.869 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:19:51.869 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:19:51.869 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3124385 00:19:51.869 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3124385' 00:19:51.869 Process pid: 3124385 00:19:51.869 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:51.869 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:19:51.869 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3124385 00:19:51.869 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 3124385 ']' 00:19:51.869 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:51.869 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:51.869 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:51.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:51.869 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:51.869 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:51.869 [2024-10-01 15:37:31.298022] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:19:51.869 [2024-10-01 15:37:31.298958] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:19:51.869 [2024-10-01 15:37:31.299000] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:52.129 [2024-10-01 15:37:31.330654] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:52.129 [2024-10-01 15:37:31.376308] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:52.129 [2024-10-01 15:37:31.404881] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:52.129 [2024-10-01 15:37:31.404922] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:52.129 [2024-10-01 15:37:31.404928] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:52.129 [2024-10-01 15:37:31.404934] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:52.129 [2024-10-01 15:37:31.404938] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:52.129 [2024-10-01 15:37:31.405102] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:52.129 [2024-10-01 15:37:31.405259] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:19:52.129 [2024-10-01 15:37:31.405410] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:52.129 [2024-10-01 15:37:31.405412] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:19:52.129 [2024-10-01 15:37:31.460990] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:19:52.129 [2024-10-01 15:37:31.462177] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:19:52.129 [2024-10-01 15:37:31.462627] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:19:52.129 [2024-10-01 15:37:31.463189] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:19:52.129 [2024-10-01 15:37:31.463223] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:19:52.701 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:52.701 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:19:52.701 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:19:54.085 15:37:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:19:54.085 15:37:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:19:54.085 15:37:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:19:54.085 15:37:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:54.085 15:37:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:19:54.085 15:37:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:54.085 Malloc1 00:19:54.086 15:37:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:19:54.346 15:37:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:19:54.607 15:37:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:19:54.869 15:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:54.869 15:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:19:54.869 15:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:54.869 Malloc2 00:19:54.869 15:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:19:55.129 15:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:19:55.389 15:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:19:55.389 15:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:19:55.389 15:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3124385 00:19:55.389 15:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 3124385 ']' 00:19:55.389 15:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 3124385 00:19:55.389 15:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:19:55.389 15:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:55.389 15:37:34 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3124385 00:19:55.649 15:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:55.649 15:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:55.649 15:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3124385' 00:19:55.649 killing process with pid 3124385 00:19:55.649 15:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 3124385 00:19:55.649 15:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 3124385 00:19:55.649 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:19:55.649 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:19:55.649 00:19:55.649 real 0m50.772s 00:19:55.649 user 3m14.474s 00:19:55.649 sys 0m2.717s 00:19:55.649 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:55.649 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:55.649 ************************************ 00:19:55.649 END TEST nvmf_vfio_user 00:19:55.649 ************************************ 00:19:55.649 15:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:19:55.649 15:37:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:55.649 15:37:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:55.649 15:37:35 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:19:55.649 ************************************ 00:19:55.649 START TEST nvmf_vfio_user_nvme_compliance 00:19:55.649 ************************************ 00:19:55.649 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:19:55.911 * Looking for test storage... 00:19:55.911 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:19:55.911 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:55.911 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # lcov --version 00:19:55.911 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:55.911 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:55.911 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:55.911 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:55.911 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:55.911 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:19:55.911 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:19:55.911 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:19:55.911 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:19:55.911 15:37:35 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:19:55.911 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:19:55.911 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:19:55.911 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:55.911 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:19:55.911 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:19:55.911 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:55.911 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:55.911 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:55.912 15:37:35 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:55.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:55.912 --rc genhtml_branch_coverage=1 00:19:55.912 --rc genhtml_function_coverage=1 00:19:55.912 --rc genhtml_legend=1 00:19:55.912 --rc geninfo_all_blocks=1 00:19:55.912 --rc geninfo_unexecuted_blocks=1 00:19:55.912 00:19:55.912 ' 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:55.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:55.912 --rc genhtml_branch_coverage=1 00:19:55.912 --rc genhtml_function_coverage=1 00:19:55.912 --rc genhtml_legend=1 00:19:55.912 --rc geninfo_all_blocks=1 00:19:55.912 --rc geninfo_unexecuted_blocks=1 00:19:55.912 00:19:55.912 ' 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:55.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:55.912 --rc genhtml_branch_coverage=1 00:19:55.912 --rc genhtml_function_coverage=1 00:19:55.912 --rc 
genhtml_legend=1 00:19:55.912 --rc geninfo_all_blocks=1 00:19:55.912 --rc geninfo_unexecuted_blocks=1 00:19:55.912 00:19:55.912 ' 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:55.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:55.912 --rc genhtml_branch_coverage=1 00:19:55.912 --rc genhtml_function_coverage=1 00:19:55.912 --rc genhtml_legend=1 00:19:55.912 --rc geninfo_all_blocks=1 00:19:55.912 --rc geninfo_unexecuted_blocks=1 00:19:55.912 00:19:55.912 ' 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.912 15:37:35 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:55.912 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:55.912 15:37:35 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3125201 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3125201' 00:19:55.912 Process pid: 3125201 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3125201 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 3125201 ']' 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:55.912 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:55.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:55.913 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:55.913 15:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:56.174 [2024-10-01 15:37:35.379452] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:19:56.174 [2024-10-01 15:37:35.379508] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:56.174 [2024-10-01 15:37:35.411740] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:56.174 [2024-10-01 15:37:35.459394] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:56.174 [2024-10-01 15:37:35.496324] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:56.174 [2024-10-01 15:37:35.496370] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:56.174 [2024-10-01 15:37:35.496378] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:56.174 [2024-10-01 15:37:35.496383] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:56.174 [2024-10-01 15:37:35.496388] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:56.174 [2024-10-01 15:37:35.498932] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:56.174 [2024-10-01 15:37:35.499213] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:56.174 [2024-10-01 15:37:35.499213] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:19:56.746 15:37:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:56.746 15:37:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:19:56.746 15:37:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:19:58.132 15:37:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:58.132 15:37:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:19:58.132 15:37:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:58.132 15:37:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.132 15:37:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:58.132 15:37:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.132 15:37:37 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:19:58.132 15:37:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:58.132 15:37:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.132 15:37:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:58.132 malloc0 00:19:58.132 15:37:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.132 15:37:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:19:58.132 15:37:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.132 15:37:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:58.133 15:37:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.133 15:37:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:58.133 15:37:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.133 15:37:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:58.133 15:37:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.133 15:37:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:58.133 15:37:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.133 15:37:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:58.133 15:37:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.133 15:37:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:19:58.133 00:19:58.133 00:19:58.133 CUnit - A unit testing framework for C - Version 2.1-3 00:19:58.133 http://cunit.sourceforge.net/ 00:19:58.133 00:19:58.133 00:19:58.133 Suite: nvme_compliance 00:19:58.133 Test: admin_identify_ctrlr_verify_dptr ...[2024-10-01 15:37:37.414348] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:58.133 [2024-10-01 15:37:37.415638] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:19:58.133 [2024-10-01 15:37:37.415650] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:19:58.133 [2024-10-01 15:37:37.415655] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:19:58.133 [2024-10-01 15:37:37.417367] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:58.133 passed 00:19:58.133 Test: admin_identify_ctrlr_verify_fused ...[2024-10-01 15:37:37.495883] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:58.133 [2024-10-01 15:37:37.498907] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:58.133 passed 00:19:58.133 Test: admin_identify_ns ...[2024-10-01 
15:37:37.576491] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:58.397 [2024-10-01 15:37:37.635900] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:19:58.397 [2024-10-01 15:37:37.643903] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:19:58.397 [2024-10-01 15:37:37.664981] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:58.397 passed 00:19:58.397 Test: admin_get_features_mandatory_features ...[2024-10-01 15:37:37.739098] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:58.397 [2024-10-01 15:37:37.742122] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:58.397 passed 00:19:58.398 Test: admin_get_features_optional_features ...[2024-10-01 15:37:37.818597] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:58.398 [2024-10-01 15:37:37.821614] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:58.398 passed 00:19:58.659 Test: admin_set_features_number_of_queues ...[2024-10-01 15:37:37.898378] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:58.659 [2024-10-01 15:37:38.002976] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:58.659 passed 00:19:58.659 Test: admin_get_log_page_mandatory_logs ...[2024-10-01 15:37:38.076261] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:58.659 [2024-10-01 15:37:38.079278] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:58.659 passed 00:19:58.920 Test: admin_get_log_page_with_lpo ...[2024-10-01 15:37:38.155018] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:58.920 [2024-10-01 15:37:38.224903] ctrlr.c:2697:nvmf_ctrlr_get_log_page: 
*ERROR*: Get log page: offset (516) > len (512) 00:19:58.920 [2024-10-01 15:37:38.237948] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:58.920 passed 00:19:58.920 Test: fabric_property_get ...[2024-10-01 15:37:38.311219] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:58.920 [2024-10-01 15:37:38.312423] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:19:58.920 [2024-10-01 15:37:38.314242] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:58.920 passed 00:19:59.180 Test: admin_delete_io_sq_use_admin_qid ...[2024-10-01 15:37:38.390715] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:59.180 [2024-10-01 15:37:38.391920] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:19:59.180 [2024-10-01 15:37:38.393740] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:59.180 passed 00:19:59.180 Test: admin_delete_io_sq_delete_sq_twice ...[2024-10-01 15:37:38.469443] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:59.180 [2024-10-01 15:37:38.552905] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:59.180 [2024-10-01 15:37:38.568904] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:59.180 [2024-10-01 15:37:38.573974] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:59.180 passed 00:19:59.442 Test: admin_delete_io_cq_use_admin_qid ...[2024-10-01 15:37:38.648089] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:59.442 [2024-10-01 15:37:38.649298] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:19:59.442 [2024-10-01 15:37:38.651109] 
vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:59.442 passed 00:19:59.442 Test: admin_delete_io_cq_delete_cq_first ...[2024-10-01 15:37:38.727828] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:59.442 [2024-10-01 15:37:38.804899] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:59.442 [2024-10-01 15:37:38.828900] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:59.442 [2024-10-01 15:37:38.833968] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:59.442 passed 00:19:59.703 Test: admin_create_io_cq_verify_iv_pc ...[2024-10-01 15:37:38.906250] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:59.703 [2024-10-01 15:37:38.907449] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:19:59.703 [2024-10-01 15:37:38.907468] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:19:59.703 [2024-10-01 15:37:38.909271] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:59.703 passed 00:19:59.703 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-10-01 15:37:38.986017] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:59.703 [2024-10-01 15:37:39.079903] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:19:59.703 [2024-10-01 15:37:39.087901] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:19:59.703 [2024-10-01 15:37:39.095901] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:19:59.703 [2024-10-01 15:37:39.103900] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:19:59.703 [2024-10-01 15:37:39.132962] 
vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:59.964 passed 00:19:59.965 Test: admin_create_io_sq_verify_pc ...[2024-10-01 15:37:39.209667] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:59.965 [2024-10-01 15:37:39.225905] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:19:59.965 [2024-10-01 15:37:39.243822] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:59.965 passed 00:19:59.965 Test: admin_create_io_qp_max_qps ...[2024-10-01 15:37:39.318311] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:01.352 [2024-10-01 15:37:40.439903] nvme_ctrlr.c:5504:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:20:01.613 [2024-10-01 15:37:40.831033] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:01.613 passed 00:20:01.613 Test: admin_create_io_sq_shared_cq ...[2024-10-01 15:37:40.905283] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:01.613 [2024-10-01 15:37:41.037901] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:20:01.874 [2024-10-01 15:37:41.074949] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:01.874 passed 00:20:01.874 00:20:01.874 Run Summary: Type Total Ran Passed Failed Inactive 00:20:01.874 suites 1 1 n/a 0 0 00:20:01.874 tests 18 18 18 0 0 00:20:01.874 asserts 360 360 360 0 n/a 00:20:01.874 00:20:01.874 Elapsed time = 1.506 seconds 00:20:01.874 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3125201 00:20:01.874 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 3125201 ']' 00:20:01.874 15:37:41 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 3125201 00:20:01.874 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:20:01.874 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:01.874 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3125201 00:20:01.874 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:01.874 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:01.874 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3125201' 00:20:01.874 killing process with pid 3125201 00:20:01.874 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 3125201 00:20:01.874 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 3125201 00:20:01.874 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:20:01.874 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:20:01.874 00:20:01.874 real 0m6.224s 00:20:01.874 user 0m17.595s 00:20:01.874 sys 0m0.564s 00:20:01.874 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:01.874 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:01.874 ************************************ 00:20:01.874 END TEST 
nvmf_vfio_user_nvme_compliance 00:20:01.874 ************************************ 00:20:02.136 15:37:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:20:02.136 15:37:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:02.136 15:37:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:02.136 15:37:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:02.136 ************************************ 00:20:02.136 START TEST nvmf_vfio_user_fuzz 00:20:02.136 ************************************ 00:20:02.136 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:20:02.136 * Looking for test storage... 00:20:02.136 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:02.136 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:02.136 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:20:02.136 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:02.136 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:02.136 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:02.136 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:02.136 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:02.136 15:37:41 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:20:02.136 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:20:02.136 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:20:02.136 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:20:02.137 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:20:02.137 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:20:02.137 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:20:02.137 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:02.137 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:20:02.137 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:20:02.137 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:02.137 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:02.137 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:20:02.137 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:20:02.137 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:02.137 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:20:02.137 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:20:02.137 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:20:02.137 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:20:02.137 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:02.137 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:20:02.137 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:20:02.137 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:02.137 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:02.137 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:20:02.137 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:02.137 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:02.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:02.137 --rc genhtml_branch_coverage=1 00:20:02.137 --rc genhtml_function_coverage=1 00:20:02.137 --rc genhtml_legend=1 00:20:02.137 --rc 
geninfo_all_blocks=1 00:20:02.137 --rc geninfo_unexecuted_blocks=1 00:20:02.137 00:20:02.137 ' 00:20:02.137 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:02.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:02.137 --rc genhtml_branch_coverage=1 00:20:02.137 --rc genhtml_function_coverage=1 00:20:02.137 --rc genhtml_legend=1 00:20:02.137 --rc geninfo_all_blocks=1 00:20:02.137 --rc geninfo_unexecuted_blocks=1 00:20:02.137 00:20:02.137 ' 00:20:02.137 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:02.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:02.137 --rc genhtml_branch_coverage=1 00:20:02.137 --rc genhtml_function_coverage=1 00:20:02.137 --rc genhtml_legend=1 00:20:02.137 --rc geninfo_all_blocks=1 00:20:02.137 --rc geninfo_unexecuted_blocks=1 00:20:02.137 00:20:02.137 ' 00:20:02.137 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:02.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:02.137 --rc genhtml_branch_coverage=1 00:20:02.137 --rc genhtml_function_coverage=1 00:20:02.137 --rc genhtml_legend=1 00:20:02.137 --rc geninfo_all_blocks=1 00:20:02.137 --rc geninfo_unexecuted_blocks=1 00:20:02.137 00:20:02.137 ' 00:20:02.137 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:02.137 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:20:02.398 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:02.398 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:02.398 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:20:02.398 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:02.398 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:02.398 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:02.398 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:02.398 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:02.398 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:02.398 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:02.398 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:02.398 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:02.398 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:02.398 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:02.398 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:02.398 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:02.398 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:02.398 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:20:02.398 
15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:02.398 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:02.398 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:02.398 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.399 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.399 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.399 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:20:02.399 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.399 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:20:02.399 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:02.399 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:02.399 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:02.399 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
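The trace above shows `build_nvmf_app_args` appending `-i "$NVMF_APP_SHM_ID" -e 0xFFFF` to the `NVMF_APP` array, and just below it common.sh line 33 trips over `'[' '' -eq 1 ']'` because an empty variable reaches a numeric test. A minimal standalone sketch of that pattern with the empty-string case guarded (my own reconstruction, not the SPDK helper itself; `SOME_FLAG` is a hypothetical stand-in for the unset variable in the trace):

```shell
# Sketch of building an argv array and guarding numeric tests, as in
# build_nvmf_app_args. SOME_FLAG is hypothetical; the guard is the point:
# "[ '' -eq 1 ]" raises "integer expression expected", while defaulting
# with ${var:-0} keeps the test well-formed.
build_app_args_sketch() {
    local shm_id=${NVMF_APP_SHM_ID:-0}
    NVMF_APP=(nvmf_tgt -i "$shm_id" -e 0xFFFF)

    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        NVMF_APP+=(--interrupt-mode)
    fi
}

build_app_args_sketch
echo "${NVMF_APP[@]}"   # → nvmf_tgt -i 0 -e 0xFFFF
```

Because the guard defaults the flag to `0`, the optional argument is only appended when the flag is explicitly set to `1`, and the bash error seen in the log cannot occur.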
00:20:02.399 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:02.399 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:02.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:02.399 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:02.399 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:02.399 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:02.399 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:02.399 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:02.399 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:20:02.399 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:20:02.399 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:20:02.399 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:20:02.399 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:20:02.399 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3126539 00:20:02.399 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3126539' 00:20:02.399 Process pid: 3126539 00:20:02.399 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # 
trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:20:02.399 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:02.399 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3126539 00:20:02.399 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 3126539 ']' 00:20:02.399 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:02.399 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:02.399 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:02.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
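`waitforlisten` above blocks until the freshly launched `nvmf_tgt` (pid 3126539 in this run) is alive and answering on `/var/tmp/spdk.sock`. A simplified sketch of that poll loop (a reconstruction for illustration, not the autotest_common.sh implementation; the socket path is just the default from the log):

```shell
# Simplified waitforlisten-style loop: succeed once the process exists
# and its UNIX-domain RPC socket has appeared; fail if the process dies
# or max_retries is exhausted.
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    for (( i = 0; i < max_retries; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1   # process gone
        [ -S "$rpc_addr" ] && return 0           # socket is listening
        sleep 0.1
    done
    return 1                                     # timed out
}
```

`kill -0` probes process existence without delivering a signal, and `[ -S ]` tests that the path exists and is a socket, which together mirror the "start up and listen on UNIX domain socket" message in the trace.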
00:20:02.399 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:02.399 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:03.342 15:37:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:03.342 15:37:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:20:03.342 15:37:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:20:04.285 15:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:20:04.285 15:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.285 15:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:04.285 15:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.285 15:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:20:04.285 15:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:20:04.285 15:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.285 15:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:04.285 malloc0 00:20:04.285 15:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.285 15:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:20:04.285 15:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.285 15:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:04.285 15:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.285 15:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:20:04.285 15:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.285 15:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:04.285 15:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.285 15:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:20:04.285 15:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.285 15:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:04.285 15:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.285 15:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:20:04.285 15:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:20:36.406 Fuzzing completed. 
Shutting down the fuzz application 00:20:36.406 00:20:36.406 Dumping successful admin opcodes: 00:20:36.406 8, 9, 10, 24, 00:20:36.406 Dumping successful io opcodes: 00:20:36.406 0, 00:20:36.406 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1385510, total successful commands: 5438, random_seed: 2525433792 00:20:36.406 NS: 0x200003a1ef00 admin qp, Total commands completed: 317503, total successful commands: 2554, random_seed: 3709561856 00:20:36.406 15:38:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:20:36.406 15:38:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.406 15:38:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:36.406 15:38:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.406 15:38:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3126539 00:20:36.406 15:38:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 3126539 ']' 00:20:36.406 15:38:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 3126539 00:20:36.406 15:38:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:20:36.406 15:38:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:36.406 15:38:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3126539 00:20:36.406 15:38:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:36.406 15:38:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:36.406 
15:38:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3126539' 00:20:36.406 killing process with pid 3126539 00:20:36.406 15:38:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 3126539 00:20:36.406 15:38:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 3126539 00:20:36.406 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:20:36.406 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:20:36.406 00:20:36.406 real 0m32.824s 00:20:36.406 user 0m36.294s 00:20:36.406 sys 0m24.978s 00:20:36.406 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:36.406 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:36.406 ************************************ 00:20:36.406 END TEST nvmf_vfio_user_fuzz 00:20:36.406 ************************************ 00:20:36.406 15:38:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:36.406 15:38:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:36.406 15:38:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:36.406 15:38:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:36.406 ************************************ 00:20:36.406 START TEST nvmf_auth_target 00:20:36.406 ************************************ 00:20:36.406 15:38:14 
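The `killprocess` teardown at the end of the fuzz test above probes the pid with `kill -0`, resolves the process name via `ps --no-headers -o comm=`, refuses to signal a `sudo` wrapper, then kills and reaps the target. A hedged standalone sketch of that sequence (GNU ps assumed, as in this Linux trace):

```shell
# Sketch of the killprocess teardown pattern from the trace: verify the
# pid is alive, never SIGTERM a sudo wrapper directly, then kill and
# reap it so no zombie is left behind.
killprocess_sketch() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1     # not running
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    [ "$process_name" = sudo ] && return 1     # don't kill sudo itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true            # reap; ignore exit 143
}
```

The `wait` only works for children of the current shell, which matches how the autotest scripts launch `nvmf_tgt` in the background and tear it down from the same shell.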
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:36.406 * Looking for test storage... 00:20:36.406 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:36.406 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:36.406 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lcov --version 00:20:36.406 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:36.406 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:36.406 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:36.406 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:36.406 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 
gt=0 eq=0 v 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1682 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:36.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.407 --rc genhtml_branch_coverage=1 00:20:36.407 --rc genhtml_function_coverage=1 00:20:36.407 --rc genhtml_legend=1 00:20:36.407 --rc geninfo_all_blocks=1 00:20:36.407 --rc geninfo_unexecuted_blocks=1 00:20:36.407 00:20:36.407 ' 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:36.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.407 --rc genhtml_branch_coverage=1 00:20:36.407 --rc genhtml_function_coverage=1 00:20:36.407 --rc genhtml_legend=1 00:20:36.407 --rc geninfo_all_blocks=1 00:20:36.407 --rc geninfo_unexecuted_blocks=1 00:20:36.407 00:20:36.407 ' 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:36.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.407 --rc genhtml_branch_coverage=1 00:20:36.407 --rc genhtml_function_coverage=1 00:20:36.407 --rc genhtml_legend=1 00:20:36.407 --rc geninfo_all_blocks=1 00:20:36.407 --rc geninfo_unexecuted_blocks=1 00:20:36.407 00:20:36.407 ' 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:36.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.407 --rc genhtml_branch_coverage=1 00:20:36.407 --rc genhtml_function_coverage=1 00:20:36.407 --rc genhtml_legend=1 00:20:36.407 --rc geninfo_all_blocks=1 00:20:36.407 --rc geninfo_unexecuted_blocks=1 00:20:36.407 00:20:36.407 ' 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:36.407 15:38:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:36.407 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:36.407 15:38:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:20:36.407 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:20:36.408 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:36.408 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:20:36.408 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:20:36.408 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:20:36.408 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:36.408 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:36.408 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:36.408 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:20:36.408 15:38:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:20:36.408 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:20:36.408 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:20:43.005 15:38:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:43.005 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:43.005 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:20:43.005 15:38:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:43.005 Found net devices under 0000:31:00.0: cvl_0_0 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 
00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:43.005 Found net devices under 0000:31:00.1: cvl_0_1 00:20:43.005 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:20:43.006 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:20:43.006 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # is_hw=yes 00:20:43.006 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:20:43.006 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:20:43.006 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:20:43.006 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:43.006 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:43.006 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:43.006 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:43.006 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:43.006 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:43.006 15:38:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:43.006 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:43.006 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:43.006 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:43.006 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:43.006 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:43.006 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:43.006 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:43.006 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:43.006 15:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:43.006 15:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:43.006 15:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:43.006 15:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:43.006 15:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:43.006 15:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:43.006 15:38:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:43.006 15:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:43.006 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:43.006 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.635 ms 00:20:43.006 00:20:43.006 --- 10.0.0.2 ping statistics --- 00:20:43.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.006 rtt min/avg/max/mdev = 0.635/0.635/0.635/0.000 ms 00:20:43.006 15:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:43.006 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:43.006 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:20:43.006 00:20:43.006 --- 10.0.0.1 ping statistics --- 00:20:43.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.006 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:20:43.006 15:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:43.006 15:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # return 0 00:20:43.006 15:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:20:43.006 15:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:43.006 15:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:20:43.006 15:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:20:43.006 15:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:43.006 15:38:22 
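The trace above shows `nvmf_tcp_init` isolating the target side: one port (`cvl_0_0`) is moved into a fresh network namespace, each side gets an address on 10.0.0.0/24, a firewall rule opens TCP port 4420, and a ping in each direction verifies reachability. A minimal sketch of that pattern is below, with hypothetical interface names `eth0`/`eth1` (the real script derives them from PCI discovery); it needs root to run for real, so `DRY_RUN=1` just prints the commands instead.

```shell
# Sketch of the namespace setup performed by nvmf_tcp_init (hypothetical
# interface names; requires root unless DRY_RUN=1, which only echoes).
setup_target_ns() {
  local tgt_if=$1 ini_if=$2 ns=${3:-spdk_tgt_ns}
  run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "$@"; else "$@"; fi; }

  run ip netns add "$ns"                                  # target namespace
  run ip link set "$tgt_if" netns "$ns"                   # move target port in
  run ip addr add 10.0.0.1/24 dev "$ini_if"               # initiator address
  run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"  # target address
  run ip link set "$ini_if" up
  run ip netns exec "$ns" ip link set "$tgt_if" up
  run ip netns exec "$ns" ip link set lo up
  run iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
  run ping -c 1 10.0.0.2                                  # initiator -> target check
}

DRY_RUN=1 setup_target_ns eth0 eth1
```

With the namespace in place, the target application is then launched under `ip netns exec`, which is why the log prepends `NVMF_TARGET_NS_CMD` to `NVMF_APP` before `nvmfappstart`.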
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:20:43.006 15:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:20:43.006 15:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:20:43.006 15:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:20:43.006 15:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:43.006 15:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.006 15:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=3136582 00:20:43.006 15:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 3136582 00:20:43.006 15:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:20:43.006 15:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3136582 ']' 00:20:43.006 15:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:43.006 15:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:43.006 15:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:43.006 15:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:43.006 15:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.951 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:43.951 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:20:43.951 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:20:43.951 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:43.951 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.951 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:43.951 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3136921 00:20:43.951 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:43.951 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:20:43.951 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:20:43.951 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:20:43.951 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:43.951 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:20:43.951 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@750 -- # digest=null 00:20:43.951 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:20:43.951 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:43.951 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=54bcc16f53e0bb87fd899897384fe4f1a5b262e1930d8604 00:20:43.951 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:20:43.951 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.XZ5 00:20:43.951 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 54bcc16f53e0bb87fd899897384fe4f1a5b262e1930d8604 0 00:20:43.951 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 54bcc16f53e0bb87fd899897384fe4f1a5b262e1930d8604 0 00:20:43.951 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:20:43.951 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:43.951 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=54bcc16f53e0bb87fd899897384fe4f1a5b262e1930d8604 00:20:43.951 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=0 00:20:43.951 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:20:43.951 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.XZ5 00:20:43.951 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.XZ5 00:20:43.951 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.XZ5 00:20:43.951 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:20:43.951 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:20:43.951 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:43.951 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:20:43.951 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:20:43.951 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:20:43.951 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:43.951 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=8b85ca58e7f0f749952562e86e78309b43fde54ce8e92cfcd92aee0ad6eb7ad6 00:20:43.951 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:20:43.951 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.WlP 00:20:43.951 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 8b85ca58e7f0f749952562e86e78309b43fde54ce8e92cfcd92aee0ad6eb7ad6 3 00:20:43.951 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 8b85ca58e7f0f749952562e86e78309b43fde54ce8e92cfcd92aee0ad6eb7ad6 3 00:20:43.951 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:20:43.951 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:43.951 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=8b85ca58e7f0f749952562e86e78309b43fde54ce8e92cfcd92aee0ad6eb7ad6 00:20:43.951 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@728 -- # digest=3 00:20:43.951 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:20:43.951 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.WlP 00:20:43.951 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.WlP 00:20:43.951 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.WlP 00:20:43.951 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:20:43.951 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:20:43.951 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:43.951 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:20:43.951 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 00:20:43.951 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:20:43.952 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:43.952 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=1b0054487154fdd0afc16c2f61ebf19f 00:20:43.952 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:20:43.952 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.gp8 00:20:43.952 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 1b0054487154fdd0afc16c2f61ebf19f 1 00:20:43.952 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 
1b0054487154fdd0afc16c2f61ebf19f 1 00:20:43.952 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:20:43.952 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:43.952 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=1b0054487154fdd0afc16c2f61ebf19f 00:20:43.952 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 00:20:43.952 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:20:43.952 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.gp8 00:20:43.952 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.gp8 00:20:43.952 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.gp8 00:20:43.952 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:20:43.952 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:20:43.952 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:43.952 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:20:43.952 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:20:43.952 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:20:44.214 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:44.214 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=8e531b5cddb5acac58a449fe7e4d153feb282ee23b690b70 00:20:44.214 15:38:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:20:44.214 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.PAK 00:20:44.214 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 8e531b5cddb5acac58a449fe7e4d153feb282ee23b690b70 2 00:20:44.214 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 8e531b5cddb5acac58a449fe7e4d153feb282ee23b690b70 2 00:20:44.214 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:20:44.214 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:44.214 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=8e531b5cddb5acac58a449fe7e4d153feb282ee23b690b70 00:20:44.214 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:20:44.214 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:20:44.214 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.PAK 00:20:44.214 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.PAK 00:20:44.215 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.PAK 00:20:44.215 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:20:44.215 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:20:44.215 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:44.215 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A 
digests 00:20:44.215 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:20:44.215 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:20:44.215 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:44.215 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=8371a1c0afca068bb1ca5900efd942f5ce71aa898fab0ec7 00:20:44.215 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:20:44.215 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.g40 00:20:44.215 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 8371a1c0afca068bb1ca5900efd942f5ce71aa898fab0ec7 2 00:20:44.215 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 8371a1c0afca068bb1ca5900efd942f5ce71aa898fab0ec7 2 00:20:44.215 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:20:44.215 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:44.215 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=8371a1c0afca068bb1ca5900efd942f5ce71aa898fab0ec7 00:20:44.215 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:20:44.215 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:20:44.215 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.g40 00:20:44.215 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.g40 00:20:44.215 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.g40 00:20:44.215 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:20:44.215 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:20:44.215 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:44.215 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:20:44.215 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 00:20:44.215 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:20:44.215 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:44.215 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=de8c5679ea6e7367f95fc297df7056b0 00:20:44.215 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:20:44.215 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.uYK 00:20:44.215 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key de8c5679ea6e7367f95fc297df7056b0 1 00:20:44.215 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 de8c5679ea6e7367f95fc297df7056b0 1 00:20:44.215 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:20:44.215 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:44.215 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=de8c5679ea6e7367f95fc297df7056b0 00:20:44.215 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 
00:20:44.215 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:20:44.215 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.uYK 00:20:44.215 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.uYK 00:20:44.215 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.uYK 00:20:44.215 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:20:44.215 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:20:44.215 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:44.215 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:20:44.215 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:20:44.215 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:20:44.215 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:44.215 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=c95d82a9cb8f46be33240c57618120b69d149badebd7ced1668eb95bc215ff5e 00:20:44.215 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:20:44.215 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.2Es 00:20:44.215 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key c95d82a9cb8f46be33240c57618120b69d149badebd7ced1668eb95bc215ff5e 3 00:20:44.215 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # 
format_key DHHC-1 c95d82a9cb8f46be33240c57618120b69d149badebd7ced1668eb95bc215ff5e 3 00:20:44.215 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:20:44.215 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:44.215 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=c95d82a9cb8f46be33240c57618120b69d149badebd7ced1668eb95bc215ff5e 00:20:44.215 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=3 00:20:44.215 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:20:44.215 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.2Es 00:20:44.492 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.2Es 00:20:44.492 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.2Es 00:20:44.492 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:20:44.492 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3136582 00:20:44.492 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3136582 ']' 00:20:44.492 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:44.492 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:44.492 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:44.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:44.492 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:44.492 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.492 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:44.492 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:20:44.492 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3136921 /var/tmp/host.sock 00:20:44.492 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3136921 ']' 00:20:44.492 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:20:44.492 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:44.492 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:20:44.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:20:44.492 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:44.492 15:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.817 15:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:44.817 15:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:20:44.817 15:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:20:44.817 15:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.817 15:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.817 15:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.817 15:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:44.817 15:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.XZ5 00:20:44.817 15:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.817 15:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.817 15:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.817 15:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.XZ5 00:20:44.817 15:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.XZ5 00:20:45.143 15:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.WlP ]] 00:20:45.143 15:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.WlP 00:20:45.143 15:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.143 15:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.143 15:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.143 15:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.WlP 00:20:45.143 15:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.WlP 00:20:45.143 15:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:45.143 15:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.gp8 00:20:45.143 15:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.143 15:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.143 15:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.143 15:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.gp8 00:20:45.143 15:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.gp8 00:20:45.469 15:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.PAK ]] 00:20:45.469 15:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.PAK 00:20:45.469 15:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.469 15:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.469 15:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.469 15:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.PAK 00:20:45.469 15:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.PAK 00:20:45.469 15:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:45.469 15:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.g40 00:20:45.469 15:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.469 15:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.469 15:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.469 15:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.g40 00:20:45.469 15:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.g40 00:20:45.753 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.uYK ]] 00:20:45.753 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.uYK 00:20:45.753 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.753 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.753 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.753 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.uYK 00:20:45.753 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.uYK 00:20:46.066 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:46.066 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.2Es 00:20:46.066 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.066 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.066 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.066 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.2Es 00:20:46.066 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.2Es 00:20:46.066 15:38:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:20:46.066 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:46.066 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:46.066 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:46.066 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:46.066 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:46.382 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:20:46.382 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:46.382 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:46.383 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:46.383 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:46.383 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.383 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.383 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.383 15:38:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.383 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.383 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.383 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.383 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.673 00:20:46.673 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:46.673 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:46.673 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.673 15:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.673 15:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.673 15:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.673 15:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:46.949 15:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.949 15:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:46.949 { 00:20:46.949 "cntlid": 1, 00:20:46.949 "qid": 0, 00:20:46.949 "state": "enabled", 00:20:46.949 "thread": "nvmf_tgt_poll_group_000", 00:20:46.949 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:46.949 "listen_address": { 00:20:46.949 "trtype": "TCP", 00:20:46.949 "adrfam": "IPv4", 00:20:46.949 "traddr": "10.0.0.2", 00:20:46.949 "trsvcid": "4420" 00:20:46.949 }, 00:20:46.949 "peer_address": { 00:20:46.949 "trtype": "TCP", 00:20:46.949 "adrfam": "IPv4", 00:20:46.949 "traddr": "10.0.0.1", 00:20:46.949 "trsvcid": "33530" 00:20:46.949 }, 00:20:46.949 "auth": { 00:20:46.949 "state": "completed", 00:20:46.949 "digest": "sha256", 00:20:46.949 "dhgroup": "null" 00:20:46.949 } 00:20:46.949 } 00:20:46.949 ]' 00:20:46.949 15:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:46.949 15:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:46.949 15:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:46.949 15:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:46.949 15:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:46.949 15:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.949 15:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.949 15:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.216 15:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTRiY2MxNmY1M2UwYmI4N2ZkODk5ODk3Mzg0ZmU0ZjFhNWIyNjJlMTkzMGQ4NjA06mlI8w==: --dhchap-ctrl-secret DHHC-1:03:OGI4NWNhNThlN2YwZjc0OTk1MjU2MmU4NmU3ODMwOWI0M2ZkZTU0Y2U4ZTkyY2ZjZDkyYWVlMGFkNmViN2FkNqls0BI=: 00:20:47.216 15:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NTRiY2MxNmY1M2UwYmI4N2ZkODk5ODk3Mzg0ZmU0ZjFhNWIyNjJlMTkzMGQ4NjA06mlI8w==: --dhchap-ctrl-secret DHHC-1:03:OGI4NWNhNThlN2YwZjc0OTk1MjU2MmU4NmU3ODMwOWI0M2ZkZTU0Y2U4ZTkyY2ZjZDkyYWVlMGFkNmViN2FkNqls0BI=: 00:20:47.786 15:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.786 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.786 15:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:47.786 15:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.786 15:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.786 15:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.786 15:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:47.786 15:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:20:47.786 15:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:48.047 15:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:20:48.047 15:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:48.047 15:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:48.047 15:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:48.047 15:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:48.047 15:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.047 15:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.047 15:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.047 15:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.047 15:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.048 15:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.048 15:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.048 15:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.309 00:20:48.309 15:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:48.309 15:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:48.309 15:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.309 15:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.309 15:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.309 15:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.309 15:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.309 15:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.309 15:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:48.309 { 00:20:48.309 "cntlid": 3, 00:20:48.309 "qid": 0, 00:20:48.309 "state": "enabled", 00:20:48.309 "thread": "nvmf_tgt_poll_group_000", 00:20:48.309 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:48.309 "listen_address": { 00:20:48.309 "trtype": "TCP", 00:20:48.309 "adrfam": "IPv4", 00:20:48.309 
"traddr": "10.0.0.2", 00:20:48.309 "trsvcid": "4420" 00:20:48.309 }, 00:20:48.309 "peer_address": { 00:20:48.309 "trtype": "TCP", 00:20:48.309 "adrfam": "IPv4", 00:20:48.309 "traddr": "10.0.0.1", 00:20:48.309 "trsvcid": "33558" 00:20:48.309 }, 00:20:48.309 "auth": { 00:20:48.309 "state": "completed", 00:20:48.309 "digest": "sha256", 00:20:48.309 "dhgroup": "null" 00:20:48.309 } 00:20:48.309 } 00:20:48.309 ]' 00:20:48.309 15:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:48.570 15:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:48.570 15:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:48.570 15:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:48.570 15:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:48.570 15:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.570 15:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.570 15:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.831 15:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWIwMDU0NDg3MTU0ZmRkMGFmYzE2YzJmNjFlYmYxOWYg1wvC: --dhchap-ctrl-secret DHHC-1:02:OGU1MzFiNWNkZGI1YWNhYzU4YTQ0OWZlN2U0ZDE1M2ZlYjI4MmVlMjNiNjkwYjcwmEeucA==: 00:20:48.831 15:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
--hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MWIwMDU0NDg3MTU0ZmRkMGFmYzE2YzJmNjFlYmYxOWYg1wvC: --dhchap-ctrl-secret DHHC-1:02:OGU1MzFiNWNkZGI1YWNhYzU4YTQ0OWZlN2U0ZDE1M2ZlYjI4MmVlMjNiNjkwYjcwmEeucA==: 00:20:49.403 15:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.403 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.403 15:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:49.403 15:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.403 15:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.403 15:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.403 15:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:49.403 15:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:49.403 15:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:49.664 15:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:20:49.664 15:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:49.664 15:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:49.664 15:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:20:49.664 15:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:49.664 15:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.664 15:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.664 15:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.664 15:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.664 15:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.664 15:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.664 15:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.664 15:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.924 00:20:49.924 15:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:49.924 15:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:49.924 
15:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.183 15:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.183 15:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.183 15:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.183 15:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.183 15:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.183 15:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:50.183 { 00:20:50.183 "cntlid": 5, 00:20:50.183 "qid": 0, 00:20:50.183 "state": "enabled", 00:20:50.183 "thread": "nvmf_tgt_poll_group_000", 00:20:50.183 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:50.183 "listen_address": { 00:20:50.183 "trtype": "TCP", 00:20:50.183 "adrfam": "IPv4", 00:20:50.183 "traddr": "10.0.0.2", 00:20:50.183 "trsvcid": "4420" 00:20:50.183 }, 00:20:50.183 "peer_address": { 00:20:50.183 "trtype": "TCP", 00:20:50.183 "adrfam": "IPv4", 00:20:50.183 "traddr": "10.0.0.1", 00:20:50.183 "trsvcid": "58994" 00:20:50.183 }, 00:20:50.183 "auth": { 00:20:50.183 "state": "completed", 00:20:50.183 "digest": "sha256", 00:20:50.183 "dhgroup": "null" 00:20:50.183 } 00:20:50.183 } 00:20:50.183 ]' 00:20:50.183 15:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:50.183 15:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:50.183 15:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:20:50.184 15:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:50.184 15:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:50.184 15:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.184 15:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.184 15:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.443 15:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODM3MWExYzBhZmNhMDY4YmIxY2E1OTAwZWZkOTQyZjVjZTcxYWE4OThmYWIwZWM3ve4lJA==: --dhchap-ctrl-secret DHHC-1:01:ZGU4YzU2NzllYTZlNzM2N2Y5NWZjMjk3ZGY3MDU2YjAlxWiU: 00:20:50.443 15:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ODM3MWExYzBhZmNhMDY4YmIxY2E1OTAwZWZkOTQyZjVjZTcxYWE4OThmYWIwZWM3ve4lJA==: --dhchap-ctrl-secret DHHC-1:01:ZGU4YzU2NzllYTZlNzM2N2Y5NWZjMjk3ZGY3MDU2YjAlxWiU: 00:20:51.012 15:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.012 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.012 15:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:51.012 15:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.012 15:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.012 15:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.012 15:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:51.012 15:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:51.012 15:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:51.271 15:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:20:51.271 15:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:51.271 15:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:51.271 15:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:51.271 15:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:51.271 15:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.271 15:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:51.271 15:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.271 15:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:20:51.271 15:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.271 15:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:51.271 15:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:51.271 15:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:51.531 00:20:51.531 15:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:51.531 15:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.531 15:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:51.790 15:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.790 15:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.790 15:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.791 15:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.791 15:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.791 
15:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:51.791 { 00:20:51.791 "cntlid": 7, 00:20:51.791 "qid": 0, 00:20:51.791 "state": "enabled", 00:20:51.791 "thread": "nvmf_tgt_poll_group_000", 00:20:51.791 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:51.791 "listen_address": { 00:20:51.791 "trtype": "TCP", 00:20:51.791 "adrfam": "IPv4", 00:20:51.791 "traddr": "10.0.0.2", 00:20:51.791 "trsvcid": "4420" 00:20:51.791 }, 00:20:51.791 "peer_address": { 00:20:51.791 "trtype": "TCP", 00:20:51.791 "adrfam": "IPv4", 00:20:51.791 "traddr": "10.0.0.1", 00:20:51.791 "trsvcid": "59012" 00:20:51.791 }, 00:20:51.791 "auth": { 00:20:51.791 "state": "completed", 00:20:51.791 "digest": "sha256", 00:20:51.791 "dhgroup": "null" 00:20:51.791 } 00:20:51.791 } 00:20:51.791 ]' 00:20:51.791 15:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:51.791 15:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:51.791 15:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:51.791 15:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:51.791 15:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:51.791 15:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.791 15:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.791 15:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.051 15:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yzk1ZDgyYTljYjhmNDZiZTMzMjQwYzU3NjE4MTIwYjY5ZDE0OWJhZGViZDdjZWQxNjY4ZWI5NWJjMjE1ZmY1Zbvp+Ws=: 00:20:52.051 15:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:Yzk1ZDgyYTljYjhmNDZiZTMzMjQwYzU3NjE4MTIwYjY5ZDE0OWJhZGViZDdjZWQxNjY4ZWI5NWJjMjE1ZmY1Zbvp+Ws=: 00:20:52.649 15:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.649 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.649 15:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:52.649 15:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.649 15:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.649 15:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.649 15:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:52.649 15:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:52.649 15:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:52.649 15:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:20:52.910 15:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:20:52.910 15:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:52.910 15:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:52.910 15:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:52.910 15:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:52.910 15:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.910 15:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.910 15:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.910 15:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.910 15:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.910 15:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.910 15:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.910 15:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.171 00:20:53.171 15:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:53.171 15:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:53.171 15:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.433 15:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.433 15:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.433 15:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.433 15:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.433 15:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.433 15:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:53.433 { 00:20:53.433 "cntlid": 9, 00:20:53.433 "qid": 0, 00:20:53.433 "state": "enabled", 00:20:53.433 "thread": "nvmf_tgt_poll_group_000", 00:20:53.433 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:53.433 "listen_address": { 00:20:53.433 "trtype": "TCP", 00:20:53.433 "adrfam": "IPv4", 00:20:53.433 "traddr": "10.0.0.2", 00:20:53.433 "trsvcid": "4420" 00:20:53.433 }, 00:20:53.433 "peer_address": { 00:20:53.433 "trtype": "TCP", 00:20:53.433 "adrfam": "IPv4", 00:20:53.433 "traddr": "10.0.0.1", 00:20:53.433 "trsvcid": "59032" 00:20:53.433 
}, 00:20:53.433 "auth": { 00:20:53.433 "state": "completed", 00:20:53.433 "digest": "sha256", 00:20:53.433 "dhgroup": "ffdhe2048" 00:20:53.433 } 00:20:53.433 } 00:20:53.433 ]' 00:20:53.433 15:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:53.433 15:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:53.433 15:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:53.433 15:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:53.433 15:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:53.433 15:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.433 15:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.433 15:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.694 15:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTRiY2MxNmY1M2UwYmI4N2ZkODk5ODk3Mzg0ZmU0ZjFhNWIyNjJlMTkzMGQ4NjA06mlI8w==: --dhchap-ctrl-secret DHHC-1:03:OGI4NWNhNThlN2YwZjc0OTk1MjU2MmU4NmU3ODMwOWI0M2ZkZTU0Y2U4ZTkyY2ZjZDkyYWVlMGFkNmViN2FkNqls0BI=: 00:20:53.694 15:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NTRiY2MxNmY1M2UwYmI4N2ZkODk5ODk3Mzg0ZmU0ZjFhNWIyNjJlMTkzMGQ4NjA06mlI8w==: --dhchap-ctrl-secret 
DHHC-1:03:OGI4NWNhNThlN2YwZjc0OTk1MjU2MmU4NmU3ODMwOWI0M2ZkZTU0Y2U4ZTkyY2ZjZDkyYWVlMGFkNmViN2FkNqls0BI=: 00:20:54.265 15:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.265 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.265 15:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:54.265 15:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.265 15:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.265 15:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.265 15:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:54.265 15:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:54.265 15:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:54.526 15:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:20:54.526 15:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:54.526 15:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:54.526 15:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:54.526 15:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:20:54.526 15:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.526 15:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.526 15:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.526 15:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.526 15:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.526 15:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.526 15:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.526 15:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.785 00:20:54.785 15:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:54.785 15:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:54.785 15:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.046 15:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.046 15:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.046 15:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.046 15:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.046 15:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.046 15:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:55.046 { 00:20:55.046 "cntlid": 11, 00:20:55.046 "qid": 0, 00:20:55.046 "state": "enabled", 00:20:55.046 "thread": "nvmf_tgt_poll_group_000", 00:20:55.046 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:55.046 "listen_address": { 00:20:55.046 "trtype": "TCP", 00:20:55.046 "adrfam": "IPv4", 00:20:55.046 "traddr": "10.0.0.2", 00:20:55.046 "trsvcid": "4420" 00:20:55.046 }, 00:20:55.046 "peer_address": { 00:20:55.046 "trtype": "TCP", 00:20:55.046 "adrfam": "IPv4", 00:20:55.046 "traddr": "10.0.0.1", 00:20:55.046 "trsvcid": "59070" 00:20:55.046 }, 00:20:55.046 "auth": { 00:20:55.046 "state": "completed", 00:20:55.046 "digest": "sha256", 00:20:55.046 "dhgroup": "ffdhe2048" 00:20:55.046 } 00:20:55.046 } 00:20:55.046 ]' 00:20:55.046 15:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:55.046 15:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:55.046 15:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:55.046 15:38:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:55.046 15:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:55.046 15:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.046 15:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.046 15:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.307 15:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWIwMDU0NDg3MTU0ZmRkMGFmYzE2YzJmNjFlYmYxOWYg1wvC: --dhchap-ctrl-secret DHHC-1:02:OGU1MzFiNWNkZGI1YWNhYzU4YTQ0OWZlN2U0ZDE1M2ZlYjI4MmVlMjNiNjkwYjcwmEeucA==: 00:20:55.307 15:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MWIwMDU0NDg3MTU0ZmRkMGFmYzE2YzJmNjFlYmYxOWYg1wvC: --dhchap-ctrl-secret DHHC-1:02:OGU1MzFiNWNkZGI1YWNhYzU4YTQ0OWZlN2U0ZDE1M2ZlYjI4MmVlMjNiNjkwYjcwmEeucA==: 00:20:55.877 15:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.877 15:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:55.877 15:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:55.877 15:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.877 15:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.877 15:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:55.877 15:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:55.877 15:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:56.138 15:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:20:56.138 15:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:56.138 15:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:56.138 15:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:56.138 15:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:56.138 15:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.138 15:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.138 15:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.138 15:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:20:56.138 15:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.138 15:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.138 15:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.138 15:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.399 00:20:56.399 15:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:56.399 15:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:56.399 15:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.659 15:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.659 15:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.659 15:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.659 15:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.659 15:38:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.659 15:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:56.659 { 00:20:56.659 "cntlid": 13, 00:20:56.659 "qid": 0, 00:20:56.659 "state": "enabled", 00:20:56.659 "thread": "nvmf_tgt_poll_group_000", 00:20:56.659 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:56.659 "listen_address": { 00:20:56.659 "trtype": "TCP", 00:20:56.659 "adrfam": "IPv4", 00:20:56.659 "traddr": "10.0.0.2", 00:20:56.659 "trsvcid": "4420" 00:20:56.659 }, 00:20:56.659 "peer_address": { 00:20:56.659 "trtype": "TCP", 00:20:56.659 "adrfam": "IPv4", 00:20:56.659 "traddr": "10.0.0.1", 00:20:56.659 "trsvcid": "59088" 00:20:56.659 }, 00:20:56.659 "auth": { 00:20:56.659 "state": "completed", 00:20:56.659 "digest": "sha256", 00:20:56.659 "dhgroup": "ffdhe2048" 00:20:56.659 } 00:20:56.659 } 00:20:56.659 ]' 00:20:56.659 15:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:56.659 15:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:56.659 15:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:56.659 15:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:56.659 15:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:56.659 15:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.659 15:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.659 15:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.921 15:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODM3MWExYzBhZmNhMDY4YmIxY2E1OTAwZWZkOTQyZjVjZTcxYWE4OThmYWIwZWM3ve4lJA==: --dhchap-ctrl-secret DHHC-1:01:ZGU4YzU2NzllYTZlNzM2N2Y5NWZjMjk3ZGY3MDU2YjAlxWiU: 00:20:56.921 15:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ODM3MWExYzBhZmNhMDY4YmIxY2E1OTAwZWZkOTQyZjVjZTcxYWE4OThmYWIwZWM3ve4lJA==: --dhchap-ctrl-secret DHHC-1:01:ZGU4YzU2NzllYTZlNzM2N2Y5NWZjMjk3ZGY3MDU2YjAlxWiU: 00:20:57.492 15:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.492 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.492 15:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:57.492 15:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.492 15:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.492 15:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.492 15:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:57.492 15:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:57.492 15:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:57.753 15:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:20:57.753 15:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:57.753 15:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:57.753 15:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:57.753 15:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:57.753 15:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.753 15:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:57.753 15:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.753 15:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.753 15:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.753 15:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:57.753 15:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:57.753 15:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:58.013 00:20:58.013 15:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:58.013 15:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:58.013 15:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.273 15:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.273 15:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.273 15:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.274 15:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.274 15:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.274 15:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:58.274 { 00:20:58.274 "cntlid": 15, 00:20:58.274 "qid": 0, 00:20:58.274 "state": "enabled", 00:20:58.274 "thread": "nvmf_tgt_poll_group_000", 00:20:58.274 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:58.274 "listen_address": { 00:20:58.274 "trtype": "TCP", 00:20:58.274 "adrfam": "IPv4", 00:20:58.274 "traddr": "10.0.0.2", 00:20:58.274 "trsvcid": "4420" 00:20:58.274 }, 00:20:58.274 "peer_address": { 00:20:58.274 "trtype": "TCP", 00:20:58.274 "adrfam": "IPv4", 00:20:58.274 "traddr": "10.0.0.1", 
00:20:58.274 "trsvcid": "59110" 00:20:58.274 }, 00:20:58.274 "auth": { 00:20:58.274 "state": "completed", 00:20:58.274 "digest": "sha256", 00:20:58.274 "dhgroup": "ffdhe2048" 00:20:58.274 } 00:20:58.274 } 00:20:58.274 ]' 00:20:58.274 15:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:58.274 15:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:58.274 15:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:58.274 15:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:58.274 15:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:58.274 15:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.274 15:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.274 15:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.534 15:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yzk1ZDgyYTljYjhmNDZiZTMzMjQwYzU3NjE4MTIwYjY5ZDE0OWJhZGViZDdjZWQxNjY4ZWI5NWJjMjE1ZmY1Zbvp+Ws=: 00:20:58.534 15:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:Yzk1ZDgyYTljYjhmNDZiZTMzMjQwYzU3NjE4MTIwYjY5ZDE0OWJhZGViZDdjZWQxNjY4ZWI5NWJjMjE1ZmY1Zbvp+Ws=: 00:20:59.105 15:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.105 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.105 15:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:59.105 15:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.105 15:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.105 15:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.105 15:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:59.105 15:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:59.105 15:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:59.105 15:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:59.366 15:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:20:59.366 15:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:59.366 15:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:59.366 15:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:59.366 15:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:59.366 15:38:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.367 15:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.367 15:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.367 15:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.367 15:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.367 15:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.367 15:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.367 15:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.627 00:20:59.627 15:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:59.627 15:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:59.627 15:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.887 15:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.887 15:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.887 15:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.887 15:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.887 15:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.887 15:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:59.887 { 00:20:59.887 "cntlid": 17, 00:20:59.887 "qid": 0, 00:20:59.887 "state": "enabled", 00:20:59.887 "thread": "nvmf_tgt_poll_group_000", 00:20:59.887 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:59.887 "listen_address": { 00:20:59.887 "trtype": "TCP", 00:20:59.887 "adrfam": "IPv4", 00:20:59.887 "traddr": "10.0.0.2", 00:20:59.887 "trsvcid": "4420" 00:20:59.887 }, 00:20:59.887 "peer_address": { 00:20:59.887 "trtype": "TCP", 00:20:59.887 "adrfam": "IPv4", 00:20:59.887 "traddr": "10.0.0.1", 00:20:59.887 "trsvcid": "46066" 00:20:59.887 }, 00:20:59.887 "auth": { 00:20:59.887 "state": "completed", 00:20:59.887 "digest": "sha256", 00:20:59.887 "dhgroup": "ffdhe3072" 00:20:59.887 } 00:20:59.887 } 00:20:59.887 ]' 00:20:59.887 15:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:59.887 15:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:59.887 15:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:59.887 15:38:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:59.887 15:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:59.887 15:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.887 15:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.887 15:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.148 15:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTRiY2MxNmY1M2UwYmI4N2ZkODk5ODk3Mzg0ZmU0ZjFhNWIyNjJlMTkzMGQ4NjA06mlI8w==: --dhchap-ctrl-secret DHHC-1:03:OGI4NWNhNThlN2YwZjc0OTk1MjU2MmU4NmU3ODMwOWI0M2ZkZTU0Y2U4ZTkyY2ZjZDkyYWVlMGFkNmViN2FkNqls0BI=: 00:21:00.148 15:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NTRiY2MxNmY1M2UwYmI4N2ZkODk5ODk3Mzg0ZmU0ZjFhNWIyNjJlMTkzMGQ4NjA06mlI8w==: --dhchap-ctrl-secret DHHC-1:03:OGI4NWNhNThlN2YwZjc0OTk1MjU2MmU4NmU3ODMwOWI0M2ZkZTU0Y2U4ZTkyY2ZjZDkyYWVlMGFkNmViN2FkNqls0BI=: 00:21:00.719 15:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.719 15:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:00.719 15:38:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.719 15:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.719 15:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.719 15:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:00.719 15:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:00.719 15:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:00.979 15:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:21:00.979 15:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:00.979 15:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:00.979 15:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:00.979 15:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:00.979 15:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.979 15:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.979 15:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.979 15:38:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.979 15:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.979 15:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.979 15:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.979 15:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.239 00:21:01.239 15:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:01.239 15:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:01.239 15:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.499 15:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.499 15:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.499 15:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.499 15:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:01.499 15:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.499 15:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:01.499 { 00:21:01.499 "cntlid": 19, 00:21:01.499 "qid": 0, 00:21:01.499 "state": "enabled", 00:21:01.499 "thread": "nvmf_tgt_poll_group_000", 00:21:01.499 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:01.499 "listen_address": { 00:21:01.499 "trtype": "TCP", 00:21:01.499 "adrfam": "IPv4", 00:21:01.499 "traddr": "10.0.0.2", 00:21:01.499 "trsvcid": "4420" 00:21:01.499 }, 00:21:01.499 "peer_address": { 00:21:01.499 "trtype": "TCP", 00:21:01.499 "adrfam": "IPv4", 00:21:01.499 "traddr": "10.0.0.1", 00:21:01.499 "trsvcid": "46106" 00:21:01.499 }, 00:21:01.499 "auth": { 00:21:01.499 "state": "completed", 00:21:01.499 "digest": "sha256", 00:21:01.499 "dhgroup": "ffdhe3072" 00:21:01.499 } 00:21:01.499 } 00:21:01.499 ]' 00:21:01.499 15:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:01.499 15:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:01.499 15:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:01.499 15:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:01.499 15:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:01.760 15:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.760 15:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.760 15:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.760 15:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWIwMDU0NDg3MTU0ZmRkMGFmYzE2YzJmNjFlYmYxOWYg1wvC: --dhchap-ctrl-secret DHHC-1:02:OGU1MzFiNWNkZGI1YWNhYzU4YTQ0OWZlN2U0ZDE1M2ZlYjI4MmVlMjNiNjkwYjcwmEeucA==: 00:21:01.760 15:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MWIwMDU0NDg3MTU0ZmRkMGFmYzE2YzJmNjFlYmYxOWYg1wvC: --dhchap-ctrl-secret DHHC-1:02:OGU1MzFiNWNkZGI1YWNhYzU4YTQ0OWZlN2U0ZDE1M2ZlYjI4MmVlMjNiNjkwYjcwmEeucA==: 00:21:02.699 15:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.699 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.699 15:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:02.699 15:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.699 15:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.699 15:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.700 15:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:02.700 15:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:02.700 15:38:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:02.700 15:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:21:02.700 15:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:02.700 15:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:02.700 15:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:02.700 15:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:02.700 15:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.700 15:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.700 15:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.700 15:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.700 15:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.700 15:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.700 15:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.700 15:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.960 00:21:02.960 15:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:02.960 15:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:02.960 15:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.221 15:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.221 15:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.221 15:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.221 15:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.221 15:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.221 15:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:03.221 { 00:21:03.221 "cntlid": 21, 00:21:03.221 "qid": 0, 00:21:03.221 "state": "enabled", 00:21:03.221 "thread": "nvmf_tgt_poll_group_000", 00:21:03.221 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:03.221 "listen_address": { 00:21:03.221 "trtype": "TCP", 00:21:03.221 "adrfam": "IPv4", 00:21:03.221 "traddr": "10.0.0.2", 00:21:03.221 
"trsvcid": "4420" 00:21:03.221 }, 00:21:03.221 "peer_address": { 00:21:03.221 "trtype": "TCP", 00:21:03.221 "adrfam": "IPv4", 00:21:03.221 "traddr": "10.0.0.1", 00:21:03.221 "trsvcid": "46132" 00:21:03.221 }, 00:21:03.221 "auth": { 00:21:03.221 "state": "completed", 00:21:03.221 "digest": "sha256", 00:21:03.221 "dhgroup": "ffdhe3072" 00:21:03.221 } 00:21:03.221 } 00:21:03.221 ]' 00:21:03.221 15:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:03.221 15:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:03.221 15:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:03.221 15:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:03.221 15:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:03.221 15:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.221 15:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.221 15:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.482 15:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODM3MWExYzBhZmNhMDY4YmIxY2E1OTAwZWZkOTQyZjVjZTcxYWE4OThmYWIwZWM3ve4lJA==: --dhchap-ctrl-secret DHHC-1:01:ZGU4YzU2NzllYTZlNzM2N2Y5NWZjMjk3ZGY3MDU2YjAlxWiU: 00:21:03.482 15:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ODM3MWExYzBhZmNhMDY4YmIxY2E1OTAwZWZkOTQyZjVjZTcxYWE4OThmYWIwZWM3ve4lJA==: --dhchap-ctrl-secret DHHC-1:01:ZGU4YzU2NzllYTZlNzM2N2Y5NWZjMjk3ZGY3MDU2YjAlxWiU: 00:21:04.053 15:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.053 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.053 15:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:04.053 15:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.053 15:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.053 15:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.053 15:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:04.053 15:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:04.054 15:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:04.315 15:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:21:04.315 15:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:04.315 15:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:04.315 15:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:04.315 15:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:04.315 15:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.315 15:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:04.315 15:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.315 15:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.315 15:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.315 15:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:04.315 15:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:04.315 15:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:04.576 00:21:04.576 15:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:04.576 15:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:04.576 15:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.837 15:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.837 15:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.837 15:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.837 15:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.837 15:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.837 15:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:04.837 { 00:21:04.837 "cntlid": 23, 00:21:04.837 "qid": 0, 00:21:04.837 "state": "enabled", 00:21:04.837 "thread": "nvmf_tgt_poll_group_000", 00:21:04.837 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:04.837 "listen_address": { 00:21:04.837 "trtype": "TCP", 00:21:04.837 "adrfam": "IPv4", 00:21:04.837 "traddr": "10.0.0.2", 00:21:04.837 "trsvcid": "4420" 00:21:04.837 }, 00:21:04.837 "peer_address": { 00:21:04.837 "trtype": "TCP", 00:21:04.837 "adrfam": "IPv4", 00:21:04.837 "traddr": "10.0.0.1", 00:21:04.837 "trsvcid": "46150" 00:21:04.837 }, 00:21:04.837 "auth": { 00:21:04.837 "state": "completed", 00:21:04.837 "digest": "sha256", 00:21:04.837 "dhgroup": "ffdhe3072" 00:21:04.837 } 00:21:04.837 } 00:21:04.837 ]' 00:21:04.837 15:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:04.837 15:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:04.837 15:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:04.837 15:38:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:04.837 15:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:04.837 15:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.837 15:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.837 15:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.098 15:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yzk1ZDgyYTljYjhmNDZiZTMzMjQwYzU3NjE4MTIwYjY5ZDE0OWJhZGViZDdjZWQxNjY4ZWI5NWJjMjE1ZmY1Zbvp+Ws=: 00:21:05.098 15:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:Yzk1ZDgyYTljYjhmNDZiZTMzMjQwYzU3NjE4MTIwYjY5ZDE0OWJhZGViZDdjZWQxNjY4ZWI5NWJjMjE1ZmY1Zbvp+Ws=: 00:21:05.669 15:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.669 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.669 15:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:05.669 15:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.669 15:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:21:05.669 15:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.669 15:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:05.669 15:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:05.669 15:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:05.669 15:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:05.930 15:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:21:05.930 15:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:05.930 15:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:05.930 15:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:05.930 15:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:05.930 15:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.930 15:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.930 15:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.930 15:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:21:05.930 15:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.930 15:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.930 15:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.930 15:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.192 00:21:06.192 15:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:06.192 15:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:06.192 15:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.453 15:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.453 15:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.453 15:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.453 15:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.453 15:38:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.453 15:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:06.453 { 00:21:06.453 "cntlid": 25, 00:21:06.453 "qid": 0, 00:21:06.453 "state": "enabled", 00:21:06.453 "thread": "nvmf_tgt_poll_group_000", 00:21:06.453 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:06.453 "listen_address": { 00:21:06.453 "trtype": "TCP", 00:21:06.453 "adrfam": "IPv4", 00:21:06.453 "traddr": "10.0.0.2", 00:21:06.453 "trsvcid": "4420" 00:21:06.453 }, 00:21:06.453 "peer_address": { 00:21:06.453 "trtype": "TCP", 00:21:06.453 "adrfam": "IPv4", 00:21:06.453 "traddr": "10.0.0.1", 00:21:06.453 "trsvcid": "46186" 00:21:06.453 }, 00:21:06.453 "auth": { 00:21:06.453 "state": "completed", 00:21:06.453 "digest": "sha256", 00:21:06.453 "dhgroup": "ffdhe4096" 00:21:06.453 } 00:21:06.453 } 00:21:06.453 ]' 00:21:06.453 15:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:06.453 15:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:06.453 15:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:06.453 15:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:06.453 15:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:06.453 15:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.453 15:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.453 15:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.714 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTRiY2MxNmY1M2UwYmI4N2ZkODk5ODk3Mzg0ZmU0ZjFhNWIyNjJlMTkzMGQ4NjA06mlI8w==: --dhchap-ctrl-secret DHHC-1:03:OGI4NWNhNThlN2YwZjc0OTk1MjU2MmU4NmU3ODMwOWI0M2ZkZTU0Y2U4ZTkyY2ZjZDkyYWVlMGFkNmViN2FkNqls0BI=: 00:21:06.714 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NTRiY2MxNmY1M2UwYmI4N2ZkODk5ODk3Mzg0ZmU0ZjFhNWIyNjJlMTkzMGQ4NjA06mlI8w==: --dhchap-ctrl-secret DHHC-1:03:OGI4NWNhNThlN2YwZjc0OTk1MjU2MmU4NmU3ODMwOWI0M2ZkZTU0Y2U4ZTkyY2ZjZDkyYWVlMGFkNmViN2FkNqls0BI=: 00:21:07.283 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.283 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.283 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:07.283 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.283 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.283 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.283 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:07.283 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:07.283 15:38:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:07.544 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:21:07.544 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:07.544 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:07.544 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:07.544 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:07.545 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.545 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.545 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.545 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.545 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.545 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.545 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.545 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.806 00:21:07.806 15:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:07.806 15:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:07.806 15:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.067 15:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.067 15:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.067 15:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.067 15:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.067 15:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.067 15:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:08.067 { 00:21:08.067 "cntlid": 27, 00:21:08.067 "qid": 0, 00:21:08.067 "state": "enabled", 00:21:08.067 "thread": "nvmf_tgt_poll_group_000", 00:21:08.067 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:08.067 "listen_address": { 00:21:08.067 "trtype": "TCP", 00:21:08.067 "adrfam": "IPv4", 00:21:08.067 "traddr": "10.0.0.2", 00:21:08.067 
"trsvcid": "4420" 00:21:08.067 }, 00:21:08.067 "peer_address": { 00:21:08.067 "trtype": "TCP", 00:21:08.068 "adrfam": "IPv4", 00:21:08.068 "traddr": "10.0.0.1", 00:21:08.068 "trsvcid": "46218" 00:21:08.068 }, 00:21:08.068 "auth": { 00:21:08.068 "state": "completed", 00:21:08.068 "digest": "sha256", 00:21:08.068 "dhgroup": "ffdhe4096" 00:21:08.068 } 00:21:08.068 } 00:21:08.068 ]' 00:21:08.068 15:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:08.068 15:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:08.068 15:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:08.068 15:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:08.068 15:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:08.068 15:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.068 15:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.068 15:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.330 15:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWIwMDU0NDg3MTU0ZmRkMGFmYzE2YzJmNjFlYmYxOWYg1wvC: --dhchap-ctrl-secret DHHC-1:02:OGU1MzFiNWNkZGI1YWNhYzU4YTQ0OWZlN2U0ZDE1M2ZlYjI4MmVlMjNiNjkwYjcwmEeucA==: 00:21:08.330 15:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MWIwMDU0NDg3MTU0ZmRkMGFmYzE2YzJmNjFlYmYxOWYg1wvC: --dhchap-ctrl-secret DHHC-1:02:OGU1MzFiNWNkZGI1YWNhYzU4YTQ0OWZlN2U0ZDE1M2ZlYjI4MmVlMjNiNjkwYjcwmEeucA==: 00:21:08.902 15:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.902 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.902 15:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:08.902 15:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.902 15:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.902 15:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.902 15:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:08.902 15:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:08.902 15:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:09.163 15:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:21:09.163 15:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:09.163 15:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:09.163 15:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:09.163 15:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:09.163 15:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.163 15:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.163 15:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.163 15:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.163 15:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.163 15:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.163 15:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.163 15:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.424 00:21:09.424 15:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:09.424 15:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.424 15:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:09.685 15:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.685 15:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.685 15:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.685 15:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.685 15:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.685 15:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:09.685 { 00:21:09.685 "cntlid": 29, 00:21:09.685 "qid": 0, 00:21:09.685 "state": "enabled", 00:21:09.685 "thread": "nvmf_tgt_poll_group_000", 00:21:09.685 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:09.685 "listen_address": { 00:21:09.685 "trtype": "TCP", 00:21:09.685 "adrfam": "IPv4", 00:21:09.685 "traddr": "10.0.0.2", 00:21:09.685 "trsvcid": "4420" 00:21:09.685 }, 00:21:09.685 "peer_address": { 00:21:09.685 "trtype": "TCP", 00:21:09.685 "adrfam": "IPv4", 00:21:09.685 "traddr": "10.0.0.1", 00:21:09.685 "trsvcid": "46252" 00:21:09.685 }, 00:21:09.685 "auth": { 00:21:09.685 "state": "completed", 00:21:09.685 "digest": "sha256", 00:21:09.685 "dhgroup": "ffdhe4096" 00:21:09.685 } 00:21:09.685 } 00:21:09.685 ]' 00:21:09.685 15:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:09.685 15:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:09.685 15:38:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:09.685 15:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:09.685 15:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:09.685 15:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.685 15:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.686 15:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.946 15:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODM3MWExYzBhZmNhMDY4YmIxY2E1OTAwZWZkOTQyZjVjZTcxYWE4OThmYWIwZWM3ve4lJA==: --dhchap-ctrl-secret DHHC-1:01:ZGU4YzU2NzllYTZlNzM2N2Y5NWZjMjk3ZGY3MDU2YjAlxWiU: 00:21:09.946 15:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ODM3MWExYzBhZmNhMDY4YmIxY2E1OTAwZWZkOTQyZjVjZTcxYWE4OThmYWIwZWM3ve4lJA==: --dhchap-ctrl-secret DHHC-1:01:ZGU4YzU2NzllYTZlNzM2N2Y5NWZjMjk3ZGY3MDU2YjAlxWiU: 00:21:10.517 15:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.517 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.517 15:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:10.517 15:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.517 15:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.778 15:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.778 15:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:10.778 15:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:10.778 15:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:10.778 15:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:21:10.778 15:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:10.778 15:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:10.778 15:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:10.778 15:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:10.778 15:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.778 15:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:10.778 15:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.778 15:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.778 15:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.778 15:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:10.778 15:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:10.778 15:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:11.038 00:21:11.038 15:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:11.038 15:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:11.038 15:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.298 15:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.298 15:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.298 15:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.298 15:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:11.298 15:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.298 15:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:11.298 { 00:21:11.298 "cntlid": 31, 00:21:11.298 "qid": 0, 00:21:11.298 "state": "enabled", 00:21:11.298 "thread": "nvmf_tgt_poll_group_000", 00:21:11.298 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:11.298 "listen_address": { 00:21:11.298 "trtype": "TCP", 00:21:11.298 "adrfam": "IPv4", 00:21:11.298 "traddr": "10.0.0.2", 00:21:11.298 "trsvcid": "4420" 00:21:11.298 }, 00:21:11.298 "peer_address": { 00:21:11.298 "trtype": "TCP", 00:21:11.298 "adrfam": "IPv4", 00:21:11.298 "traddr": "10.0.0.1", 00:21:11.298 "trsvcid": "59680" 00:21:11.298 }, 00:21:11.298 "auth": { 00:21:11.298 "state": "completed", 00:21:11.298 "digest": "sha256", 00:21:11.298 "dhgroup": "ffdhe4096" 00:21:11.298 } 00:21:11.298 } 00:21:11.298 ]' 00:21:11.298 15:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:11.298 15:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:11.299 15:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:11.299 15:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:11.299 15:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:11.559 15:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.559 15:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.559 15:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.559 15:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yzk1ZDgyYTljYjhmNDZiZTMzMjQwYzU3NjE4MTIwYjY5ZDE0OWJhZGViZDdjZWQxNjY4ZWI5NWJjMjE1ZmY1Zbvp+Ws=: 00:21:11.559 15:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:Yzk1ZDgyYTljYjhmNDZiZTMzMjQwYzU3NjE4MTIwYjY5ZDE0OWJhZGViZDdjZWQxNjY4ZWI5NWJjMjE1ZmY1Zbvp+Ws=: 00:21:12.502 15:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.502 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.502 15:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:12.502 15:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.502 15:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.502 15:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.502 15:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:12.502 15:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:12.502 15:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:12.502 15:38:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:12.502 15:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:21:12.502 15:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:12.502 15:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:12.502 15:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:12.502 15:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:12.502 15:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.502 15:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.502 15:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.502 15:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.502 15:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.502 15:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.502 15:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.502 15:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.763 00:21:12.763 15:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:12.763 15:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.763 15:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:13.024 15:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.024 15:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.024 15:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.024 15:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.024 15:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.024 15:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:13.024 { 00:21:13.024 "cntlid": 33, 00:21:13.024 "qid": 0, 00:21:13.024 "state": "enabled", 00:21:13.024 "thread": "nvmf_tgt_poll_group_000", 00:21:13.024 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:13.024 "listen_address": { 00:21:13.024 "trtype": "TCP", 00:21:13.024 "adrfam": "IPv4", 00:21:13.024 "traddr": "10.0.0.2", 00:21:13.024 
"trsvcid": "4420" 00:21:13.024 }, 00:21:13.024 "peer_address": { 00:21:13.024 "trtype": "TCP", 00:21:13.024 "adrfam": "IPv4", 00:21:13.024 "traddr": "10.0.0.1", 00:21:13.024 "trsvcid": "59692" 00:21:13.024 }, 00:21:13.024 "auth": { 00:21:13.024 "state": "completed", 00:21:13.024 "digest": "sha256", 00:21:13.024 "dhgroup": "ffdhe6144" 00:21:13.024 } 00:21:13.024 } 00:21:13.024 ]' 00:21:13.024 15:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:13.024 15:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:13.024 15:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:13.024 15:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:13.024 15:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:13.285 15:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.285 15:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.285 15:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.285 15:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTRiY2MxNmY1M2UwYmI4N2ZkODk5ODk3Mzg0ZmU0ZjFhNWIyNjJlMTkzMGQ4NjA06mlI8w==: --dhchap-ctrl-secret DHHC-1:03:OGI4NWNhNThlN2YwZjc0OTk1MjU2MmU4NmU3ODMwOWI0M2ZkZTU0Y2U4ZTkyY2ZjZDkyYWVlMGFkNmViN2FkNqls0BI=: 00:21:13.285 15:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NTRiY2MxNmY1M2UwYmI4N2ZkODk5ODk3Mzg0ZmU0ZjFhNWIyNjJlMTkzMGQ4NjA06mlI8w==: --dhchap-ctrl-secret DHHC-1:03:OGI4NWNhNThlN2YwZjc0OTk1MjU2MmU4NmU3ODMwOWI0M2ZkZTU0Y2U4ZTkyY2ZjZDkyYWVlMGFkNmViN2FkNqls0BI=: 00:21:14.227 15:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.227 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.227 15:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:14.227 15:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.227 15:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.227 15:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.227 15:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:14.227 15:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:14.227 15:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:14.227 15:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:21:14.227 15:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:14.227 15:38:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:14.227 15:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:14.227 15:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:14.227 15:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.227 15:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.227 15:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.227 15:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.227 15:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.227 15:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.227 15:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.228 15:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.488 00:21:14.488 15:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:14.488 15:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:14.488 15:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.749 15:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.749 15:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.749 15:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.749 15:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.749 15:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.749 15:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:14.749 { 00:21:14.749 "cntlid": 35, 00:21:14.749 "qid": 0, 00:21:14.749 "state": "enabled", 00:21:14.749 "thread": "nvmf_tgt_poll_group_000", 00:21:14.749 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:14.749 "listen_address": { 00:21:14.749 "trtype": "TCP", 00:21:14.749 "adrfam": "IPv4", 00:21:14.749 "traddr": "10.0.0.2", 00:21:14.749 "trsvcid": "4420" 00:21:14.749 }, 00:21:14.749 "peer_address": { 00:21:14.749 "trtype": "TCP", 00:21:14.749 "adrfam": "IPv4", 00:21:14.749 "traddr": "10.0.0.1", 00:21:14.749 "trsvcid": "59708" 00:21:14.749 }, 00:21:14.749 "auth": { 00:21:14.749 "state": "completed", 00:21:14.749 "digest": "sha256", 00:21:14.749 "dhgroup": "ffdhe6144" 00:21:14.749 } 00:21:14.749 } 00:21:14.749 ]' 00:21:14.749 15:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:14.749 15:38:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:14.749 15:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:14.749 15:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:14.749 15:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:14.749 15:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.749 15:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.749 15:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.009 15:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWIwMDU0NDg3MTU0ZmRkMGFmYzE2YzJmNjFlYmYxOWYg1wvC: --dhchap-ctrl-secret DHHC-1:02:OGU1MzFiNWNkZGI1YWNhYzU4YTQ0OWZlN2U0ZDE1M2ZlYjI4MmVlMjNiNjkwYjcwmEeucA==: 00:21:15.009 15:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MWIwMDU0NDg3MTU0ZmRkMGFmYzE2YzJmNjFlYmYxOWYg1wvC: --dhchap-ctrl-secret DHHC-1:02:OGU1MzFiNWNkZGI1YWNhYzU4YTQ0OWZlN2U0ZDE1M2ZlYjI4MmVlMjNiNjkwYjcwmEeucA==: 00:21:15.580 15:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.841 15:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:15.841 15:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.841 15:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.841 15:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.841 15:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:15.841 15:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:15.841 15:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:15.841 15:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:21:15.841 15:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:15.841 15:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:15.841 15:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:15.841 15:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:15.841 15:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.841 15:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:21:15.841 15:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.841 15:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.841 15:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.841 15:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.841 15:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.841 15:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.102 00:21:16.363 15:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:16.363 15:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:16.363 15:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.363 15:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.363 15:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.363 15:38:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.363 15:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.363 15:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.363 15:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:16.363 { 00:21:16.363 "cntlid": 37, 00:21:16.363 "qid": 0, 00:21:16.363 "state": "enabled", 00:21:16.363 "thread": "nvmf_tgt_poll_group_000", 00:21:16.363 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:16.363 "listen_address": { 00:21:16.363 "trtype": "TCP", 00:21:16.363 "adrfam": "IPv4", 00:21:16.363 "traddr": "10.0.0.2", 00:21:16.363 "trsvcid": "4420" 00:21:16.363 }, 00:21:16.363 "peer_address": { 00:21:16.363 "trtype": "TCP", 00:21:16.363 "adrfam": "IPv4", 00:21:16.363 "traddr": "10.0.0.1", 00:21:16.363 "trsvcid": "59734" 00:21:16.363 }, 00:21:16.363 "auth": { 00:21:16.363 "state": "completed", 00:21:16.363 "digest": "sha256", 00:21:16.363 "dhgroup": "ffdhe6144" 00:21:16.363 } 00:21:16.363 } 00:21:16.363 ]' 00:21:16.363 15:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:16.363 15:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:16.363 15:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:16.624 15:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:16.624 15:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:16.624 15:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.624 15:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.624 15:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.624 15:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODM3MWExYzBhZmNhMDY4YmIxY2E1OTAwZWZkOTQyZjVjZTcxYWE4OThmYWIwZWM3ve4lJA==: --dhchap-ctrl-secret DHHC-1:01:ZGU4YzU2NzllYTZlNzM2N2Y5NWZjMjk3ZGY3MDU2YjAlxWiU: 00:21:16.624 15:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ODM3MWExYzBhZmNhMDY4YmIxY2E1OTAwZWZkOTQyZjVjZTcxYWE4OThmYWIwZWM3ve4lJA==: --dhchap-ctrl-secret DHHC-1:01:ZGU4YzU2NzllYTZlNzM2N2Y5NWZjMjk3ZGY3MDU2YjAlxWiU: 00:21:17.566 15:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.566 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.566 15:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:17.566 15:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.566 15:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.566 15:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.566 15:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:17.566 15:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:17.566 15:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:17.566 15:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:21:17.566 15:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:17.566 15:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:17.566 15:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:17.566 15:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:17.566 15:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.566 15:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:17.566 15:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.566 15:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.566 15:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.566 15:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:17.566 15:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:17.566 15:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:17.826 00:21:18.087 15:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:18.087 15:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:18.087 15:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.087 15:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.087 15:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.087 15:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.087 15:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.087 15:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.087 15:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:18.087 { 00:21:18.087 "cntlid": 39, 00:21:18.087 "qid": 0, 00:21:18.087 "state": "enabled", 00:21:18.087 "thread": "nvmf_tgt_poll_group_000", 00:21:18.087 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:18.087 "listen_address": { 00:21:18.087 "trtype": "TCP", 00:21:18.087 "adrfam": 
"IPv4", 00:21:18.087 "traddr": "10.0.0.2", 00:21:18.087 "trsvcid": "4420" 00:21:18.087 }, 00:21:18.087 "peer_address": { 00:21:18.087 "trtype": "TCP", 00:21:18.087 "adrfam": "IPv4", 00:21:18.087 "traddr": "10.0.0.1", 00:21:18.087 "trsvcid": "59748" 00:21:18.087 }, 00:21:18.087 "auth": { 00:21:18.087 "state": "completed", 00:21:18.087 "digest": "sha256", 00:21:18.087 "dhgroup": "ffdhe6144" 00:21:18.087 } 00:21:18.087 } 00:21:18.087 ]' 00:21:18.087 15:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:18.087 15:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:18.087 15:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:18.347 15:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:18.347 15:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:18.347 15:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.347 15:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.347 15:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.347 15:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yzk1ZDgyYTljYjhmNDZiZTMzMjQwYzU3NjE4MTIwYjY5ZDE0OWJhZGViZDdjZWQxNjY4ZWI5NWJjMjE1ZmY1Zbvp+Ws=: 00:21:18.347 15:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:Yzk1ZDgyYTljYjhmNDZiZTMzMjQwYzU3NjE4MTIwYjY5ZDE0OWJhZGViZDdjZWQxNjY4ZWI5NWJjMjE1ZmY1Zbvp+Ws=: 00:21:19.287 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.287 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.287 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:19.287 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.287 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.287 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.287 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:19.287 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:19.287 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:19.287 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:19.287 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:21:19.287 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:19.287 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:19.287 
15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:19.287 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:19.287 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.287 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.287 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.287 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.287 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.287 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.287 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.287 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.857 00:21:19.857 15:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:19.857 15:38:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:19.857 15:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.117 15:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.117 15:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.117 15:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.117 15:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.117 15:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.117 15:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:20.117 { 00:21:20.117 "cntlid": 41, 00:21:20.117 "qid": 0, 00:21:20.117 "state": "enabled", 00:21:20.117 "thread": "nvmf_tgt_poll_group_000", 00:21:20.117 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:20.117 "listen_address": { 00:21:20.117 "trtype": "TCP", 00:21:20.117 "adrfam": "IPv4", 00:21:20.117 "traddr": "10.0.0.2", 00:21:20.117 "trsvcid": "4420" 00:21:20.117 }, 00:21:20.117 "peer_address": { 00:21:20.117 "trtype": "TCP", 00:21:20.117 "adrfam": "IPv4", 00:21:20.117 "traddr": "10.0.0.1", 00:21:20.117 "trsvcid": "56410" 00:21:20.117 }, 00:21:20.117 "auth": { 00:21:20.117 "state": "completed", 00:21:20.117 "digest": "sha256", 00:21:20.117 "dhgroup": "ffdhe8192" 00:21:20.117 } 00:21:20.117 } 00:21:20.117 ]' 00:21:20.117 15:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:20.117 15:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:21:20.117 15:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:20.117 15:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:20.117 15:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:20.117 15:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.117 15:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.117 15:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.375 15:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTRiY2MxNmY1M2UwYmI4N2ZkODk5ODk3Mzg0ZmU0ZjFhNWIyNjJlMTkzMGQ4NjA06mlI8w==: --dhchap-ctrl-secret DHHC-1:03:OGI4NWNhNThlN2YwZjc0OTk1MjU2MmU4NmU3ODMwOWI0M2ZkZTU0Y2U4ZTkyY2ZjZDkyYWVlMGFkNmViN2FkNqls0BI=: 00:21:20.375 15:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NTRiY2MxNmY1M2UwYmI4N2ZkODk5ODk3Mzg0ZmU0ZjFhNWIyNjJlMTkzMGQ4NjA06mlI8w==: --dhchap-ctrl-secret DHHC-1:03:OGI4NWNhNThlN2YwZjc0OTk1MjU2MmU4NmU3ODMwOWI0M2ZkZTU0Y2U4ZTkyY2ZjZDkyYWVlMGFkNmViN2FkNqls0BI=: 00:21:20.944 15:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.944 15:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:21:20.944 15:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:20.944 15:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:20.944 15:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:20.944 15:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:20.944 15:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:21:20.944 15:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:21:21.203 15:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1
00:21:21.203 15:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:21.203 15:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:21:21.203 15:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:21:21.203 15:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:21:21.203 15:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:21.203 15:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:21.203 15:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:21.203 15:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:21.203 15:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:21.203 15:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:21.203 15:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:21.203 15:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:21.771
00:21:21.771
00:21:21.771 15:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:21.771 15:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:21.771 15:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:21.771 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:21.771 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:21.771 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:21.771 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:21.771 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:21.771 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:21.771 {
00:21:21.771 "cntlid": 43,
00:21:21.771 "qid": 0,
00:21:21.771 "state": "enabled",
00:21:21.771 "thread": "nvmf_tgt_poll_group_000",
00:21:21.771 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:21:21.771 "listen_address": {
00:21:21.771 "trtype": "TCP",
00:21:21.771 "adrfam": "IPv4",
00:21:21.771 "traddr": "10.0.0.2",
00:21:21.771 "trsvcid": "4420"
00:21:21.771 },
00:21:21.771 "peer_address": {
00:21:21.771 "trtype": "TCP",
00:21:21.771 "adrfam": "IPv4",
00:21:21.771 "traddr": "10.0.0.1",
00:21:21.771 "trsvcid": "56432"
00:21:21.771 },
00:21:21.771 "auth": {
00:21:21.771 "state": "completed",
00:21:21.771 "digest": "sha256",
00:21:21.771 "dhgroup": "ffdhe8192"
00:21:21.771 }
00:21:21.771 }
00:21:21.771 ]'
00:21:21.771 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:22.030 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:21:22.030 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:22.030 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:21:22.030 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:22.030 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:22.030 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:22.030 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:22.290 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWIwMDU0NDg3MTU0ZmRkMGFmYzE2YzJmNjFlYmYxOWYg1wvC: --dhchap-ctrl-secret DHHC-1:02:OGU1MzFiNWNkZGI1YWNhYzU4YTQ0OWZlN2U0ZDE1M2ZlYjI4MmVlMjNiNjkwYjcwmEeucA==:
00:21:22.290 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MWIwMDU0NDg3MTU0ZmRkMGFmYzE2YzJmNjFlYmYxOWYg1wvC: --dhchap-ctrl-secret DHHC-1:02:OGU1MzFiNWNkZGI1YWNhYzU4YTQ0OWZlN2U0ZDE1M2ZlYjI4MmVlMjNiNjkwYjcwmEeucA==:
00:21:22.859 15:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:22.859 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:22.859 15:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:21:22.859 15:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:22.859 15:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:22.859 15:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:22.859 15:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:22.859 15:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:21:22.859 15:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:21:23.118 15:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2
00:21:23.118 15:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:23.118 15:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:21:23.118 15:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:21:23.118 15:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:21:23.118 15:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:23.118 15:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:23.118 15:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:23.118 15:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:23.118 15:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:23.118 15:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:23.118 15:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:23.118 15:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:23.378
00:21:23.638 15:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:23.639 15:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:23.639 15:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:23.639 15:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:23.639 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:23.639 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:23.639 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:23.639 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:23.639 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:23.639 {
00:21:23.639 "cntlid": 45,
00:21:23.639 "qid": 0,
00:21:23.639 "state": "enabled",
00:21:23.639 "thread": "nvmf_tgt_poll_group_000",
00:21:23.639 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:21:23.639 "listen_address": {
00:21:23.639 "trtype": "TCP",
00:21:23.639 "adrfam": "IPv4",
00:21:23.639 "traddr": "10.0.0.2",
00:21:23.639 "trsvcid": "4420"
00:21:23.639 },
00:21:23.639 "peer_address": {
00:21:23.639 "trtype": "TCP",
00:21:23.639 "adrfam": "IPv4",
00:21:23.639 "traddr": "10.0.0.1",
00:21:23.639 "trsvcid": "56460"
00:21:23.639 },
00:21:23.639 "auth": {
00:21:23.639 "state": "completed",
00:21:23.639 "digest": "sha256",
00:21:23.639 "dhgroup": "ffdhe8192"
00:21:23.639 }
00:21:23.639 }
00:21:23.639 ]'
00:21:23.639 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:23.639 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:21:23.639 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:23.898 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:21:23.898 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:23.898 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:23.898 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:23.898 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:23.899 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODM3MWExYzBhZmNhMDY4YmIxY2E1OTAwZWZkOTQyZjVjZTcxYWE4OThmYWIwZWM3ve4lJA==: --dhchap-ctrl-secret DHHC-1:01:ZGU4YzU2NzllYTZlNzM2N2Y5NWZjMjk3ZGY3MDU2YjAlxWiU:
00:21:23.899 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ODM3MWExYzBhZmNhMDY4YmIxY2E1OTAwZWZkOTQyZjVjZTcxYWE4OThmYWIwZWM3ve4lJA==: --dhchap-ctrl-secret DHHC-1:01:ZGU4YzU2NzllYTZlNzM2N2Y5NWZjMjk3ZGY3MDU2YjAlxWiU:
00:21:24.836 15:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:24.836 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:24.836 15:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:21:24.836 15:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:24.836 15:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:24.836 15:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:24.836 15:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:24.836 15:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:21:24.836 15:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:21:24.836 15:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3
00:21:24.836 15:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:24.836 15:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:21:24.836 15:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:21:24.836 15:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:21:24.836 15:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:24.836 15:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3
00:21:24.836 15:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:24.836 15:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:24.836 15:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:24.836 15:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:21:24.836 15:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:24.836 15:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:25.407
00:21:25.407 15:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:25.407 15:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:25.407 15:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:25.668 15:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:25.668 15:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:25.668 15:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:25.668 15:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:25.668 15:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:25.668 15:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:25.668 {
00:21:25.668 "cntlid": 47,
00:21:25.668 "qid": 0,
00:21:25.668 "state": "enabled",
00:21:25.668 "thread": "nvmf_tgt_poll_group_000",
00:21:25.668 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:21:25.668 "listen_address": {
00:21:25.668 "trtype": "TCP",
00:21:25.668 "adrfam": "IPv4",
00:21:25.668 "traddr": "10.0.0.2",
00:21:25.668 "trsvcid": "4420"
00:21:25.668 },
00:21:25.668 "peer_address": {
00:21:25.668 "trtype": "TCP",
00:21:25.668 "adrfam": "IPv4",
00:21:25.668 "traddr": "10.0.0.1",
00:21:25.668 "trsvcid": "56480"
00:21:25.668 },
00:21:25.668 "auth": {
00:21:25.668 "state": "completed",
00:21:25.668 "digest": "sha256",
00:21:25.668 "dhgroup": "ffdhe8192"
00:21:25.668 }
00:21:25.668 }
00:21:25.668 ]'
00:21:25.668 15:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:25.668 15:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:21:25.668 15:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:25.668 15:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:21:25.668 15:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:25.668 15:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:25.668 15:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:25.668 15:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:25.927 15:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yzk1ZDgyYTljYjhmNDZiZTMzMjQwYzU3NjE4MTIwYjY5ZDE0OWJhZGViZDdjZWQxNjY4ZWI5NWJjMjE1ZmY1Zbvp+Ws=:
00:21:25.927 15:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:Yzk1ZDgyYTljYjhmNDZiZTMzMjQwYzU3NjE4MTIwYjY5ZDE0OWJhZGViZDdjZWQxNjY4ZWI5NWJjMjE1ZmY1Zbvp+Ws=:
00:21:26.496 15:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:26.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:26.496 15:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:21:26.496 15:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:26.496 15:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:26.496 15:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:26.496 15:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:21:26.496 15:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:21:26.496 15:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:26.496 15:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:21:26.496 15:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:21:26.759 15:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0
00:21:26.759 15:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:26.759 15:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:21:26.759 15:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:21:26.759 15:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:21:26.759 15:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:26.759 15:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:26.759 15:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:26.759 15:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:26.759 15:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:26.759 15:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:26.759 15:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:26.759 15:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:27.019
00:21:27.019 15:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:27.019 15:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:27.019 15:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:27.019 15:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:27.019 15:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:27.019 15:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:27.019 15:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:27.019 15:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:27.019 15:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:27.019 {
00:21:27.019 "cntlid": 49,
00:21:27.019 "qid": 0,
00:21:27.019 "state": "enabled",
00:21:27.019 "thread": "nvmf_tgt_poll_group_000",
00:21:27.019 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:21:27.019 "listen_address": {
00:21:27.019 "trtype": "TCP",
00:21:27.019 "adrfam": "IPv4",
00:21:27.019 "traddr": "10.0.0.2",
00:21:27.019 "trsvcid": "4420"
00:21:27.019 },
00:21:27.019 "peer_address": {
00:21:27.019 "trtype": "TCP",
00:21:27.019 "adrfam": "IPv4",
00:21:27.019 "traddr": "10.0.0.1",
00:21:27.019 "trsvcid": "56514"
00:21:27.019 },
00:21:27.019 "auth": {
00:21:27.019 "state": "completed",
00:21:27.019 "digest": "sha384",
00:21:27.019 "dhgroup": "null"
00:21:27.019 }
00:21:27.019 }
00:21:27.019 ]'
00:21:27.019 15:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:27.280 15:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:21:27.280 15:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:27.280 15:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:21:27.280 15:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:27.280 15:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:27.280 15:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:27.280 15:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:27.542 15:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTRiY2MxNmY1M2UwYmI4N2ZkODk5ODk3Mzg0ZmU0ZjFhNWIyNjJlMTkzMGQ4NjA06mlI8w==: --dhchap-ctrl-secret DHHC-1:03:OGI4NWNhNThlN2YwZjc0OTk1MjU2MmU4NmU3ODMwOWI0M2ZkZTU0Y2U4ZTkyY2ZjZDkyYWVlMGFkNmViN2FkNqls0BI=:
00:21:27.542 15:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NTRiY2MxNmY1M2UwYmI4N2ZkODk5ODk3Mzg0ZmU0ZjFhNWIyNjJlMTkzMGQ4NjA06mlI8w==: --dhchap-ctrl-secret DHHC-1:03:OGI4NWNhNThlN2YwZjc0OTk1MjU2MmU4NmU3ODMwOWI0M2ZkZTU0Y2U4ZTkyY2ZjZDkyYWVlMGFkNmViN2FkNqls0BI=:
00:21:28.113 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:28.113 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:28.113 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:21:28.113 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:28.113 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:28.113 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:28.113 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:28.113 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:21:28.113 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:21:28.374 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1
00:21:28.374 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:28.374 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:21:28.374 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:21:28.374 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:21:28.374 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:28.374 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:28.374 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:28.374 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:28.374 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:28.374 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:28.374 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:28.374 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:28.634
00:21:28.634 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:28.634 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:28.634 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:28.634 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:28.634 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:28.634 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:28.634 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:28.634 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:28.634 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:28.634 {
00:21:28.634 "cntlid": 51,
00:21:28.634 "qid": 0,
00:21:28.634 "state": "enabled",
00:21:28.634 "thread": "nvmf_tgt_poll_group_000",
00:21:28.634 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:21:28.634 "listen_address": {
00:21:28.634 "trtype": "TCP",
00:21:28.634 "adrfam": "IPv4",
00:21:28.634 "traddr": "10.0.0.2",
00:21:28.634 "trsvcid": "4420"
00:21:28.634 },
00:21:28.634 "peer_address": {
00:21:28.634 "trtype": "TCP",
00:21:28.634 "adrfam": "IPv4",
00:21:28.634 "traddr": "10.0.0.1",
00:21:28.634 "trsvcid": "56556"
00:21:28.634 },
00:21:28.634 "auth": {
00:21:28.634 "state": "completed",
00:21:28.634 "digest": "sha384",
00:21:28.634 "dhgroup": "null"
00:21:28.634 }
00:21:28.634 }
00:21:28.634 ]'
00:21:28.895 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:28.895 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:21:28.895 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:28.895 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:21:28.895 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:28.895 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:28.896 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:28.896 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:29.156 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWIwMDU0NDg3MTU0ZmRkMGFmYzE2YzJmNjFlYmYxOWYg1wvC: --dhchap-ctrl-secret DHHC-1:02:OGU1MzFiNWNkZGI1YWNhYzU4YTQ0OWZlN2U0ZDE1M2ZlYjI4MmVlMjNiNjkwYjcwmEeucA==:
00:21:29.156 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MWIwMDU0NDg3MTU0ZmRkMGFmYzE2YzJmNjFlYmYxOWYg1wvC: --dhchap-ctrl-secret DHHC-1:02:OGU1MzFiNWNkZGI1YWNhYzU4YTQ0OWZlN2U0ZDE1M2ZlYjI4MmVlMjNiNjkwYjcwmEeucA==:
00:21:29.726 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:29.726 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:29.726 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:21:29.726 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:29.726 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:29.726 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:29.726 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:29.726 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:21:29.726 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:21:29.987 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2
00:21:29.987 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:29.987 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:21:29.987 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:21:29.987 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:21:29.987 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:29.987 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:29.987 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:29.987 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:29.987 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:29.987 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:29.987 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:29.987 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:29.987
00:21:30.248 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:30.248 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:30.248 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:30.248 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:30.248 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:30.249 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:30.249 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:30.249 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:30.249 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:30.249 {
00:21:30.249 "cntlid": 53,
00:21:30.249 "qid": 0,
00:21:30.249 "state": "enabled",
00:21:30.249 "thread": "nvmf_tgt_poll_group_000",
00:21:30.249 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:21:30.249 "listen_address": {
00:21:30.249 "trtype": "TCP",
00:21:30.249 "adrfam": "IPv4",
00:21:30.249 "traddr": "10.0.0.2",
00:21:30.249 "trsvcid": "4420"
00:21:30.249 },
00:21:30.249 "peer_address": {
00:21:30.249 "trtype": "TCP",
00:21:30.249 "adrfam": "IPv4",
00:21:30.249 "traddr": "10.0.0.1",
00:21:30.249 "trsvcid": "42196"
00:21:30.249 },
00:21:30.249 "auth": {
00:21:30.249 "state": "completed",
00:21:30.249 "digest": "sha384",
00:21:30.249 "dhgroup": "null"
00:21:30.249 }
00:21:30.249 }
00:21:30.249 ]'
00:21:30.249 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r
'.[0].auth.digest' 00:21:30.249 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:30.249 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:30.510 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:30.510 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:30.510 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:30.510 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.510 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.510 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODM3MWExYzBhZmNhMDY4YmIxY2E1OTAwZWZkOTQyZjVjZTcxYWE4OThmYWIwZWM3ve4lJA==: --dhchap-ctrl-secret DHHC-1:01:ZGU4YzU2NzllYTZlNzM2N2Y5NWZjMjk3ZGY3MDU2YjAlxWiU: 00:21:30.510 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ODM3MWExYzBhZmNhMDY4YmIxY2E1OTAwZWZkOTQyZjVjZTcxYWE4OThmYWIwZWM3ve4lJA==: --dhchap-ctrl-secret DHHC-1:01:ZGU4YzU2NzllYTZlNzM2N2Y5NWZjMjk3ZGY3MDU2YjAlxWiU: 00:21:31.451 15:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:31.451 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:31.451 15:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:31.451 15:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.451 15:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.451 15:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.451 15:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:31.451 15:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:31.451 15:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:31.451 15:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:21:31.451 15:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:31.451 15:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:31.451 15:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:31.451 15:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:31.451 15:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.451 15:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:31.451 
15:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.451 15:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.451 15:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.451 15:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:31.451 15:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:31.451 15:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:31.712 00:21:31.712 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:31.712 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:31.712 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.975 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.975 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.975 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.975 15:39:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.975 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.975 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:31.975 { 00:21:31.975 "cntlid": 55, 00:21:31.975 "qid": 0, 00:21:31.975 "state": "enabled", 00:21:31.975 "thread": "nvmf_tgt_poll_group_000", 00:21:31.975 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:31.975 "listen_address": { 00:21:31.975 "trtype": "TCP", 00:21:31.975 "adrfam": "IPv4", 00:21:31.975 "traddr": "10.0.0.2", 00:21:31.975 "trsvcid": "4420" 00:21:31.975 }, 00:21:31.975 "peer_address": { 00:21:31.975 "trtype": "TCP", 00:21:31.975 "adrfam": "IPv4", 00:21:31.975 "traddr": "10.0.0.1", 00:21:31.975 "trsvcid": "42226" 00:21:31.975 }, 00:21:31.975 "auth": { 00:21:31.975 "state": "completed", 00:21:31.975 "digest": "sha384", 00:21:31.975 "dhgroup": "null" 00:21:31.975 } 00:21:31.975 } 00:21:31.975 ]' 00:21:31.975 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:31.975 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:31.975 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:31.975 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:31.975 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:31.975 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.975 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.975 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.237 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yzk1ZDgyYTljYjhmNDZiZTMzMjQwYzU3NjE4MTIwYjY5ZDE0OWJhZGViZDdjZWQxNjY4ZWI5NWJjMjE1ZmY1Zbvp+Ws=: 00:21:32.237 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:Yzk1ZDgyYTljYjhmNDZiZTMzMjQwYzU3NjE4MTIwYjY5ZDE0OWJhZGViZDdjZWQxNjY4ZWI5NWJjMjE1ZmY1Zbvp+Ws=: 00:21:32.808 15:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.808 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.808 15:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:32.808 15:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.808 15:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.808 15:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.808 15:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:32.808 15:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:32.808 15:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:32.808 15:39:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:33.070 15:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:21:33.070 15:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:33.070 15:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:33.070 15:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:33.070 15:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:33.070 15:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.070 15:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.070 15:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.070 15:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.070 15:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.070 15:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.070 15:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.070 15:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.332 00:21:33.332 15:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:33.332 15:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:33.332 15:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.593 15:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.593 15:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.593 15:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.593 15:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.593 15:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.593 15:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:33.593 { 00:21:33.593 "cntlid": 57, 00:21:33.593 "qid": 0, 00:21:33.593 "state": "enabled", 00:21:33.593 "thread": "nvmf_tgt_poll_group_000", 00:21:33.593 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:33.593 "listen_address": { 00:21:33.593 "trtype": "TCP", 00:21:33.593 "adrfam": "IPv4", 00:21:33.593 "traddr": "10.0.0.2", 00:21:33.593 
"trsvcid": "4420" 00:21:33.593 }, 00:21:33.593 "peer_address": { 00:21:33.593 "trtype": "TCP", 00:21:33.593 "adrfam": "IPv4", 00:21:33.593 "traddr": "10.0.0.1", 00:21:33.593 "trsvcid": "42264" 00:21:33.593 }, 00:21:33.593 "auth": { 00:21:33.593 "state": "completed", 00:21:33.593 "digest": "sha384", 00:21:33.593 "dhgroup": "ffdhe2048" 00:21:33.593 } 00:21:33.593 } 00:21:33.593 ]' 00:21:33.593 15:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:33.593 15:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:33.593 15:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:33.593 15:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:33.593 15:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:33.593 15:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.593 15:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.593 15:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.855 15:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTRiY2MxNmY1M2UwYmI4N2ZkODk5ODk3Mzg0ZmU0ZjFhNWIyNjJlMTkzMGQ4NjA06mlI8w==: --dhchap-ctrl-secret DHHC-1:03:OGI4NWNhNThlN2YwZjc0OTk1MjU2MmU4NmU3ODMwOWI0M2ZkZTU0Y2U4ZTkyY2ZjZDkyYWVlMGFkNmViN2FkNqls0BI=: 00:21:33.855 15:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NTRiY2MxNmY1M2UwYmI4N2ZkODk5ODk3Mzg0ZmU0ZjFhNWIyNjJlMTkzMGQ4NjA06mlI8w==: --dhchap-ctrl-secret DHHC-1:03:OGI4NWNhNThlN2YwZjc0OTk1MjU2MmU4NmU3ODMwOWI0M2ZkZTU0Y2U4ZTkyY2ZjZDkyYWVlMGFkNmViN2FkNqls0BI=: 00:21:34.427 15:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.427 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.427 15:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:34.427 15:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.427 15:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.427 15:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.427 15:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:34.427 15:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:34.427 15:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:34.688 15:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:21:34.688 15:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:34.688 15:39:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:34.688 15:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:34.688 15:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:34.688 15:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.688 15:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:34.688 15:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.688 15:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.688 15:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.688 15:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:34.688 15:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:34.688 15:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:34.949 00:21:34.949 15:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:34.949 15:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:34.949 15:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.209 15:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.209 15:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.209 15:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.209 15:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.209 15:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.209 15:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:35.209 { 00:21:35.209 "cntlid": 59, 00:21:35.209 "qid": 0, 00:21:35.209 "state": "enabled", 00:21:35.209 "thread": "nvmf_tgt_poll_group_000", 00:21:35.209 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:35.209 "listen_address": { 00:21:35.209 "trtype": "TCP", 00:21:35.209 "adrfam": "IPv4", 00:21:35.209 "traddr": "10.0.0.2", 00:21:35.209 "trsvcid": "4420" 00:21:35.209 }, 00:21:35.209 "peer_address": { 00:21:35.209 "trtype": "TCP", 00:21:35.209 "adrfam": "IPv4", 00:21:35.209 "traddr": "10.0.0.1", 00:21:35.209 "trsvcid": "42304" 00:21:35.209 }, 00:21:35.209 "auth": { 00:21:35.209 "state": "completed", 00:21:35.209 "digest": "sha384", 00:21:35.209 "dhgroup": "ffdhe2048" 00:21:35.209 } 00:21:35.209 } 00:21:35.209 ]' 00:21:35.209 15:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:35.209 15:39:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:35.209 15:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:35.209 15:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:35.209 15:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:35.209 15:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.209 15:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.209 15:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.472 15:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWIwMDU0NDg3MTU0ZmRkMGFmYzE2YzJmNjFlYmYxOWYg1wvC: --dhchap-ctrl-secret DHHC-1:02:OGU1MzFiNWNkZGI1YWNhYzU4YTQ0OWZlN2U0ZDE1M2ZlYjI4MmVlMjNiNjkwYjcwmEeucA==: 00:21:35.472 15:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MWIwMDU0NDg3MTU0ZmRkMGFmYzE2YzJmNjFlYmYxOWYg1wvC: --dhchap-ctrl-secret DHHC-1:02:OGU1MzFiNWNkZGI1YWNhYzU4YTQ0OWZlN2U0ZDE1M2ZlYjI4MmVlMjNiNjkwYjcwmEeucA==: 00:21:36.045 15:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.045 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.046 15:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:36.046 15:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.046 15:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.046 15:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.046 15:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:36.046 15:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:36.046 15:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:36.307 15:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:21:36.307 15:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:36.307 15:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:36.307 15:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:36.307 15:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:36.307 15:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.307 15:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:21:36.307 15:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.307 15:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.307 15:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.307 15:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:36.307 15:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:36.307 15:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:36.568 00:21:36.568 15:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:36.568 15:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:36.568 15:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.829 15:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.829 15:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.829 15:39:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.829 15:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.829 15:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.829 15:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:36.829 { 00:21:36.829 "cntlid": 61, 00:21:36.829 "qid": 0, 00:21:36.829 "state": "enabled", 00:21:36.829 "thread": "nvmf_tgt_poll_group_000", 00:21:36.829 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:36.829 "listen_address": { 00:21:36.829 "trtype": "TCP", 00:21:36.829 "adrfam": "IPv4", 00:21:36.829 "traddr": "10.0.0.2", 00:21:36.829 "trsvcid": "4420" 00:21:36.829 }, 00:21:36.829 "peer_address": { 00:21:36.829 "trtype": "TCP", 00:21:36.829 "adrfam": "IPv4", 00:21:36.829 "traddr": "10.0.0.1", 00:21:36.829 "trsvcid": "42342" 00:21:36.829 }, 00:21:36.829 "auth": { 00:21:36.829 "state": "completed", 00:21:36.829 "digest": "sha384", 00:21:36.829 "dhgroup": "ffdhe2048" 00:21:36.829 } 00:21:36.829 } 00:21:36.829 ]' 00:21:36.829 15:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:36.829 15:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:36.829 15:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:36.829 15:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:36.829 15:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:36.829 15:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.829 15:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.829 15:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.089 15:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODM3MWExYzBhZmNhMDY4YmIxY2E1OTAwZWZkOTQyZjVjZTcxYWE4OThmYWIwZWM3ve4lJA==: --dhchap-ctrl-secret DHHC-1:01:ZGU4YzU2NzllYTZlNzM2N2Y5NWZjMjk3ZGY3MDU2YjAlxWiU: 00:21:37.089 15:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ODM3MWExYzBhZmNhMDY4YmIxY2E1OTAwZWZkOTQyZjVjZTcxYWE4OThmYWIwZWM3ve4lJA==: --dhchap-ctrl-secret DHHC-1:01:ZGU4YzU2NzllYTZlNzM2N2Y5NWZjMjk3ZGY3MDU2YjAlxWiU: 00:21:37.660 15:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.660 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.660 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:37.660 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.660 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.660 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.660 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:37.660 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:37.660 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:37.920 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:21:37.920 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:37.920 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:37.920 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:37.920 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:37.920 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.920 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:37.920 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.920 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.920 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.920 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:37.920 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:37.920 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:38.180 00:21:38.180 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:38.180 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:38.180 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.180 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.180 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.181 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.181 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.440 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.440 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:38.440 { 00:21:38.440 "cntlid": 63, 00:21:38.440 "qid": 0, 00:21:38.440 "state": "enabled", 00:21:38.440 "thread": "nvmf_tgt_poll_group_000", 00:21:38.440 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:38.440 "listen_address": { 00:21:38.440 "trtype": "TCP", 00:21:38.440 "adrfam": 
"IPv4", 00:21:38.440 "traddr": "10.0.0.2", 00:21:38.440 "trsvcid": "4420" 00:21:38.440 }, 00:21:38.440 "peer_address": { 00:21:38.440 "trtype": "TCP", 00:21:38.440 "adrfam": "IPv4", 00:21:38.440 "traddr": "10.0.0.1", 00:21:38.440 "trsvcid": "42370" 00:21:38.440 }, 00:21:38.440 "auth": { 00:21:38.440 "state": "completed", 00:21:38.441 "digest": "sha384", 00:21:38.441 "dhgroup": "ffdhe2048" 00:21:38.441 } 00:21:38.441 } 00:21:38.441 ]' 00:21:38.441 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:38.441 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:38.441 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:38.441 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:38.441 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:38.441 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.441 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.441 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.701 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yzk1ZDgyYTljYjhmNDZiZTMzMjQwYzU3NjE4MTIwYjY5ZDE0OWJhZGViZDdjZWQxNjY4ZWI5NWJjMjE1ZmY1Zbvp+Ws=: 00:21:38.701 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:Yzk1ZDgyYTljYjhmNDZiZTMzMjQwYzU3NjE4MTIwYjY5ZDE0OWJhZGViZDdjZWQxNjY4ZWI5NWJjMjE1ZmY1Zbvp+Ws=: 00:21:39.272 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.272 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.272 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:39.272 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.272 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.272 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.272 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:39.272 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:39.272 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:39.272 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:39.533 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:21:39.533 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:39.533 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:39.533 
15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:39.533 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:39.533 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.533 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:39.533 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.533 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.533 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.533 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:39.533 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:39.533 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:39.795 00:21:39.795 15:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:39.795 15:39:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:39.795 15:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.795 15:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.795 15:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.795 15:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.795 15:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.795 15:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.795 15:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:39.795 { 00:21:39.795 "cntlid": 65, 00:21:39.795 "qid": 0, 00:21:39.795 "state": "enabled", 00:21:39.795 "thread": "nvmf_tgt_poll_group_000", 00:21:39.795 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:39.795 "listen_address": { 00:21:39.795 "trtype": "TCP", 00:21:39.795 "adrfam": "IPv4", 00:21:39.795 "traddr": "10.0.0.2", 00:21:39.795 "trsvcid": "4420" 00:21:39.795 }, 00:21:39.795 "peer_address": { 00:21:39.795 "trtype": "TCP", 00:21:39.795 "adrfam": "IPv4", 00:21:39.795 "traddr": "10.0.0.1", 00:21:39.795 "trsvcid": "52356" 00:21:39.795 }, 00:21:39.795 "auth": { 00:21:39.795 "state": "completed", 00:21:39.795 "digest": "sha384", 00:21:39.795 "dhgroup": "ffdhe3072" 00:21:39.795 } 00:21:39.795 } 00:21:39.795 ]' 00:21:39.795 15:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:40.055 15:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:21:40.055 15:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:40.055 15:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:40.055 15:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:40.055 15:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.055 15:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.055 15:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.316 15:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTRiY2MxNmY1M2UwYmI4N2ZkODk5ODk3Mzg0ZmU0ZjFhNWIyNjJlMTkzMGQ4NjA06mlI8w==: --dhchap-ctrl-secret DHHC-1:03:OGI4NWNhNThlN2YwZjc0OTk1MjU2MmU4NmU3ODMwOWI0M2ZkZTU0Y2U4ZTkyY2ZjZDkyYWVlMGFkNmViN2FkNqls0BI=: 00:21:40.316 15:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NTRiY2MxNmY1M2UwYmI4N2ZkODk5ODk3Mzg0ZmU0ZjFhNWIyNjJlMTkzMGQ4NjA06mlI8w==: --dhchap-ctrl-secret DHHC-1:03:OGI4NWNhNThlN2YwZjc0OTk1MjU2MmU4NmU3ODMwOWI0M2ZkZTU0Y2U4ZTkyY2ZjZDkyYWVlMGFkNmViN2FkNqls0BI=: 00:21:40.889 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.889 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.889 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:40.889 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.889 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.889 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.889 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:40.889 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:40.889 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:41.151 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:21:41.151 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:41.151 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:41.151 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:41.151 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:41.151 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.151 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:21:41.151 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.151 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.151 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.151 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:41.151 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:41.151 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:41.412 00:21:41.412 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:41.412 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:41.412 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.412 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.412 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.412 15:39:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.412 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.673 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.673 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:41.673 { 00:21:41.673 "cntlid": 67, 00:21:41.673 "qid": 0, 00:21:41.673 "state": "enabled", 00:21:41.673 "thread": "nvmf_tgt_poll_group_000", 00:21:41.673 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:41.673 "listen_address": { 00:21:41.673 "trtype": "TCP", 00:21:41.673 "adrfam": "IPv4", 00:21:41.673 "traddr": "10.0.0.2", 00:21:41.673 "trsvcid": "4420" 00:21:41.673 }, 00:21:41.673 "peer_address": { 00:21:41.673 "trtype": "TCP", 00:21:41.673 "adrfam": "IPv4", 00:21:41.673 "traddr": "10.0.0.1", 00:21:41.673 "trsvcid": "52378" 00:21:41.673 }, 00:21:41.673 "auth": { 00:21:41.673 "state": "completed", 00:21:41.673 "digest": "sha384", 00:21:41.673 "dhgroup": "ffdhe3072" 00:21:41.673 } 00:21:41.673 } 00:21:41.673 ]' 00:21:41.673 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:41.673 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:41.673 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:41.673 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:41.673 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:41.673 15:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.673 15:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.673 15:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.935 15:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWIwMDU0NDg3MTU0ZmRkMGFmYzE2YzJmNjFlYmYxOWYg1wvC: --dhchap-ctrl-secret DHHC-1:02:OGU1MzFiNWNkZGI1YWNhYzU4YTQ0OWZlN2U0ZDE1M2ZlYjI4MmVlMjNiNjkwYjcwmEeucA==: 00:21:41.935 15:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MWIwMDU0NDg3MTU0ZmRkMGFmYzE2YzJmNjFlYmYxOWYg1wvC: --dhchap-ctrl-secret DHHC-1:02:OGU1MzFiNWNkZGI1YWNhYzU4YTQ0OWZlN2U0ZDE1M2ZlYjI4MmVlMjNiNjkwYjcwmEeucA==: 00:21:42.507 15:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.507 15:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:42.507 15:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.507 15:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.507 15:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.507 15:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:42.507 15:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:42.507 15:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:42.770 15:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:21:42.770 15:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:42.770 15:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:42.770 15:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:42.770 15:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:42.770 15:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.770 15:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:42.770 15:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.770 15:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.770 15:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.770 15:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:42.770 15:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:42.770 15:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:43.031 00:21:43.031 15:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:43.031 15:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:43.031 15:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.031 15:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.031 15:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.031 15:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.031 15:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.031 15:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.031 15:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:43.031 { 00:21:43.031 "cntlid": 69, 00:21:43.031 "qid": 0, 00:21:43.031 "state": "enabled", 00:21:43.031 "thread": "nvmf_tgt_poll_group_000", 00:21:43.031 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:43.031 
"listen_address": { 00:21:43.031 "trtype": "TCP", 00:21:43.031 "adrfam": "IPv4", 00:21:43.031 "traddr": "10.0.0.2", 00:21:43.031 "trsvcid": "4420" 00:21:43.031 }, 00:21:43.031 "peer_address": { 00:21:43.031 "trtype": "TCP", 00:21:43.031 "adrfam": "IPv4", 00:21:43.031 "traddr": "10.0.0.1", 00:21:43.031 "trsvcid": "52392" 00:21:43.031 }, 00:21:43.031 "auth": { 00:21:43.031 "state": "completed", 00:21:43.031 "digest": "sha384", 00:21:43.031 "dhgroup": "ffdhe3072" 00:21:43.031 } 00:21:43.031 } 00:21:43.031 ]' 00:21:43.031 15:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:43.291 15:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:43.291 15:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:43.291 15:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:43.291 15:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:43.291 15:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.291 15:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.291 15:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.552 15:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODM3MWExYzBhZmNhMDY4YmIxY2E1OTAwZWZkOTQyZjVjZTcxYWE4OThmYWIwZWM3ve4lJA==: --dhchap-ctrl-secret DHHC-1:01:ZGU4YzU2NzllYTZlNzM2N2Y5NWZjMjk3ZGY3MDU2YjAlxWiU: 00:21:43.552 15:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ODM3MWExYzBhZmNhMDY4YmIxY2E1OTAwZWZkOTQyZjVjZTcxYWE4OThmYWIwZWM3ve4lJA==: --dhchap-ctrl-secret DHHC-1:01:ZGU4YzU2NzllYTZlNzM2N2Y5NWZjMjk3ZGY3MDU2YjAlxWiU: 00:21:44.123 15:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.123 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.123 15:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:44.123 15:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.123 15:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.123 15:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.123 15:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:44.123 15:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:44.123 15:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:44.384 15:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:21:44.384 15:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:44.384 15:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:21:44.384 15:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:44.384 15:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:44.384 15:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.384 15:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:44.384 15:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.384 15:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.384 15:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.384 15:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:44.384 15:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:44.384 15:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:44.645 00:21:44.645 15:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:44.645 15:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:21:44.645 15:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.645 15:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.645 15:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.645 15:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.645 15:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.645 15:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.645 15:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:44.645 { 00:21:44.645 "cntlid": 71, 00:21:44.645 "qid": 0, 00:21:44.645 "state": "enabled", 00:21:44.645 "thread": "nvmf_tgt_poll_group_000", 00:21:44.645 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:44.645 "listen_address": { 00:21:44.645 "trtype": "TCP", 00:21:44.645 "adrfam": "IPv4", 00:21:44.645 "traddr": "10.0.0.2", 00:21:44.645 "trsvcid": "4420" 00:21:44.645 }, 00:21:44.645 "peer_address": { 00:21:44.645 "trtype": "TCP", 00:21:44.645 "adrfam": "IPv4", 00:21:44.645 "traddr": "10.0.0.1", 00:21:44.645 "trsvcid": "52404" 00:21:44.645 }, 00:21:44.645 "auth": { 00:21:44.645 "state": "completed", 00:21:44.645 "digest": "sha384", 00:21:44.645 "dhgroup": "ffdhe3072" 00:21:44.645 } 00:21:44.645 } 00:21:44.645 ]' 00:21:44.646 15:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:44.906 15:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:44.906 15:39:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:44.906 15:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:44.906 15:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:44.906 15:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.906 15:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.906 15:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.166 15:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yzk1ZDgyYTljYjhmNDZiZTMzMjQwYzU3NjE4MTIwYjY5ZDE0OWJhZGViZDdjZWQxNjY4ZWI5NWJjMjE1ZmY1Zbvp+Ws=: 00:21:45.166 15:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:Yzk1ZDgyYTljYjhmNDZiZTMzMjQwYzU3NjE4MTIwYjY5ZDE0OWJhZGViZDdjZWQxNjY4ZWI5NWJjMjE1ZmY1Zbvp+Ws=: 00:21:45.737 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:45.737 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:45.737 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:45.737 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:21:45.737 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.737 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.737 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:45.737 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:45.737 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:45.737 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:45.997 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:21:45.997 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:45.997 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:45.997 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:45.997 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:45.997 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.997 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:45.997 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:21:45.997 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.997 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.997 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:45.997 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:45.998 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:46.257 00:21:46.257 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:46.257 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:46.257 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.257 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.258 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.258 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.258 15:39:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.518 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.518 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:46.518 { 00:21:46.518 "cntlid": 73, 00:21:46.518 "qid": 0, 00:21:46.518 "state": "enabled", 00:21:46.518 "thread": "nvmf_tgt_poll_group_000", 00:21:46.518 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:46.518 "listen_address": { 00:21:46.518 "trtype": "TCP", 00:21:46.518 "adrfam": "IPv4", 00:21:46.518 "traddr": "10.0.0.2", 00:21:46.518 "trsvcid": "4420" 00:21:46.518 }, 00:21:46.518 "peer_address": { 00:21:46.518 "trtype": "TCP", 00:21:46.518 "adrfam": "IPv4", 00:21:46.518 "traddr": "10.0.0.1", 00:21:46.518 "trsvcid": "52422" 00:21:46.518 }, 00:21:46.518 "auth": { 00:21:46.518 "state": "completed", 00:21:46.518 "digest": "sha384", 00:21:46.518 "dhgroup": "ffdhe4096" 00:21:46.518 } 00:21:46.518 } 00:21:46.518 ]' 00:21:46.518 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:46.518 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:46.518 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:46.518 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:46.518 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:46.518 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.518 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.518 15:39:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:46.778 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTRiY2MxNmY1M2UwYmI4N2ZkODk5ODk3Mzg0ZmU0ZjFhNWIyNjJlMTkzMGQ4NjA06mlI8w==: --dhchap-ctrl-secret DHHC-1:03:OGI4NWNhNThlN2YwZjc0OTk1MjU2MmU4NmU3ODMwOWI0M2ZkZTU0Y2U4ZTkyY2ZjZDkyYWVlMGFkNmViN2FkNqls0BI=: 00:21:46.778 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NTRiY2MxNmY1M2UwYmI4N2ZkODk5ODk3Mzg0ZmU0ZjFhNWIyNjJlMTkzMGQ4NjA06mlI8w==: --dhchap-ctrl-secret DHHC-1:03:OGI4NWNhNThlN2YwZjc0OTk1MjU2MmU4NmU3ODMwOWI0M2ZkZTU0Y2U4ZTkyY2ZjZDkyYWVlMGFkNmViN2FkNqls0BI=: 00:21:47.349 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:47.349 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:47.349 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:47.349 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.349 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.349 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.349 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:47.349 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:47.349 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:47.609 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:21:47.609 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:47.609 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:47.609 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:47.609 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:47.609 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:47.609 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:47.609 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.609 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.609 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.609 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:47.609 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:47.609 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:47.869 00:21:47.869 15:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:47.869 15:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:47.869 15:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.129 15:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.129 15:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:48.129 15:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.129 15:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.129 15:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.129 15:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:48.129 { 00:21:48.129 "cntlid": 75, 00:21:48.129 "qid": 0, 00:21:48.129 "state": "enabled", 00:21:48.129 "thread": "nvmf_tgt_poll_group_000", 00:21:48.129 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:48.129 
"listen_address": { 00:21:48.129 "trtype": "TCP", 00:21:48.129 "adrfam": "IPv4", 00:21:48.129 "traddr": "10.0.0.2", 00:21:48.129 "trsvcid": "4420" 00:21:48.129 }, 00:21:48.129 "peer_address": { 00:21:48.129 "trtype": "TCP", 00:21:48.129 "adrfam": "IPv4", 00:21:48.129 "traddr": "10.0.0.1", 00:21:48.129 "trsvcid": "52446" 00:21:48.129 }, 00:21:48.129 "auth": { 00:21:48.129 "state": "completed", 00:21:48.129 "digest": "sha384", 00:21:48.129 "dhgroup": "ffdhe4096" 00:21:48.129 } 00:21:48.129 } 00:21:48.129 ]' 00:21:48.129 15:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:48.129 15:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:48.129 15:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:48.129 15:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:48.129 15:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:48.129 15:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:48.129 15:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:48.129 15:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.390 15:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWIwMDU0NDg3MTU0ZmRkMGFmYzE2YzJmNjFlYmYxOWYg1wvC: --dhchap-ctrl-secret DHHC-1:02:OGU1MzFiNWNkZGI1YWNhYzU4YTQ0OWZlN2U0ZDE1M2ZlYjI4MmVlMjNiNjkwYjcwmEeucA==: 00:21:48.390 15:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MWIwMDU0NDg3MTU0ZmRkMGFmYzE2YzJmNjFlYmYxOWYg1wvC: --dhchap-ctrl-secret DHHC-1:02:OGU1MzFiNWNkZGI1YWNhYzU4YTQ0OWZlN2U0ZDE1M2ZlYjI4MmVlMjNiNjkwYjcwmEeucA==: 00:21:48.961 15:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.961 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.961 15:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:48.961 15:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.961 15:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.221 15:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.221 15:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:49.221 15:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:49.221 15:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:49.221 15:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:21:49.221 15:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:49.221 15:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:21:49.221 15:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:49.221 15:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:49.221 15:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.222 15:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:49.222 15:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.222 15:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.222 15:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.222 15:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:49.222 15:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:49.222 15:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:49.482 00:21:49.482 15:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:21:49.482 15:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:49.482 15:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.742 15:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.742 15:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.742 15:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.742 15:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.742 15:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.742 15:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:49.742 { 00:21:49.742 "cntlid": 77, 00:21:49.742 "qid": 0, 00:21:49.742 "state": "enabled", 00:21:49.742 "thread": "nvmf_tgt_poll_group_000", 00:21:49.742 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:49.742 "listen_address": { 00:21:49.742 "trtype": "TCP", 00:21:49.742 "adrfam": "IPv4", 00:21:49.742 "traddr": "10.0.0.2", 00:21:49.742 "trsvcid": "4420" 00:21:49.742 }, 00:21:49.742 "peer_address": { 00:21:49.742 "trtype": "TCP", 00:21:49.742 "adrfam": "IPv4", 00:21:49.742 "traddr": "10.0.0.1", 00:21:49.742 "trsvcid": "52478" 00:21:49.742 }, 00:21:49.742 "auth": { 00:21:49.742 "state": "completed", 00:21:49.742 "digest": "sha384", 00:21:49.742 "dhgroup": "ffdhe4096" 00:21:49.742 } 00:21:49.742 } 00:21:49.742 ]' 00:21:49.742 15:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:49.742 15:39:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:49.742 15:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:49.742 15:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:49.742 15:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:49.742 15:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.742 15:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.742 15:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.002 15:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODM3MWExYzBhZmNhMDY4YmIxY2E1OTAwZWZkOTQyZjVjZTcxYWE4OThmYWIwZWM3ve4lJA==: --dhchap-ctrl-secret DHHC-1:01:ZGU4YzU2NzllYTZlNzM2N2Y5NWZjMjk3ZGY3MDU2YjAlxWiU: 00:21:50.002 15:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ODM3MWExYzBhZmNhMDY4YmIxY2E1OTAwZWZkOTQyZjVjZTcxYWE4OThmYWIwZWM3ve4lJA==: --dhchap-ctrl-secret DHHC-1:01:ZGU4YzU2NzllYTZlNzM2N2Y5NWZjMjk3ZGY3MDU2YjAlxWiU: 00:21:50.583 15:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.583 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.843 15:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:50.843 15:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.843 15:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.843 15:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.843 15:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:50.843 15:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:50.843 15:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:50.843 15:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:21:50.843 15:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:50.843 15:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:50.843 15:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:50.843 15:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:50.843 15:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:50.843 15:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:50.843 15:39:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.843 15:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.843 15:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.843 15:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:50.843 15:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:50.843 15:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:51.103 00:21:51.103 15:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:51.103 15:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:51.103 15:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:51.363 15:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.363 15:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:51.363 15:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.363 15:39:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.363 15:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.363 15:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:51.363 { 00:21:51.363 "cntlid": 79, 00:21:51.363 "qid": 0, 00:21:51.363 "state": "enabled", 00:21:51.363 "thread": "nvmf_tgt_poll_group_000", 00:21:51.363 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:51.363 "listen_address": { 00:21:51.363 "trtype": "TCP", 00:21:51.363 "adrfam": "IPv4", 00:21:51.363 "traddr": "10.0.0.2", 00:21:51.363 "trsvcid": "4420" 00:21:51.363 }, 00:21:51.363 "peer_address": { 00:21:51.363 "trtype": "TCP", 00:21:51.363 "adrfam": "IPv4", 00:21:51.363 "traddr": "10.0.0.1", 00:21:51.363 "trsvcid": "56478" 00:21:51.363 }, 00:21:51.363 "auth": { 00:21:51.363 "state": "completed", 00:21:51.363 "digest": "sha384", 00:21:51.363 "dhgroup": "ffdhe4096" 00:21:51.363 } 00:21:51.363 } 00:21:51.363 ]' 00:21:51.363 15:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:51.363 15:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:51.363 15:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:51.363 15:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:51.363 15:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:51.624 15:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:51.624 15:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:51.624 15:39:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.624 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yzk1ZDgyYTljYjhmNDZiZTMzMjQwYzU3NjE4MTIwYjY5ZDE0OWJhZGViZDdjZWQxNjY4ZWI5NWJjMjE1ZmY1Zbvp+Ws=: 00:21:51.624 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:Yzk1ZDgyYTljYjhmNDZiZTMzMjQwYzU3NjE4MTIwYjY5ZDE0OWJhZGViZDdjZWQxNjY4ZWI5NWJjMjE1ZmY1Zbvp+Ws=: 00:21:52.196 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:52.457 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:52.457 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:52.457 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.457 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.457 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.457 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:52.457 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:52.457 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:21:52.457 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:52.457 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:21:52.457 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:52.457 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:52.457 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:52.457 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:52.457 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:52.457 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:52.457 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.457 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.457 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.457 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:52.457 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:52.457 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:53.030 00:21:53.030 15:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:53.030 15:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:53.030 15:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.030 15:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.030 15:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:53.030 15:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.030 15:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.030 15:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.030 15:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:53.030 { 00:21:53.030 "cntlid": 81, 00:21:53.030 "qid": 0, 00:21:53.030 "state": "enabled", 00:21:53.030 "thread": "nvmf_tgt_poll_group_000", 00:21:53.030 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:53.030 "listen_address": { 
00:21:53.030 "trtype": "TCP", 00:21:53.030 "adrfam": "IPv4", 00:21:53.030 "traddr": "10.0.0.2", 00:21:53.030 "trsvcid": "4420" 00:21:53.030 }, 00:21:53.030 "peer_address": { 00:21:53.030 "trtype": "TCP", 00:21:53.030 "adrfam": "IPv4", 00:21:53.030 "traddr": "10.0.0.1", 00:21:53.030 "trsvcid": "56496" 00:21:53.030 }, 00:21:53.030 "auth": { 00:21:53.030 "state": "completed", 00:21:53.030 "digest": "sha384", 00:21:53.030 "dhgroup": "ffdhe6144" 00:21:53.030 } 00:21:53.030 } 00:21:53.030 ]' 00:21:53.030 15:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:53.030 15:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:53.030 15:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:53.291 15:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:53.291 15:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:53.291 15:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:53.291 15:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:53.291 15:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.291 15:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTRiY2MxNmY1M2UwYmI4N2ZkODk5ODk3Mzg0ZmU0ZjFhNWIyNjJlMTkzMGQ4NjA06mlI8w==: --dhchap-ctrl-secret DHHC-1:03:OGI4NWNhNThlN2YwZjc0OTk1MjU2MmU4NmU3ODMwOWI0M2ZkZTU0Y2U4ZTkyY2ZjZDkyYWVlMGFkNmViN2FkNqls0BI=: 00:21:53.291 15:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NTRiY2MxNmY1M2UwYmI4N2ZkODk5ODk3Mzg0ZmU0ZjFhNWIyNjJlMTkzMGQ4NjA06mlI8w==: --dhchap-ctrl-secret DHHC-1:03:OGI4NWNhNThlN2YwZjc0OTk1MjU2MmU4NmU3ODMwOWI0M2ZkZTU0Y2U4ZTkyY2ZjZDkyYWVlMGFkNmViN2FkNqls0BI=: 00:21:54.232 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.232 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.232 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:54.232 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.232 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.232 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.232 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:54.232 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:54.232 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:54.232 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:21:54.232 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:21:54.232 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:54.232 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:54.232 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:54.232 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:54.232 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:54.232 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.232 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.232 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.232 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:54.232 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:54.232 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:54.493 00:21:54.493 15:39:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:54.493 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:54.493 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.754 15:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.754 15:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.754 15:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.754 15:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.754 15:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.754 15:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:54.754 { 00:21:54.754 "cntlid": 83, 00:21:54.754 "qid": 0, 00:21:54.754 "state": "enabled", 00:21:54.754 "thread": "nvmf_tgt_poll_group_000", 00:21:54.754 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:54.754 "listen_address": { 00:21:54.754 "trtype": "TCP", 00:21:54.754 "adrfam": "IPv4", 00:21:54.754 "traddr": "10.0.0.2", 00:21:54.754 "trsvcid": "4420" 00:21:54.754 }, 00:21:54.754 "peer_address": { 00:21:54.754 "trtype": "TCP", 00:21:54.754 "adrfam": "IPv4", 00:21:54.754 "traddr": "10.0.0.1", 00:21:54.754 "trsvcid": "56536" 00:21:54.754 }, 00:21:54.754 "auth": { 00:21:54.754 "state": "completed", 00:21:54.754 "digest": "sha384", 00:21:54.754 "dhgroup": "ffdhe6144" 00:21:54.754 } 00:21:54.754 } 00:21:54.754 ]' 00:21:54.754 15:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:21:54.754 15:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:54.754 15:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:54.754 15:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:54.754 15:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:55.015 15:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.015 15:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.015 15:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.015 15:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWIwMDU0NDg3MTU0ZmRkMGFmYzE2YzJmNjFlYmYxOWYg1wvC: --dhchap-ctrl-secret DHHC-1:02:OGU1MzFiNWNkZGI1YWNhYzU4YTQ0OWZlN2U0ZDE1M2ZlYjI4MmVlMjNiNjkwYjcwmEeucA==: 00:21:55.015 15:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MWIwMDU0NDg3MTU0ZmRkMGFmYzE2YzJmNjFlYmYxOWYg1wvC: --dhchap-ctrl-secret DHHC-1:02:OGU1MzFiNWNkZGI1YWNhYzU4YTQ0OWZlN2U0ZDE1M2ZlYjI4MmVlMjNiNjkwYjcwmEeucA==: 00:21:55.958 15:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:55.958 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:55.958 15:39:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:55.958 15:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.958 15:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.958 15:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.958 15:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:55.958 15:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:55.958 15:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:55.958 15:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:21:55.958 15:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:55.958 15:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:55.958 15:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:55.958 15:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:55.958 15:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:55.958 15:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:55.958 15:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.958 15:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.958 15:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.958 15:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:55.958 15:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:55.958 15:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:56.219 00:21:56.219 15:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:56.219 15:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:56.219 15:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.479 15:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.479 15:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:56.479 15:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.479 15:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.479 15:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.479 15:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:56.479 { 00:21:56.479 "cntlid": 85, 00:21:56.479 "qid": 0, 00:21:56.479 "state": "enabled", 00:21:56.479 "thread": "nvmf_tgt_poll_group_000", 00:21:56.480 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:56.480 "listen_address": { 00:21:56.480 "trtype": "TCP", 00:21:56.480 "adrfam": "IPv4", 00:21:56.480 "traddr": "10.0.0.2", 00:21:56.480 "trsvcid": "4420" 00:21:56.480 }, 00:21:56.480 "peer_address": { 00:21:56.480 "trtype": "TCP", 00:21:56.480 "adrfam": "IPv4", 00:21:56.480 "traddr": "10.0.0.1", 00:21:56.480 "trsvcid": "56558" 00:21:56.480 }, 00:21:56.480 "auth": { 00:21:56.480 "state": "completed", 00:21:56.480 "digest": "sha384", 00:21:56.480 "dhgroup": "ffdhe6144" 00:21:56.480 } 00:21:56.480 } 00:21:56.480 ]' 00:21:56.480 15:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:56.480 15:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:56.480 15:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:56.480 15:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:56.480 15:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:56.480 15:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:21:56.480 15:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:56.480 15:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.740 15:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODM3MWExYzBhZmNhMDY4YmIxY2E1OTAwZWZkOTQyZjVjZTcxYWE4OThmYWIwZWM3ve4lJA==: --dhchap-ctrl-secret DHHC-1:01:ZGU4YzU2NzllYTZlNzM2N2Y5NWZjMjk3ZGY3MDU2YjAlxWiU: 00:21:56.740 15:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ODM3MWExYzBhZmNhMDY4YmIxY2E1OTAwZWZkOTQyZjVjZTcxYWE4OThmYWIwZWM3ve4lJA==: --dhchap-ctrl-secret DHHC-1:01:ZGU4YzU2NzllYTZlNzM2N2Y5NWZjMjk3ZGY3MDU2YjAlxWiU: 00:21:57.311 15:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:57.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:57.572 15:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:57.572 15:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.572 15:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.572 15:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.572 15:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:21:57.572 15:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:57.572 15:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:57.572 15:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:21:57.572 15:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:57.572 15:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:57.572 15:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:57.572 15:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:57.572 15:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.572 15:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:57.572 15:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.572 15:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.572 15:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.572 15:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:57.572 15:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:57.572 15:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:57.834 00:21:58.094 15:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:58.094 15:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:58.095 15:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.095 15:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.095 15:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:58.095 15:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.095 15:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.095 15:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.095 15:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:58.095 { 00:21:58.095 "cntlid": 87, 00:21:58.095 "qid": 0, 00:21:58.095 "state": "enabled", 00:21:58.095 "thread": "nvmf_tgt_poll_group_000", 00:21:58.095 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:58.095 "listen_address": { 00:21:58.095 "trtype": 
"TCP", 00:21:58.095 "adrfam": "IPv4", 00:21:58.095 "traddr": "10.0.0.2", 00:21:58.095 "trsvcid": "4420" 00:21:58.095 }, 00:21:58.095 "peer_address": { 00:21:58.095 "trtype": "TCP", 00:21:58.095 "adrfam": "IPv4", 00:21:58.095 "traddr": "10.0.0.1", 00:21:58.095 "trsvcid": "56580" 00:21:58.095 }, 00:21:58.095 "auth": { 00:21:58.095 "state": "completed", 00:21:58.095 "digest": "sha384", 00:21:58.095 "dhgroup": "ffdhe6144" 00:21:58.095 } 00:21:58.095 } 00:21:58.095 ]' 00:21:58.095 15:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:58.095 15:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:58.095 15:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:58.355 15:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:58.355 15:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:58.355 15:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:58.355 15:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:58.355 15:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:58.616 15:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yzk1ZDgyYTljYjhmNDZiZTMzMjQwYzU3NjE4MTIwYjY5ZDE0OWJhZGViZDdjZWQxNjY4ZWI5NWJjMjE1ZmY1Zbvp+Ws=: 00:21:58.616 15:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:Yzk1ZDgyYTljYjhmNDZiZTMzMjQwYzU3NjE4MTIwYjY5ZDE0OWJhZGViZDdjZWQxNjY4ZWI5NWJjMjE1ZmY1Zbvp+Ws=: 00:21:59.189 15:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:59.189 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:59.189 15:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:59.189 15:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.189 15:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.189 15:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.189 15:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:59.189 15:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:59.189 15:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:59.189 15:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:59.449 15:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:21:59.449 15:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:59.449 15:39:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:59.449 15:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:59.449 15:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:59.449 15:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:59.449 15:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:59.449 15:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.449 15:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.449 15:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.449 15:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:59.449 15:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:59.449 15:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:59.710 00:21:59.710 15:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:59.710 15:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:59.710 15:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.973 15:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.973 15:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.973 15:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.973 15:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.973 15:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.973 15:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:59.973 { 00:21:59.973 "cntlid": 89, 00:21:59.973 "qid": 0, 00:21:59.974 "state": "enabled", 00:21:59.974 "thread": "nvmf_tgt_poll_group_000", 00:21:59.974 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:59.974 "listen_address": { 00:21:59.974 "trtype": "TCP", 00:21:59.974 "adrfam": "IPv4", 00:21:59.974 "traddr": "10.0.0.2", 00:21:59.974 "trsvcid": "4420" 00:21:59.974 }, 00:21:59.974 "peer_address": { 00:21:59.974 "trtype": "TCP", 00:21:59.974 "adrfam": "IPv4", 00:21:59.974 "traddr": "10.0.0.1", 00:21:59.974 "trsvcid": "59382" 00:21:59.974 }, 00:21:59.974 "auth": { 00:21:59.974 "state": "completed", 00:21:59.974 "digest": "sha384", 00:21:59.974 "dhgroup": "ffdhe8192" 00:21:59.974 } 00:21:59.974 } 00:21:59.974 ]' 00:21:59.974 15:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:59.974 15:39:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:59.974 15:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:59.974 15:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:59.974 15:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:00.234 15:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:00.234 15:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:00.234 15:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:00.234 15:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTRiY2MxNmY1M2UwYmI4N2ZkODk5ODk3Mzg0ZmU0ZjFhNWIyNjJlMTkzMGQ4NjA06mlI8w==: --dhchap-ctrl-secret DHHC-1:03:OGI4NWNhNThlN2YwZjc0OTk1MjU2MmU4NmU3ODMwOWI0M2ZkZTU0Y2U4ZTkyY2ZjZDkyYWVlMGFkNmViN2FkNqls0BI=: 00:22:00.234 15:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NTRiY2MxNmY1M2UwYmI4N2ZkODk5ODk3Mzg0ZmU0ZjFhNWIyNjJlMTkzMGQ4NjA06mlI8w==: --dhchap-ctrl-secret DHHC-1:03:OGI4NWNhNThlN2YwZjc0OTk1MjU2MmU4NmU3ODMwOWI0M2ZkZTU0Y2U4ZTkyY2ZjZDkyYWVlMGFkNmViN2FkNqls0BI=: 00:22:01.177 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.177 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:22:01.177 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:01.177 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.177 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.177 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.177 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:01.177 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:01.177 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:01.177 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:22:01.177 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:01.177 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:01.177 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:01.177 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:01.177 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:01.177 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.177 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.177 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.177 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.177 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.177 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.177 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.749 00:22:01.749 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:01.749 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:01.749 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:01.749 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.749 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:01.749 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.749 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.749 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.749 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:01.749 { 00:22:01.749 "cntlid": 91, 00:22:01.749 "qid": 0, 00:22:01.749 "state": "enabled", 00:22:01.749 "thread": "nvmf_tgt_poll_group_000", 00:22:01.749 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:01.749 "listen_address": { 00:22:01.749 "trtype": "TCP", 00:22:01.749 "adrfam": "IPv4", 00:22:01.749 "traddr": "10.0.0.2", 00:22:01.749 "trsvcid": "4420" 00:22:01.749 }, 00:22:01.749 "peer_address": { 00:22:01.749 "trtype": "TCP", 00:22:01.749 "adrfam": "IPv4", 00:22:01.749 "traddr": "10.0.0.1", 00:22:01.749 "trsvcid": "59402" 00:22:01.749 }, 00:22:01.749 "auth": { 00:22:01.749 "state": "completed", 00:22:01.749 "digest": "sha384", 00:22:01.749 "dhgroup": "ffdhe8192" 00:22:01.749 } 00:22:01.749 } 00:22:01.749 ]' 00:22:01.749 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:02.010 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:02.010 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:02.010 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:02.010 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:02.010 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:22:02.010 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:02.010 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:02.270 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWIwMDU0NDg3MTU0ZmRkMGFmYzE2YzJmNjFlYmYxOWYg1wvC: --dhchap-ctrl-secret DHHC-1:02:OGU1MzFiNWNkZGI1YWNhYzU4YTQ0OWZlN2U0ZDE1M2ZlYjI4MmVlMjNiNjkwYjcwmEeucA==: 00:22:02.271 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MWIwMDU0NDg3MTU0ZmRkMGFmYzE2YzJmNjFlYmYxOWYg1wvC: --dhchap-ctrl-secret DHHC-1:02:OGU1MzFiNWNkZGI1YWNhYzU4YTQ0OWZlN2U0ZDE1M2ZlYjI4MmVlMjNiNjkwYjcwmEeucA==: 00:22:02.841 15:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:02.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:02.841 15:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:02.841 15:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.841 15:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.841 15:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.841 15:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:22:02.841 15:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:02.841 15:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:03.101 15:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:22:03.101 15:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:03.101 15:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:03.101 15:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:03.101 15:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:03.101 15:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:03.101 15:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:03.101 15:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.101 15:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.101 15:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.101 15:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:03.101 15:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:03.102 15:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:03.672 00:22:03.673 15:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:03.673 15:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:03.673 15:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.673 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.673 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:03.673 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.673 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.673 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.673 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:03.673 { 00:22:03.673 "cntlid": 93, 00:22:03.673 "qid": 0, 00:22:03.673 "state": "enabled", 00:22:03.673 "thread": "nvmf_tgt_poll_group_000", 00:22:03.673 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:03.673 "listen_address": { 00:22:03.673 "trtype": "TCP", 00:22:03.673 "adrfam": "IPv4", 00:22:03.673 "traddr": "10.0.0.2", 00:22:03.673 "trsvcid": "4420" 00:22:03.673 }, 00:22:03.673 "peer_address": { 00:22:03.673 "trtype": "TCP", 00:22:03.673 "adrfam": "IPv4", 00:22:03.673 "traddr": "10.0.0.1", 00:22:03.673 "trsvcid": "59430" 00:22:03.673 }, 00:22:03.673 "auth": { 00:22:03.673 "state": "completed", 00:22:03.673 "digest": "sha384", 00:22:03.673 "dhgroup": "ffdhe8192" 00:22:03.673 } 00:22:03.673 } 00:22:03.673 ]' 00:22:03.673 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:03.673 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:03.673 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:03.933 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:03.933 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:03.933 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:03.933 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:03.933 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:03.933 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODM3MWExYzBhZmNhMDY4YmIxY2E1OTAwZWZkOTQyZjVjZTcxYWE4OThmYWIwZWM3ve4lJA==: --dhchap-ctrl-secret DHHC-1:01:ZGU4YzU2NzllYTZlNzM2N2Y5NWZjMjk3ZGY3MDU2YjAlxWiU: 00:22:03.933 15:39:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ODM3MWExYzBhZmNhMDY4YmIxY2E1OTAwZWZkOTQyZjVjZTcxYWE4OThmYWIwZWM3ve4lJA==: --dhchap-ctrl-secret DHHC-1:01:ZGU4YzU2NzllYTZlNzM2N2Y5NWZjMjk3ZGY3MDU2YjAlxWiU: 00:22:04.873 15:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:04.873 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:04.873 15:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:04.873 15:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.873 15:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.873 15:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.873 15:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:04.873 15:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:04.873 15:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:04.873 15:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:22:04.873 15:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:22:04.873 15:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:04.873 15:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:04.873 15:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:04.873 15:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:04.873 15:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:04.873 15:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.873 15:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.873 15:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.873 15:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:04.873 15:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:04.873 15:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:05.445 00:22:05.445 15:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:22:05.445 15:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:05.445 15:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:05.445 15:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.445 15:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:05.445 15:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.445 15:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.445 15:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.445 15:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:05.445 { 00:22:05.445 "cntlid": 95, 00:22:05.445 "qid": 0, 00:22:05.445 "state": "enabled", 00:22:05.445 "thread": "nvmf_tgt_poll_group_000", 00:22:05.445 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:05.445 "listen_address": { 00:22:05.445 "trtype": "TCP", 00:22:05.445 "adrfam": "IPv4", 00:22:05.445 "traddr": "10.0.0.2", 00:22:05.445 "trsvcid": "4420" 00:22:05.445 }, 00:22:05.445 "peer_address": { 00:22:05.445 "trtype": "TCP", 00:22:05.445 "adrfam": "IPv4", 00:22:05.445 "traddr": "10.0.0.1", 00:22:05.445 "trsvcid": "59444" 00:22:05.445 }, 00:22:05.445 "auth": { 00:22:05.445 "state": "completed", 00:22:05.445 "digest": "sha384", 00:22:05.445 "dhgroup": "ffdhe8192" 00:22:05.445 } 00:22:05.445 } 00:22:05.445 ]' 00:22:05.445 15:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:05.705 15:39:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:05.705 15:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:05.705 15:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:05.705 15:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:05.705 15:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:05.705 15:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:05.705 15:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:05.966 15:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yzk1ZDgyYTljYjhmNDZiZTMzMjQwYzU3NjE4MTIwYjY5ZDE0OWJhZGViZDdjZWQxNjY4ZWI5NWJjMjE1ZmY1Zbvp+Ws=: 00:22:05.966 15:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:Yzk1ZDgyYTljYjhmNDZiZTMzMjQwYzU3NjE4MTIwYjY5ZDE0OWJhZGViZDdjZWQxNjY4ZWI5NWJjMjE1ZmY1Zbvp+Ws=: 00:22:06.538 15:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:06.538 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:06.538 15:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:06.538 15:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.538 15:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.538 15:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.538 15:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:22:06.538 15:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:06.538 15:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:06.538 15:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:06.538 15:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:06.799 15:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:22:06.799 15:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:06.799 15:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:06.799 15:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:06.799 15:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:06.799 15:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:06.799 15:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:06.799 15:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.799 15:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.799 15:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.799 15:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:06.799 15:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:06.799 15:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.060 00:22:07.060 15:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:07.060 15:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:07.060 15:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.321 15:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.321 15:39:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:07.321 15:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.321 15:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.321 15:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.322 15:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:07.322 { 00:22:07.322 "cntlid": 97, 00:22:07.322 "qid": 0, 00:22:07.322 "state": "enabled", 00:22:07.322 "thread": "nvmf_tgt_poll_group_000", 00:22:07.322 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:07.322 "listen_address": { 00:22:07.322 "trtype": "TCP", 00:22:07.322 "adrfam": "IPv4", 00:22:07.322 "traddr": "10.0.0.2", 00:22:07.322 "trsvcid": "4420" 00:22:07.322 }, 00:22:07.322 "peer_address": { 00:22:07.322 "trtype": "TCP", 00:22:07.322 "adrfam": "IPv4", 00:22:07.322 "traddr": "10.0.0.1", 00:22:07.322 "trsvcid": "59472" 00:22:07.322 }, 00:22:07.322 "auth": { 00:22:07.322 "state": "completed", 00:22:07.322 "digest": "sha512", 00:22:07.322 "dhgroup": "null" 00:22:07.322 } 00:22:07.322 } 00:22:07.322 ]' 00:22:07.322 15:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:07.322 15:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:07.322 15:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:07.322 15:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:07.322 15:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:07.322 15:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:07.322 15:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:07.322 15:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:07.582 15:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTRiY2MxNmY1M2UwYmI4N2ZkODk5ODk3Mzg0ZmU0ZjFhNWIyNjJlMTkzMGQ4NjA06mlI8w==: --dhchap-ctrl-secret DHHC-1:03:OGI4NWNhNThlN2YwZjc0OTk1MjU2MmU4NmU3ODMwOWI0M2ZkZTU0Y2U4ZTkyY2ZjZDkyYWVlMGFkNmViN2FkNqls0BI=:
00:22:07.582 15:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NTRiY2MxNmY1M2UwYmI4N2ZkODk5ODk3Mzg0ZmU0ZjFhNWIyNjJlMTkzMGQ4NjA06mlI8w==: --dhchap-ctrl-secret DHHC-1:03:OGI4NWNhNThlN2YwZjc0OTk1MjU2MmU4NmU3ODMwOWI0M2ZkZTU0Y2U4ZTkyY2ZjZDkyYWVlMGFkNmViN2FkNqls0BI=:
00:22:08.153 15:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:08.153 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:08.153 15:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:22:08.153 15:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:08.153 15:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:08.153 15:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:08.153 15:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:08.153 15:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:22:08.153 15:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:22:08.414 15:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1
00:22:08.414 15:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:08.414 15:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:22:08.414 15:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:22:08.414 15:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:22:08.414 15:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:08.414 15:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:08.414 15:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:08.414 15:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:08.414 15:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:08.414 15:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:08.414 15:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:08.414 15:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:08.675
00:22:08.675 15:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:08.675 15:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:08.675 15:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:08.936 15:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:08.936 15:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:08.936 15:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:08.936 15:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:08.936 15:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:08.936 15:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:08.936 {
00:22:08.936 "cntlid": 99,
00:22:08.936 "qid": 0,
00:22:08.936 "state": "enabled",
00:22:08.936 "thread": "nvmf_tgt_poll_group_000",
00:22:08.936 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:22:08.936 "listen_address": {
00:22:08.936 "trtype": "TCP",
00:22:08.936 "adrfam": "IPv4",
00:22:08.936 "traddr": "10.0.0.2",
00:22:08.936 "trsvcid": "4420"
00:22:08.936 },
00:22:08.936 "peer_address": {
00:22:08.936 "trtype": "TCP",
00:22:08.936 "adrfam": "IPv4",
00:22:08.936 "traddr": "10.0.0.1",
00:22:08.936 "trsvcid": "59510"
00:22:08.936 },
00:22:08.936 "auth": {
00:22:08.936 "state": "completed",
00:22:08.936 "digest": "sha512",
00:22:08.936 "dhgroup": "null"
00:22:08.936 }
00:22:08.936 }
00:22:08.936 ]'
00:22:08.936 15:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:08.936 15:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:22:08.936 15:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:08.936 15:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:22:08.936 15:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:08.936 15:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:08.936 15:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:08.936 15:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:09.196 15:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWIwMDU0NDg3MTU0ZmRkMGFmYzE2YzJmNjFlYmYxOWYg1wvC: --dhchap-ctrl-secret DHHC-1:02:OGU1MzFiNWNkZGI1YWNhYzU4YTQ0OWZlN2U0ZDE1M2ZlYjI4MmVlMjNiNjkwYjcwmEeucA==:
00:22:09.196 15:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MWIwMDU0NDg3MTU0ZmRkMGFmYzE2YzJmNjFlYmYxOWYg1wvC: --dhchap-ctrl-secret DHHC-1:02:OGU1MzFiNWNkZGI1YWNhYzU4YTQ0OWZlN2U0ZDE1M2ZlYjI4MmVlMjNiNjkwYjcwmEeucA==:
00:22:09.767 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:09.767 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:09.767 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:22:09.767 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:09.767 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:09.767 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:09.767 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:09.767 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:22:09.767 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:22:10.027 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2
00:22:10.027 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:10.027 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:22:10.027 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:22:10.027 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:22:10.027 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:10.027 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:10.028 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:10.028 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:10.028 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:10.028 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:10.028 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:10.028 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:10.289
00:22:10.289 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:10.289 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:10.289 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:10.550 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:10.550 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:10.550 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:10.550 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:10.550 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:10.550 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:10.550 {
00:22:10.550 "cntlid": 101,
00:22:10.550 "qid": 0,
00:22:10.550 "state": "enabled",
00:22:10.550 "thread": "nvmf_tgt_poll_group_000",
00:22:10.550 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:22:10.550 "listen_address": {
00:22:10.550 "trtype": "TCP",
00:22:10.550 "adrfam": "IPv4",
00:22:10.550 "traddr": "10.0.0.2",
00:22:10.550 "trsvcid": "4420"
00:22:10.550 },
00:22:10.550 "peer_address": {
00:22:10.550 "trtype": "TCP",
00:22:10.550 "adrfam": "IPv4",
00:22:10.550 "traddr": "10.0.0.1",
00:22:10.550 "trsvcid": "44712"
00:22:10.550 },
00:22:10.550 "auth": {
00:22:10.550 "state": "completed",
00:22:10.550 "digest": "sha512",
00:22:10.550 "dhgroup": "null"
00:22:10.550 }
00:22:10.550 }
00:22:10.550 ]'
00:22:10.550 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:10.550 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:22:10.550 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:10.550 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:22:10.550 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:10.550 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:10.550 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:10.550 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:10.811 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODM3MWExYzBhZmNhMDY4YmIxY2E1OTAwZWZkOTQyZjVjZTcxYWE4OThmYWIwZWM3ve4lJA==: --dhchap-ctrl-secret DHHC-1:01:ZGU4YzU2NzllYTZlNzM2N2Y5NWZjMjk3ZGY3MDU2YjAlxWiU:
00:22:10.811 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ODM3MWExYzBhZmNhMDY4YmIxY2E1OTAwZWZkOTQyZjVjZTcxYWE4OThmYWIwZWM3ve4lJA==: --dhchap-ctrl-secret DHHC-1:01:ZGU4YzU2NzllYTZlNzM2N2Y5NWZjMjk3ZGY3MDU2YjAlxWiU:
00:22:11.382 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:11.382 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:11.382 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:22:11.382 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:11.382 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:11.382 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:11.382 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:11.382 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:22:11.382 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:22:11.643 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3
00:22:11.643 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:11.643 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:22:11.643 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:22:11.643 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:22:11.643 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:11.643 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3
00:22:11.643 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:11.643 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:11.643 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:11.643 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:22:11.643 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:22:11.643 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:22:11.904
00:22:11.904 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:11.904 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:11.904 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:12.166 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:12.166 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:12.166 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:12.166 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:12.166 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:12.166 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:12.166 {
00:22:12.166 "cntlid": 103,
00:22:12.166 "qid": 0,
00:22:12.166 "state": "enabled",
00:22:12.166 "thread": "nvmf_tgt_poll_group_000",
00:22:12.166 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:22:12.166 "listen_address": {
00:22:12.166 "trtype": "TCP",
00:22:12.166 "adrfam": "IPv4",
00:22:12.166 "traddr": "10.0.0.2",
00:22:12.166 "trsvcid": "4420"
00:22:12.166 },
00:22:12.166 "peer_address": {
00:22:12.166 "trtype": "TCP",
00:22:12.166 "adrfam": "IPv4",
00:22:12.166 "traddr": "10.0.0.1",
00:22:12.166 "trsvcid": "44736"
00:22:12.166 },
00:22:12.166 "auth": {
00:22:12.166 "state": "completed",
00:22:12.166 "digest": "sha512",
00:22:12.166 "dhgroup": "null"
00:22:12.166 }
00:22:12.166 }
00:22:12.166 ]'
00:22:12.166 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:12.166 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:22:12.166 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:12.166 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:22:12.166 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:12.166 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:12.166 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:12.166 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:12.426 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yzk1ZDgyYTljYjhmNDZiZTMzMjQwYzU3NjE4MTIwYjY5ZDE0OWJhZGViZDdjZWQxNjY4ZWI5NWJjMjE1ZmY1Zbvp+Ws=:
00:22:12.426 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:Yzk1ZDgyYTljYjhmNDZiZTMzMjQwYzU3NjE4MTIwYjY5ZDE0OWJhZGViZDdjZWQxNjY4ZWI5NWJjMjE1ZmY1Zbvp+Ws=:
00:22:12.997 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:12.997 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:12.997 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:22:12.997 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:12.997 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:12.997 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:12.997 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:22:12.997 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:12.997 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:22:12.997 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:22:13.257 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0
00:22:13.257 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:13.257 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:22:13.257 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:22:13.257 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:22:13.257 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:13.257 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:13.257 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:13.257 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:13.257 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:13.257 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:13.257 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:13.257 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:13.518
00:22:13.518 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:13.518 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:13.518 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:13.779 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:13.779 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:13.779 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:13.779 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:13.779 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:13.779 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:13.779 {
00:22:13.779 "cntlid": 105,
00:22:13.779 "qid": 0,
00:22:13.779 "state": "enabled",
00:22:13.779 "thread": "nvmf_tgt_poll_group_000",
00:22:13.779 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:22:13.779 "listen_address": {
00:22:13.779 "trtype": "TCP",
00:22:13.779 "adrfam": "IPv4",
00:22:13.779 "traddr": "10.0.0.2",
00:22:13.779 "trsvcid": "4420"
00:22:13.779 },
00:22:13.779 "peer_address": {
00:22:13.779 "trtype": "TCP",
00:22:13.779 "adrfam": "IPv4",
00:22:13.779 "traddr": "10.0.0.1",
00:22:13.779 "trsvcid": "44764"
00:22:13.779 },
00:22:13.779 "auth": {
00:22:13.779 "state": "completed",
00:22:13.779 "digest": "sha512",
00:22:13.779 "dhgroup": "ffdhe2048"
00:22:13.779 }
00:22:13.779 }
00:22:13.779 ]'
00:22:13.779 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:13.779 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:22:13.779 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:13.779 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:22:13.779 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:13.779 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:13.779 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:13.779 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:14.040 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTRiY2MxNmY1M2UwYmI4N2ZkODk5ODk3Mzg0ZmU0ZjFhNWIyNjJlMTkzMGQ4NjA06mlI8w==: --dhchap-ctrl-secret DHHC-1:03:OGI4NWNhNThlN2YwZjc0OTk1MjU2MmU4NmU3ODMwOWI0M2ZkZTU0Y2U4ZTkyY2ZjZDkyYWVlMGFkNmViN2FkNqls0BI=:
00:22:14.040 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NTRiY2MxNmY1M2UwYmI4N2ZkODk5ODk3Mzg0ZmU0ZjFhNWIyNjJlMTkzMGQ4NjA06mlI8w==: --dhchap-ctrl-secret DHHC-1:03:OGI4NWNhNThlN2YwZjc0OTk1MjU2MmU4NmU3ODMwOWI0M2ZkZTU0Y2U4ZTkyY2ZjZDkyYWVlMGFkNmViN2FkNqls0BI=:
00:22:14.611 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:14.611 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:14.611 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:22:14.611 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:14.611 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:14.611 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:14.611 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:14.611 15:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:22:14.611 15:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:22:14.872 15:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1
00:22:14.872 15:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:14.872 15:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:22:14.872 15:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:22:14.872 15:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:22:14.872 15:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:14.872 15:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:14.872 15:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:14.872 15:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:14.872 15:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:14.872 15:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:14.872 15:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:14.872 15:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:15.133
00:22:15.133 15:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:15.133 15:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:15.133 15:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:15.394 15:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:15.394 15:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:15.394 15:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:15.394 15:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:15.394 15:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:15.394 15:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:15.394 {
00:22:15.394 "cntlid": 107,
00:22:15.394 "qid": 0,
00:22:15.394 "state": "enabled",
00:22:15.394 "thread": "nvmf_tgt_poll_group_000",
00:22:15.394 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:22:15.394 "listen_address": {
00:22:15.394 "trtype": "TCP",
00:22:15.394 "adrfam": "IPv4",
00:22:15.394 "traddr": "10.0.0.2",
00:22:15.394 "trsvcid": "4420"
00:22:15.394 },
00:22:15.394 "peer_address": {
00:22:15.394 "trtype": "TCP",
00:22:15.394 "adrfam": "IPv4",
00:22:15.394 "traddr": "10.0.0.1",
00:22:15.394 "trsvcid": "44784"
00:22:15.394 },
00:22:15.394 "auth": {
00:22:15.394 "state": "completed",
00:22:15.394 "digest": "sha512",
00:22:15.394 "dhgroup": "ffdhe2048"
00:22:15.394 }
00:22:15.394 }
00:22:15.394 ]'
00:22:15.394 15:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:15.394 15:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:22:15.394 15:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:15.394 15:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:22:15.394 15:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:15.394 15:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:15.394 15:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:15.394 15:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:15.655 15:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWIwMDU0NDg3MTU0ZmRkMGFmYzE2YzJmNjFlYmYxOWYg1wvC: --dhchap-ctrl-secret DHHC-1:02:OGU1MzFiNWNkZGI1YWNhYzU4YTQ0OWZlN2U0ZDE1M2ZlYjI4MmVlMjNiNjkwYjcwmEeucA==:
00:22:15.655 15:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MWIwMDU0NDg3MTU0ZmRkMGFmYzE2YzJmNjFlYmYxOWYg1wvC: --dhchap-ctrl-secret DHHC-1:02:OGU1MzFiNWNkZGI1YWNhYzU4YTQ0OWZlN2U0ZDE1M2ZlYjI4MmVlMjNiNjkwYjcwmEeucA==:
00:22:16.226 15:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:16.226 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:16.226 15:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:22:16.226 15:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:16.226 15:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:16.226 15:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:16.226 15:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:16.226 15:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:22:16.226 15:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:22:16.487 15:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2
00:22:16.487 15:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:16.487 15:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:22:16.487 15:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:22:16.487 15:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:22:16.487 15:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:16.487 15:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:16.487 15:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:16.487 15:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:16.487 15:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:16.487 15:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:16.487 15:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:16.487 15:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:16.748
00:22:16.748 15:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:16.748 15:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:16.748 15:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:17.009
15:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.009 15:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:17.009 15:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.009 15:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.009 15:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.009 15:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:17.009 { 00:22:17.009 "cntlid": 109, 00:22:17.009 "qid": 0, 00:22:17.009 "state": "enabled", 00:22:17.009 "thread": "nvmf_tgt_poll_group_000", 00:22:17.009 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:17.009 "listen_address": { 00:22:17.009 "trtype": "TCP", 00:22:17.009 "adrfam": "IPv4", 00:22:17.009 "traddr": "10.0.0.2", 00:22:17.009 "trsvcid": "4420" 00:22:17.009 }, 00:22:17.009 "peer_address": { 00:22:17.009 "trtype": "TCP", 00:22:17.009 "adrfam": "IPv4", 00:22:17.009 "traddr": "10.0.0.1", 00:22:17.009 "trsvcid": "44816" 00:22:17.009 }, 00:22:17.009 "auth": { 00:22:17.009 "state": "completed", 00:22:17.009 "digest": "sha512", 00:22:17.009 "dhgroup": "ffdhe2048" 00:22:17.009 } 00:22:17.009 } 00:22:17.009 ]' 00:22:17.009 15:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:17.009 15:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:17.009 15:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:17.009 15:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:17.009 15:39:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:17.009 15:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:17.009 15:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:17.009 15:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:17.270 15:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODM3MWExYzBhZmNhMDY4YmIxY2E1OTAwZWZkOTQyZjVjZTcxYWE4OThmYWIwZWM3ve4lJA==: --dhchap-ctrl-secret DHHC-1:01:ZGU4YzU2NzllYTZlNzM2N2Y5NWZjMjk3ZGY3MDU2YjAlxWiU: 00:22:17.270 15:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ODM3MWExYzBhZmNhMDY4YmIxY2E1OTAwZWZkOTQyZjVjZTcxYWE4OThmYWIwZWM3ve4lJA==: --dhchap-ctrl-secret DHHC-1:01:ZGU4YzU2NzllYTZlNzM2N2Y5NWZjMjk3ZGY3MDU2YjAlxWiU: 00:22:17.842 15:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:17.842 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:17.842 15:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:17.842 15:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.842 15:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.842 
15:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.842 15:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:17.842 15:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:17.842 15:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:18.102 15:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:22:18.102 15:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:18.102 15:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:18.102 15:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:18.102 15:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:18.103 15:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:18.103 15:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:18.103 15:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.103 15:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.103 15:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.103 15:39:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:18.103 15:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:18.103 15:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:18.363 00:22:18.363 15:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:18.363 15:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:18.363 15:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.624 15:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.624 15:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:18.624 15:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.624 15:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.624 15:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.624 15:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:18.624 { 00:22:18.624 "cntlid": 111, 
00:22:18.624 "qid": 0, 00:22:18.624 "state": "enabled", 00:22:18.624 "thread": "nvmf_tgt_poll_group_000", 00:22:18.624 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:18.624 "listen_address": { 00:22:18.624 "trtype": "TCP", 00:22:18.624 "adrfam": "IPv4", 00:22:18.624 "traddr": "10.0.0.2", 00:22:18.624 "trsvcid": "4420" 00:22:18.624 }, 00:22:18.624 "peer_address": { 00:22:18.624 "trtype": "TCP", 00:22:18.624 "adrfam": "IPv4", 00:22:18.624 "traddr": "10.0.0.1", 00:22:18.624 "trsvcid": "44834" 00:22:18.624 }, 00:22:18.624 "auth": { 00:22:18.624 "state": "completed", 00:22:18.624 "digest": "sha512", 00:22:18.624 "dhgroup": "ffdhe2048" 00:22:18.624 } 00:22:18.624 } 00:22:18.624 ]' 00:22:18.624 15:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:18.624 15:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:18.624 15:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:18.624 15:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:18.624 15:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:18.624 15:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:18.624 15:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:18.624 15:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:18.885 15:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:Yzk1ZDgyYTljYjhmNDZiZTMzMjQwYzU3NjE4MTIwYjY5ZDE0OWJhZGViZDdjZWQxNjY4ZWI5NWJjMjE1ZmY1Zbvp+Ws=: 00:22:18.885 15:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:Yzk1ZDgyYTljYjhmNDZiZTMzMjQwYzU3NjE4MTIwYjY5ZDE0OWJhZGViZDdjZWQxNjY4ZWI5NWJjMjE1ZmY1Zbvp+Ws=: 00:22:19.456 15:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:19.456 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:19.456 15:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:19.456 15:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.456 15:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.456 15:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.456 15:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:19.456 15:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:19.456 15:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:19.456 15:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:19.717 15:39:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:22:19.717 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:19.717 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:19.717 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:19.717 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:19.717 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:19.717 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:19.717 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.717 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.717 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.717 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:19.717 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:19.717 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:19.977 00:22:19.977 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:19.977 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:19.977 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:20.238 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.238 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:20.238 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.238 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.238 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.238 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:20.238 { 00:22:20.238 "cntlid": 113, 00:22:20.238 "qid": 0, 00:22:20.238 "state": "enabled", 00:22:20.238 "thread": "nvmf_tgt_poll_group_000", 00:22:20.238 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:20.238 "listen_address": { 00:22:20.238 "trtype": "TCP", 00:22:20.238 "adrfam": "IPv4", 00:22:20.238 "traddr": "10.0.0.2", 00:22:20.238 "trsvcid": "4420" 00:22:20.238 }, 00:22:20.238 "peer_address": { 00:22:20.238 "trtype": "TCP", 00:22:20.238 "adrfam": "IPv4", 00:22:20.238 "traddr": "10.0.0.1", 00:22:20.238 "trsvcid": "34936" 00:22:20.238 }, 00:22:20.238 "auth": { 00:22:20.238 "state": 
"completed", 00:22:20.238 "digest": "sha512", 00:22:20.238 "dhgroup": "ffdhe3072" 00:22:20.238 } 00:22:20.238 } 00:22:20.238 ]' 00:22:20.238 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:20.238 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:20.238 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:20.238 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:20.238 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:20.238 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:20.238 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:20.238 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:20.499 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTRiY2MxNmY1M2UwYmI4N2ZkODk5ODk3Mzg0ZmU0ZjFhNWIyNjJlMTkzMGQ4NjA06mlI8w==: --dhchap-ctrl-secret DHHC-1:03:OGI4NWNhNThlN2YwZjc0OTk1MjU2MmU4NmU3ODMwOWI0M2ZkZTU0Y2U4ZTkyY2ZjZDkyYWVlMGFkNmViN2FkNqls0BI=: 00:22:20.499 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NTRiY2MxNmY1M2UwYmI4N2ZkODk5ODk3Mzg0ZmU0ZjFhNWIyNjJlMTkzMGQ4NjA06mlI8w==: --dhchap-ctrl-secret 
DHHC-1:03:OGI4NWNhNThlN2YwZjc0OTk1MjU2MmU4NmU3ODMwOWI0M2ZkZTU0Y2U4ZTkyY2ZjZDkyYWVlMGFkNmViN2FkNqls0BI=: 00:22:21.093 15:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:21.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:21.093 15:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:21.093 15:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.093 15:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.093 15:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.093 15:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:21.093 15:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:21.093 15:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:21.354 15:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:22:21.354 15:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:21.354 15:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:21.354 15:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:21.354 15:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:22:21.354 15:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:21.354 15:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:21.354 15:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.354 15:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.354 15:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.354 15:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:21.354 15:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:21.354 15:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:21.615 00:22:21.615 15:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:21.615 15:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:21.615 15:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:21.876 15:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.876 15:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:21.876 15:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.876 15:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.876 15:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.876 15:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:21.876 { 00:22:21.876 "cntlid": 115, 00:22:21.876 "qid": 0, 00:22:21.876 "state": "enabled", 00:22:21.876 "thread": "nvmf_tgt_poll_group_000", 00:22:21.876 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:21.876 "listen_address": { 00:22:21.876 "trtype": "TCP", 00:22:21.876 "adrfam": "IPv4", 00:22:21.876 "traddr": "10.0.0.2", 00:22:21.876 "trsvcid": "4420" 00:22:21.876 }, 00:22:21.876 "peer_address": { 00:22:21.876 "trtype": "TCP", 00:22:21.876 "adrfam": "IPv4", 00:22:21.876 "traddr": "10.0.0.1", 00:22:21.876 "trsvcid": "34958" 00:22:21.876 }, 00:22:21.876 "auth": { 00:22:21.876 "state": "completed", 00:22:21.876 "digest": "sha512", 00:22:21.876 "dhgroup": "ffdhe3072" 00:22:21.876 } 00:22:21.876 } 00:22:21.876 ]' 00:22:21.876 15:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:21.876 15:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:21.876 15:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:21.876 15:40:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:21.876 15:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:21.876 15:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:21.876 15:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:21.876 15:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:22.136 15:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWIwMDU0NDg3MTU0ZmRkMGFmYzE2YzJmNjFlYmYxOWYg1wvC: --dhchap-ctrl-secret DHHC-1:02:OGU1MzFiNWNkZGI1YWNhYzU4YTQ0OWZlN2U0ZDE1M2ZlYjI4MmVlMjNiNjkwYjcwmEeucA==: 00:22:22.136 15:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MWIwMDU0NDg3MTU0ZmRkMGFmYzE2YzJmNjFlYmYxOWYg1wvC: --dhchap-ctrl-secret DHHC-1:02:OGU1MzFiNWNkZGI1YWNhYzU4YTQ0OWZlN2U0ZDE1M2ZlYjI4MmVlMjNiNjkwYjcwmEeucA==: 00:22:22.708 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:22.709 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:22.709 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:22.709 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:22:22.709 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.709 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.709 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:22.709 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:22.709 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:22.970 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:22:22.970 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:22.970 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:22.970 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:22.970 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:22.970 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:22.970 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:22.970 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.970 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:22:22.970 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.970 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:22.970 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:22.970 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:23.231 00:22:23.231 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:23.231 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:23.231 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:23.491 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.491 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:23.491 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.491 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.491 15:40:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.491 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:23.491 { 00:22:23.491 "cntlid": 117, 00:22:23.491 "qid": 0, 00:22:23.491 "state": "enabled", 00:22:23.491 "thread": "nvmf_tgt_poll_group_000", 00:22:23.491 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:23.491 "listen_address": { 00:22:23.491 "trtype": "TCP", 00:22:23.491 "adrfam": "IPv4", 00:22:23.491 "traddr": "10.0.0.2", 00:22:23.491 "trsvcid": "4420" 00:22:23.491 }, 00:22:23.491 "peer_address": { 00:22:23.491 "trtype": "TCP", 00:22:23.491 "adrfam": "IPv4", 00:22:23.491 "traddr": "10.0.0.1", 00:22:23.491 "trsvcid": "34988" 00:22:23.491 }, 00:22:23.491 "auth": { 00:22:23.491 "state": "completed", 00:22:23.491 "digest": "sha512", 00:22:23.491 "dhgroup": "ffdhe3072" 00:22:23.491 } 00:22:23.491 } 00:22:23.492 ]' 00:22:23.492 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:23.492 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:23.492 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:23.492 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:23.492 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:23.492 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:23.492 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:23.492 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:23.752 15:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODM3MWExYzBhZmNhMDY4YmIxY2E1OTAwZWZkOTQyZjVjZTcxYWE4OThmYWIwZWM3ve4lJA==: --dhchap-ctrl-secret DHHC-1:01:ZGU4YzU2NzllYTZlNzM2N2Y5NWZjMjk3ZGY3MDU2YjAlxWiU: 00:22:23.752 15:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ODM3MWExYzBhZmNhMDY4YmIxY2E1OTAwZWZkOTQyZjVjZTcxYWE4OThmYWIwZWM3ve4lJA==: --dhchap-ctrl-secret DHHC-1:01:ZGU4YzU2NzllYTZlNzM2N2Y5NWZjMjk3ZGY3MDU2YjAlxWiU: 00:22:24.322 15:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:24.322 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:24.322 15:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:24.322 15:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.322 15:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.583 15:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.583 15:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:24.583 15:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:24.583 15:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:24.584 15:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:22:24.584 15:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:24.584 15:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:24.584 15:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:24.584 15:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:24.584 15:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:24.584 15:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:24.584 15:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.584 15:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.584 15:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.584 15:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:24.584 15:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:24.584 15:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:24.844 00:22:24.844 15:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:24.844 15:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:24.844 15:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:25.105 15:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.105 15:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:25.105 15:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.105 15:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.105 15:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.105 15:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:25.105 { 00:22:25.105 "cntlid": 119, 00:22:25.105 "qid": 0, 00:22:25.105 "state": "enabled", 00:22:25.105 "thread": "nvmf_tgt_poll_group_000", 00:22:25.105 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:25.105 "listen_address": { 00:22:25.105 "trtype": "TCP", 00:22:25.105 "adrfam": "IPv4", 00:22:25.105 "traddr": "10.0.0.2", 00:22:25.105 "trsvcid": "4420" 00:22:25.105 }, 00:22:25.105 "peer_address": { 00:22:25.105 "trtype": "TCP", 00:22:25.105 "adrfam": "IPv4", 00:22:25.105 "traddr": "10.0.0.1", 
00:22:25.105 "trsvcid": "35022" 00:22:25.105 }, 00:22:25.105 "auth": { 00:22:25.105 "state": "completed", 00:22:25.105 "digest": "sha512", 00:22:25.105 "dhgroup": "ffdhe3072" 00:22:25.105 } 00:22:25.105 } 00:22:25.105 ]' 00:22:25.105 15:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:25.105 15:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:25.105 15:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:25.105 15:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:25.105 15:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:25.105 15:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:25.105 15:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:25.105 15:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:25.366 15:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yzk1ZDgyYTljYjhmNDZiZTMzMjQwYzU3NjE4MTIwYjY5ZDE0OWJhZGViZDdjZWQxNjY4ZWI5NWJjMjE1ZmY1Zbvp+Ws=: 00:22:25.366 15:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:Yzk1ZDgyYTljYjhmNDZiZTMzMjQwYzU3NjE4MTIwYjY5ZDE0OWJhZGViZDdjZWQxNjY4ZWI5NWJjMjE1ZmY1Zbvp+Ws=: 00:22:25.938 15:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:25.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:25.938 15:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:25.938 15:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.938 15:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.938 15:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.938 15:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:25.938 15:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:25.938 15:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:25.938 15:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:26.200 15:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:22:26.200 15:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:26.200 15:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:26.200 15:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:26.200 15:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:26.200 15:40:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:26.200 15:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:26.200 15:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.200 15:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.200 15:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.200 15:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:26.200 15:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:26.200 15:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:26.460 00:22:26.460 15:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:26.460 15:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:26.460 15:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:26.721 15:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.721 15:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:26.721 15:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.721 15:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.721 15:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.721 15:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:26.721 { 00:22:26.721 "cntlid": 121, 00:22:26.721 "qid": 0, 00:22:26.721 "state": "enabled", 00:22:26.721 "thread": "nvmf_tgt_poll_group_000", 00:22:26.721 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:26.721 "listen_address": { 00:22:26.721 "trtype": "TCP", 00:22:26.721 "adrfam": "IPv4", 00:22:26.721 "traddr": "10.0.0.2", 00:22:26.721 "trsvcid": "4420" 00:22:26.721 }, 00:22:26.721 "peer_address": { 00:22:26.721 "trtype": "TCP", 00:22:26.721 "adrfam": "IPv4", 00:22:26.721 "traddr": "10.0.0.1", 00:22:26.721 "trsvcid": "35058" 00:22:26.721 }, 00:22:26.721 "auth": { 00:22:26.721 "state": "completed", 00:22:26.721 "digest": "sha512", 00:22:26.721 "dhgroup": "ffdhe4096" 00:22:26.721 } 00:22:26.721 } 00:22:26.721 ]' 00:22:26.721 15:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:26.721 15:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:26.721 15:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:26.721 15:40:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:26.721 15:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:26.721 15:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:26.721 15:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:26.721 15:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:26.982 15:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTRiY2MxNmY1M2UwYmI4N2ZkODk5ODk3Mzg0ZmU0ZjFhNWIyNjJlMTkzMGQ4NjA06mlI8w==: --dhchap-ctrl-secret DHHC-1:03:OGI4NWNhNThlN2YwZjc0OTk1MjU2MmU4NmU3ODMwOWI0M2ZkZTU0Y2U4ZTkyY2ZjZDkyYWVlMGFkNmViN2FkNqls0BI=: 00:22:26.982 15:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NTRiY2MxNmY1M2UwYmI4N2ZkODk5ODk3Mzg0ZmU0ZjFhNWIyNjJlMTkzMGQ4NjA06mlI8w==: --dhchap-ctrl-secret DHHC-1:03:OGI4NWNhNThlN2YwZjc0OTk1MjU2MmU4NmU3ODMwOWI0M2ZkZTU0Y2U4ZTkyY2ZjZDkyYWVlMGFkNmViN2FkNqls0BI=: 00:22:27.554 15:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:27.554 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:27.554 15:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:27.554 15:40:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.554 15:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.554 15:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.554 15:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:27.554 15:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:27.554 15:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:27.816 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:22:27.816 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:27.816 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:27.816 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:27.816 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:27.816 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:27.816 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:27.816 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.816 15:40:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.816 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.816 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:27.816 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:27.816 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:28.077 00:22:28.077 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:28.077 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:28.077 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:28.338 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.338 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:28.338 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.338 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:28.338 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.338 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:28.338 { 00:22:28.338 "cntlid": 123, 00:22:28.338 "qid": 0, 00:22:28.338 "state": "enabled", 00:22:28.338 "thread": "nvmf_tgt_poll_group_000", 00:22:28.338 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:28.338 "listen_address": { 00:22:28.338 "trtype": "TCP", 00:22:28.338 "adrfam": "IPv4", 00:22:28.338 "traddr": "10.0.0.2", 00:22:28.338 "trsvcid": "4420" 00:22:28.338 }, 00:22:28.338 "peer_address": { 00:22:28.338 "trtype": "TCP", 00:22:28.338 "adrfam": "IPv4", 00:22:28.338 "traddr": "10.0.0.1", 00:22:28.338 "trsvcid": "35086" 00:22:28.338 }, 00:22:28.338 "auth": { 00:22:28.338 "state": "completed", 00:22:28.338 "digest": "sha512", 00:22:28.338 "dhgroup": "ffdhe4096" 00:22:28.338 } 00:22:28.338 } 00:22:28.338 ]' 00:22:28.338 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:28.338 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:28.338 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:28.338 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:28.338 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:28.338 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:28.338 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:28.338 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:28.598 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWIwMDU0NDg3MTU0ZmRkMGFmYzE2YzJmNjFlYmYxOWYg1wvC: --dhchap-ctrl-secret DHHC-1:02:OGU1MzFiNWNkZGI1YWNhYzU4YTQ0OWZlN2U0ZDE1M2ZlYjI4MmVlMjNiNjkwYjcwmEeucA==: 00:22:28.598 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MWIwMDU0NDg3MTU0ZmRkMGFmYzE2YzJmNjFlYmYxOWYg1wvC: --dhchap-ctrl-secret DHHC-1:02:OGU1MzFiNWNkZGI1YWNhYzU4YTQ0OWZlN2U0ZDE1M2ZlYjI4MmVlMjNiNjkwYjcwmEeucA==: 00:22:29.169 15:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:29.169 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:29.169 15:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:29.169 15:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.169 15:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.169 15:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.169 15:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:29.169 15:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:29.169 15:40:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:29.429 15:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:22:29.429 15:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:29.429 15:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:29.429 15:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:29.429 15:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:29.429 15:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:29.429 15:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:29.429 15:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.429 15:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.429 15:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.429 15:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:29.429 15:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:29.429 15:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:29.696 00:22:29.696 15:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:29.696 15:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:29.696 15:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:29.960 15:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.960 15:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:29.960 15:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.960 15:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.960 15:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.960 15:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:29.960 { 00:22:29.960 "cntlid": 125, 00:22:29.960 "qid": 0, 00:22:29.960 "state": "enabled", 00:22:29.960 "thread": "nvmf_tgt_poll_group_000", 00:22:29.960 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:29.960 "listen_address": { 00:22:29.960 "trtype": "TCP", 00:22:29.960 "adrfam": "IPv4", 00:22:29.960 "traddr": "10.0.0.2", 00:22:29.960 
"trsvcid": "4420" 00:22:29.960 }, 00:22:29.960 "peer_address": { 00:22:29.960 "trtype": "TCP", 00:22:29.960 "adrfam": "IPv4", 00:22:29.960 "traddr": "10.0.0.1", 00:22:29.960 "trsvcid": "58214" 00:22:29.960 }, 00:22:29.960 "auth": { 00:22:29.960 "state": "completed", 00:22:29.960 "digest": "sha512", 00:22:29.960 "dhgroup": "ffdhe4096" 00:22:29.960 } 00:22:29.960 } 00:22:29.960 ]' 00:22:29.960 15:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:29.961 15:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:29.961 15:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:29.961 15:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:29.961 15:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:29.961 15:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:29.961 15:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:29.961 15:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:30.222 15:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODM3MWExYzBhZmNhMDY4YmIxY2E1OTAwZWZkOTQyZjVjZTcxYWE4OThmYWIwZWM3ve4lJA==: --dhchap-ctrl-secret DHHC-1:01:ZGU4YzU2NzllYTZlNzM2N2Y5NWZjMjk3ZGY3MDU2YjAlxWiU: 00:22:30.222 15:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ODM3MWExYzBhZmNhMDY4YmIxY2E1OTAwZWZkOTQyZjVjZTcxYWE4OThmYWIwZWM3ve4lJA==: --dhchap-ctrl-secret DHHC-1:01:ZGU4YzU2NzllYTZlNzM2N2Y5NWZjMjk3ZGY3MDU2YjAlxWiU: 00:22:30.793 15:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:31.054 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:31.054 15:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:31.054 15:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.054 15:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.054 15:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.054 15:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:31.054 15:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:31.054 15:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:31.054 15:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:22:31.054 15:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:31.054 15:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:31.054 15:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:31.054 15:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:31.054 15:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:31.054 15:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:31.054 15:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.054 15:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.054 15:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.054 15:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:31.054 15:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:31.054 15:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:31.315 00:22:31.315 15:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:31.315 15:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:31.315 15:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:31.576 15:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.576 15:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:31.576 15:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.576 15:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.576 15:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.576 15:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:31.576 { 00:22:31.576 "cntlid": 127, 00:22:31.576 "qid": 0, 00:22:31.576 "state": "enabled", 00:22:31.576 "thread": "nvmf_tgt_poll_group_000", 00:22:31.576 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:31.576 "listen_address": { 00:22:31.576 "trtype": "TCP", 00:22:31.576 "adrfam": "IPv4", 00:22:31.576 "traddr": "10.0.0.2", 00:22:31.576 "trsvcid": "4420" 00:22:31.576 }, 00:22:31.576 "peer_address": { 00:22:31.576 "trtype": "TCP", 00:22:31.576 "adrfam": "IPv4", 00:22:31.576 "traddr": "10.0.0.1", 00:22:31.576 "trsvcid": "58248" 00:22:31.576 }, 00:22:31.576 "auth": { 00:22:31.576 "state": "completed", 00:22:31.576 "digest": "sha512", 00:22:31.576 "dhgroup": "ffdhe4096" 00:22:31.576 } 00:22:31.576 } 00:22:31.576 ]' 00:22:31.576 15:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:31.576 15:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:31.576 15:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:31.576 15:40:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:31.576 15:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:31.836 15:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:31.836 15:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:31.836 15:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:31.836 15:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yzk1ZDgyYTljYjhmNDZiZTMzMjQwYzU3NjE4MTIwYjY5ZDE0OWJhZGViZDdjZWQxNjY4ZWI5NWJjMjE1ZmY1Zbvp+Ws=: 00:22:31.836 15:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:Yzk1ZDgyYTljYjhmNDZiZTMzMjQwYzU3NjE4MTIwYjY5ZDE0OWJhZGViZDdjZWQxNjY4ZWI5NWJjMjE1ZmY1Zbvp+Ws=: 00:22:32.407 15:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:32.667 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:32.667 15:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:32.667 15:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.667 15:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
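The verification steps repeated throughout this log (`auth.sh@75`–`auth.sh@77`) extract `auth.digest`, `auth.dhgroup`, and `auth.state` from the `nvmf_subsystem_get_qpairs` JSON with `jq -r` and string-compare them against the expected values (the backslash-escaped right-hand sides like `\s\h\a\5\1\2` are just how xtrace prints a quoted `[[ ... == pattern ]]` operand). A minimal standalone sketch of that check logic, with the values jq would return hard-coded from the qpair dump above rather than fetched over the RPC socket:

```shell
#!/usr/bin/env bash
# Sketch of the per-qpair auth verification done by target/auth.sh.
# In the real test these three values come from:
#   rpc.py -s /var/tmp/host.sock nvmf_subsystem_get_qpairs ... | jq -r '.[0].auth.digest' (etc.)
# Here they are hard-coded from the log so the comparison logic runs standalone.
digest=sha512
dhgroup=ffdhe4096
state=completed

# Same shape as the [[ ... == ... ]] checks traced in the log above.
if [[ $digest == sha512 && $dhgroup == ffdhe4096 && $state == completed ]]; then
  echo "auth check passed"
fi
```

Each iteration of the outer loops (`auth.sh@119`/`auth.sh@120`) re-runs this same check with the next digest/dhgroup/key combination.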
00:22:32.668 15:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.668 15:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:32.668 15:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:32.668 15:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:32.668 15:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:32.668 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:22:32.668 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:32.668 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:32.668 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:32.668 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:32.668 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:32.668 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:32.668 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.668 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:22:32.668 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.668 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:32.668 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:32.668 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:33.238 00:22:33.238 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:33.238 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:33.238 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:33.238 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.238 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:33.238 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.238 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.238 15:40:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.238 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:33.238 { 00:22:33.238 "cntlid": 129, 00:22:33.238 "qid": 0, 00:22:33.238 "state": "enabled", 00:22:33.238 "thread": "nvmf_tgt_poll_group_000", 00:22:33.238 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:33.238 "listen_address": { 00:22:33.238 "trtype": "TCP", 00:22:33.238 "adrfam": "IPv4", 00:22:33.238 "traddr": "10.0.0.2", 00:22:33.238 "trsvcid": "4420" 00:22:33.238 }, 00:22:33.238 "peer_address": { 00:22:33.238 "trtype": "TCP", 00:22:33.238 "adrfam": "IPv4", 00:22:33.238 "traddr": "10.0.0.1", 00:22:33.238 "trsvcid": "58282" 00:22:33.238 }, 00:22:33.238 "auth": { 00:22:33.238 "state": "completed", 00:22:33.238 "digest": "sha512", 00:22:33.238 "dhgroup": "ffdhe6144" 00:22:33.238 } 00:22:33.238 } 00:22:33.238 ]' 00:22:33.238 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:33.238 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:33.238 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:33.499 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:33.499 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:33.499 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:33.499 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:33.499 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:33.499 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTRiY2MxNmY1M2UwYmI4N2ZkODk5ODk3Mzg0ZmU0ZjFhNWIyNjJlMTkzMGQ4NjA06mlI8w==: --dhchap-ctrl-secret DHHC-1:03:OGI4NWNhNThlN2YwZjc0OTk1MjU2MmU4NmU3ODMwOWI0M2ZkZTU0Y2U4ZTkyY2ZjZDkyYWVlMGFkNmViN2FkNqls0BI=: 00:22:33.499 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NTRiY2MxNmY1M2UwYmI4N2ZkODk5ODk3Mzg0ZmU0ZjFhNWIyNjJlMTkzMGQ4NjA06mlI8w==: --dhchap-ctrl-secret DHHC-1:03:OGI4NWNhNThlN2YwZjc0OTk1MjU2MmU4NmU3ODMwOWI0M2ZkZTU0Y2U4ZTkyY2ZjZDkyYWVlMGFkNmViN2FkNqls0BI=: 00:22:34.439 15:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:34.439 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:34.439 15:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:34.439 15:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.439 15:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.439 15:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.439 15:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:34.439 15:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:34.439 15:40:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:34.439 15:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:22:34.439 15:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:34.439 15:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:34.439 15:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:34.439 15:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:34.439 15:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:34.439 15:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:34.439 15:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.439 15:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.700 15:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.700 15:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:34.700 15:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:34.700 15:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:34.960 00:22:34.960 15:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:34.960 15:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:34.960 15:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:35.222 15:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.222 15:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:35.222 15:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.222 15:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.222 15:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.222 15:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:35.222 { 00:22:35.222 "cntlid": 131, 00:22:35.222 "qid": 0, 00:22:35.222 "state": "enabled", 00:22:35.222 "thread": "nvmf_tgt_poll_group_000", 00:22:35.222 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:35.222 "listen_address": { 00:22:35.222 "trtype": "TCP", 00:22:35.222 "adrfam": "IPv4", 00:22:35.222 "traddr": "10.0.0.2", 00:22:35.222 
"trsvcid": "4420" 00:22:35.222 }, 00:22:35.222 "peer_address": { 00:22:35.222 "trtype": "TCP", 00:22:35.222 "adrfam": "IPv4", 00:22:35.222 "traddr": "10.0.0.1", 00:22:35.222 "trsvcid": "58306" 00:22:35.222 }, 00:22:35.222 "auth": { 00:22:35.222 "state": "completed", 00:22:35.222 "digest": "sha512", 00:22:35.222 "dhgroup": "ffdhe6144" 00:22:35.222 } 00:22:35.222 } 00:22:35.222 ]' 00:22:35.222 15:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:35.222 15:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:35.222 15:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:35.222 15:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:35.222 15:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:35.222 15:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:35.222 15:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:35.222 15:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:35.483 15:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWIwMDU0NDg3MTU0ZmRkMGFmYzE2YzJmNjFlYmYxOWYg1wvC: --dhchap-ctrl-secret DHHC-1:02:OGU1MzFiNWNkZGI1YWNhYzU4YTQ0OWZlN2U0ZDE1M2ZlYjI4MmVlMjNiNjkwYjcwmEeucA==: 00:22:35.483 15:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MWIwMDU0NDg3MTU0ZmRkMGFmYzE2YzJmNjFlYmYxOWYg1wvC: --dhchap-ctrl-secret DHHC-1:02:OGU1MzFiNWNkZGI1YWNhYzU4YTQ0OWZlN2U0ZDE1M2ZlYjI4MmVlMjNiNjkwYjcwmEeucA==: 00:22:36.052 15:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:36.052 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:36.052 15:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:36.052 15:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.052 15:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.052 15:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.052 15:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:36.052 15:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:36.052 15:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:36.311 15:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:22:36.311 15:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:36.311 15:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:36.311 15:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:36.311 15:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:36.311 15:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:36.311 15:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:36.311 15:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.311 15:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.311 15:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.311 15:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:36.311 15:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:36.311 15:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:36.572 00:22:36.572 15:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:36.572 15:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:22:36.572 15:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:36.832 15:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.832 15:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:36.832 15:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.832 15:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.832 15:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.832 15:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:36.832 { 00:22:36.832 "cntlid": 133, 00:22:36.832 "qid": 0, 00:22:36.832 "state": "enabled", 00:22:36.832 "thread": "nvmf_tgt_poll_group_000", 00:22:36.832 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:36.832 "listen_address": { 00:22:36.832 "trtype": "TCP", 00:22:36.832 "adrfam": "IPv4", 00:22:36.832 "traddr": "10.0.0.2", 00:22:36.832 "trsvcid": "4420" 00:22:36.832 }, 00:22:36.832 "peer_address": { 00:22:36.832 "trtype": "TCP", 00:22:36.832 "adrfam": "IPv4", 00:22:36.832 "traddr": "10.0.0.1", 00:22:36.832 "trsvcid": "58332" 00:22:36.832 }, 00:22:36.832 "auth": { 00:22:36.833 "state": "completed", 00:22:36.833 "digest": "sha512", 00:22:36.833 "dhgroup": "ffdhe6144" 00:22:36.833 } 00:22:36.833 } 00:22:36.833 ]' 00:22:36.833 15:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:36.833 15:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:36.833 15:40:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:36.833 15:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:36.833 15:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:36.833 15:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:36.833 15:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:36.833 15:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:37.093 15:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODM3MWExYzBhZmNhMDY4YmIxY2E1OTAwZWZkOTQyZjVjZTcxYWE4OThmYWIwZWM3ve4lJA==: --dhchap-ctrl-secret DHHC-1:01:ZGU4YzU2NzllYTZlNzM2N2Y5NWZjMjk3ZGY3MDU2YjAlxWiU: 00:22:37.093 15:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ODM3MWExYzBhZmNhMDY4YmIxY2E1OTAwZWZkOTQyZjVjZTcxYWE4OThmYWIwZWM3ve4lJA==: --dhchap-ctrl-secret DHHC-1:01:ZGU4YzU2NzllYTZlNzM2N2Y5NWZjMjk3ZGY3MDU2YjAlxWiU: 00:22:37.664 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:37.664 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:37.664 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:37.664 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.664 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.664 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.664 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:37.664 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:37.664 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:37.925 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:22:37.925 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:37.925 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:37.925 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:37.925 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:37.925 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:37.925 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:37.925 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.926 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.926 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.926 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:37.926 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:37.926 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:38.187 00:22:38.187 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:38.187 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:38.187 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:38.448 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.448 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:38.448 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.448 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:38.448 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.448 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:38.448 { 00:22:38.448 "cntlid": 135, 00:22:38.448 "qid": 0, 00:22:38.448 "state": "enabled", 00:22:38.448 "thread": "nvmf_tgt_poll_group_000", 00:22:38.448 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:38.448 "listen_address": { 00:22:38.448 "trtype": "TCP", 00:22:38.448 "adrfam": "IPv4", 00:22:38.448 "traddr": "10.0.0.2", 00:22:38.448 "trsvcid": "4420" 00:22:38.448 }, 00:22:38.448 "peer_address": { 00:22:38.448 "trtype": "TCP", 00:22:38.448 "adrfam": "IPv4", 00:22:38.448 "traddr": "10.0.0.1", 00:22:38.448 "trsvcid": "58370" 00:22:38.448 }, 00:22:38.448 "auth": { 00:22:38.448 "state": "completed", 00:22:38.448 "digest": "sha512", 00:22:38.448 "dhgroup": "ffdhe6144" 00:22:38.448 } 00:22:38.448 } 00:22:38.448 ]' 00:22:38.448 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:38.448 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:38.448 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:38.708 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:38.709 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:38.709 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:38.709 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:38.709 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:38.709 15:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yzk1ZDgyYTljYjhmNDZiZTMzMjQwYzU3NjE4MTIwYjY5ZDE0OWJhZGViZDdjZWQxNjY4ZWI5NWJjMjE1ZmY1Zbvp+Ws=: 00:22:38.709 15:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:Yzk1ZDgyYTljYjhmNDZiZTMzMjQwYzU3NjE4MTIwYjY5ZDE0OWJhZGViZDdjZWQxNjY4ZWI5NWJjMjE1ZmY1Zbvp+Ws=: 00:22:39.647 15:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:39.647 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:39.647 15:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:39.647 15:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.647 15:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.647 15:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.647 15:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:39.647 15:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:39.647 15:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:39.647 15:40:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:39.647 15:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:22:39.647 15:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:39.647 15:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:39.647 15:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:39.647 15:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:39.647 15:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:39.647 15:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:39.647 15:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.647 15:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.647 15:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.647 15:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:39.647 15:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:39.647 15:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:40.218 00:22:40.218 15:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:40.218 15:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:40.218 15:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:40.218 15:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.218 15:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:40.218 15:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.218 15:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.218 15:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.218 15:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:40.218 { 00:22:40.218 "cntlid": 137, 00:22:40.218 "qid": 0, 00:22:40.218 "state": "enabled", 00:22:40.218 "thread": "nvmf_tgt_poll_group_000", 00:22:40.218 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:40.218 "listen_address": { 00:22:40.218 "trtype": "TCP", 00:22:40.218 "adrfam": "IPv4", 00:22:40.218 "traddr": "10.0.0.2", 00:22:40.218 
"trsvcid": "4420" 00:22:40.218 }, 00:22:40.218 "peer_address": { 00:22:40.218 "trtype": "TCP", 00:22:40.218 "adrfam": "IPv4", 00:22:40.218 "traddr": "10.0.0.1", 00:22:40.218 "trsvcid": "59220" 00:22:40.218 }, 00:22:40.218 "auth": { 00:22:40.218 "state": "completed", 00:22:40.218 "digest": "sha512", 00:22:40.218 "dhgroup": "ffdhe8192" 00:22:40.218 } 00:22:40.218 } 00:22:40.218 ]' 00:22:40.218 15:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:40.479 15:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:40.479 15:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:40.479 15:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:40.479 15:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:40.479 15:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:40.479 15:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:40.479 15:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:40.739 15:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTRiY2MxNmY1M2UwYmI4N2ZkODk5ODk3Mzg0ZmU0ZjFhNWIyNjJlMTkzMGQ4NjA06mlI8w==: --dhchap-ctrl-secret DHHC-1:03:OGI4NWNhNThlN2YwZjc0OTk1MjU2MmU4NmU3ODMwOWI0M2ZkZTU0Y2U4ZTkyY2ZjZDkyYWVlMGFkNmViN2FkNqls0BI=: 00:22:40.739 15:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NTRiY2MxNmY1M2UwYmI4N2ZkODk5ODk3Mzg0ZmU0ZjFhNWIyNjJlMTkzMGQ4NjA06mlI8w==: --dhchap-ctrl-secret DHHC-1:03:OGI4NWNhNThlN2YwZjc0OTk1MjU2MmU4NmU3ODMwOWI0M2ZkZTU0Y2U4ZTkyY2ZjZDkyYWVlMGFkNmViN2FkNqls0BI=: 00:22:41.309 15:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:41.309 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:41.309 15:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:41.309 15:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.309 15:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.309 15:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.309 15:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:41.309 15:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:41.309 15:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:41.570 15:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:22:41.570 15:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:41.570 15:40:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:41.570 15:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:41.570 15:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:41.570 15:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:41.570 15:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:41.570 15:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.570 15:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.570 15:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.570 15:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:41.570 15:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:41.570 15:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:41.830 00:22:42.091 15:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:42.091 15:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:42.091 15:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:42.091 15:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.091 15:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:42.091 15:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.091 15:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.091 15:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.091 15:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:42.091 { 00:22:42.091 "cntlid": 139, 00:22:42.091 "qid": 0, 00:22:42.091 "state": "enabled", 00:22:42.091 "thread": "nvmf_tgt_poll_group_000", 00:22:42.091 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:42.091 "listen_address": { 00:22:42.091 "trtype": "TCP", 00:22:42.091 "adrfam": "IPv4", 00:22:42.091 "traddr": "10.0.0.2", 00:22:42.091 "trsvcid": "4420" 00:22:42.091 }, 00:22:42.091 "peer_address": { 00:22:42.091 "trtype": "TCP", 00:22:42.091 "adrfam": "IPv4", 00:22:42.091 "traddr": "10.0.0.1", 00:22:42.091 "trsvcid": "59240" 00:22:42.091 }, 00:22:42.091 "auth": { 00:22:42.091 "state": "completed", 00:22:42.091 "digest": "sha512", 00:22:42.091 "dhgroup": "ffdhe8192" 00:22:42.091 } 00:22:42.091 } 00:22:42.091 ]' 00:22:42.091 15:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:42.091 15:40:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:42.091 15:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:42.351 15:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:42.351 15:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:42.351 15:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:42.351 15:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:42.351 15:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:42.351 15:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWIwMDU0NDg3MTU0ZmRkMGFmYzE2YzJmNjFlYmYxOWYg1wvC: --dhchap-ctrl-secret DHHC-1:02:OGU1MzFiNWNkZGI1YWNhYzU4YTQ0OWZlN2U0ZDE1M2ZlYjI4MmVlMjNiNjkwYjcwmEeucA==: 00:22:42.351 15:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MWIwMDU0NDg3MTU0ZmRkMGFmYzE2YzJmNjFlYmYxOWYg1wvC: --dhchap-ctrl-secret DHHC-1:02:OGU1MzFiNWNkZGI1YWNhYzU4YTQ0OWZlN2U0ZDE1M2ZlYjI4MmVlMjNiNjkwYjcwmEeucA==: 00:22:43.291 15:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:43.291 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:43.291 15:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:43.291 15:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.291 15:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.291 15:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.291 15:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:43.291 15:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:43.291 15:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:43.291 15:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:22:43.291 15:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:43.291 15:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:43.291 15:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:43.291 15:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:43.291 15:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:43.291 15:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:22:43.291 15:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.291 15:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.291 15:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.291 15:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:43.291 15:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:43.291 15:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:43.862 00:22:43.862 15:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:43.862 15:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:43.862 15:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:44.122 15:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:44.122 15:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:44.122 15:40:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.122 15:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.122 15:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.122 15:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:44.122 { 00:22:44.122 "cntlid": 141, 00:22:44.122 "qid": 0, 00:22:44.122 "state": "enabled", 00:22:44.122 "thread": "nvmf_tgt_poll_group_000", 00:22:44.122 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:44.122 "listen_address": { 00:22:44.122 "trtype": "TCP", 00:22:44.122 "adrfam": "IPv4", 00:22:44.122 "traddr": "10.0.0.2", 00:22:44.122 "trsvcid": "4420" 00:22:44.122 }, 00:22:44.122 "peer_address": { 00:22:44.122 "trtype": "TCP", 00:22:44.122 "adrfam": "IPv4", 00:22:44.122 "traddr": "10.0.0.1", 00:22:44.122 "trsvcid": "59260" 00:22:44.122 }, 00:22:44.122 "auth": { 00:22:44.122 "state": "completed", 00:22:44.122 "digest": "sha512", 00:22:44.122 "dhgroup": "ffdhe8192" 00:22:44.122 } 00:22:44.122 } 00:22:44.122 ]' 00:22:44.122 15:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:44.122 15:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:44.122 15:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:44.122 15:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:44.122 15:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:44.122 15:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:44.122 15:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:44.122 15:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:44.382 15:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODM3MWExYzBhZmNhMDY4YmIxY2E1OTAwZWZkOTQyZjVjZTcxYWE4OThmYWIwZWM3ve4lJA==: --dhchap-ctrl-secret DHHC-1:01:ZGU4YzU2NzllYTZlNzM2N2Y5NWZjMjk3ZGY3MDU2YjAlxWiU: 00:22:44.382 15:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ODM3MWExYzBhZmNhMDY4YmIxY2E1OTAwZWZkOTQyZjVjZTcxYWE4OThmYWIwZWM3ve4lJA==: --dhchap-ctrl-secret DHHC-1:01:ZGU4YzU2NzllYTZlNzM2N2Y5NWZjMjk3ZGY3MDU2YjAlxWiU: 00:22:44.953 15:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:44.953 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:44.953 15:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:44.953 15:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.953 15:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.953 15:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.953 15:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:44.953 15:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:44.953 15:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:45.214 15:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:22:45.214 15:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:45.214 15:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:45.214 15:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:45.214 15:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:45.214 15:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:45.214 15:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:45.214 15:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.214 15:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.214 15:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.214 15:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:45.214 15:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:45.215 15:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:45.786 00:22:45.786 15:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:45.786 15:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:45.786 15:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:45.786 15:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.786 15:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:45.786 15:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.786 15:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.786 15:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.786 15:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:45.786 { 00:22:45.786 "cntlid": 143, 00:22:45.786 "qid": 0, 00:22:45.786 "state": "enabled", 00:22:45.786 "thread": "nvmf_tgt_poll_group_000", 00:22:45.786 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:45.786 "listen_address": { 00:22:45.786 "trtype": "TCP", 00:22:45.786 "adrfam": 
"IPv4", 00:22:45.786 "traddr": "10.0.0.2", 00:22:45.786 "trsvcid": "4420" 00:22:45.786 }, 00:22:45.786 "peer_address": { 00:22:45.786 "trtype": "TCP", 00:22:45.786 "adrfam": "IPv4", 00:22:45.786 "traddr": "10.0.0.1", 00:22:45.786 "trsvcid": "59298" 00:22:45.786 }, 00:22:45.786 "auth": { 00:22:45.786 "state": "completed", 00:22:45.786 "digest": "sha512", 00:22:45.786 "dhgroup": "ffdhe8192" 00:22:45.786 } 00:22:45.786 } 00:22:45.786 ]' 00:22:45.786 15:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:45.786 15:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:45.786 15:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:46.047 15:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:46.047 15:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:46.047 15:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:46.047 15:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:46.047 15:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:46.308 15:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yzk1ZDgyYTljYjhmNDZiZTMzMjQwYzU3NjE4MTIwYjY5ZDE0OWJhZGViZDdjZWQxNjY4ZWI5NWJjMjE1ZmY1Zbvp+Ws=: 00:22:46.308 15:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:Yzk1ZDgyYTljYjhmNDZiZTMzMjQwYzU3NjE4MTIwYjY5ZDE0OWJhZGViZDdjZWQxNjY4ZWI5NWJjMjE1ZmY1Zbvp+Ws=: 00:22:46.881 15:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:46.881 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:46.881 15:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:46.881 15:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.881 15:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.881 15:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.881 15:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:46.881 15:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:22:46.881 15:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:46.881 15:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:46.881 15:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:46.881 15:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:47.141 15:40:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:22:47.141 15:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:47.142 15:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:47.142 15:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:47.142 15:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:47.142 15:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:47.142 15:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:47.142 15:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.142 15:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.142 15:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.142 15:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:47.142 15:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:47.142 15:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:47.403 00:22:47.403 15:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:47.403 15:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:47.403 15:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:47.664 15:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:47.664 15:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:47.664 15:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.664 15:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.664 15:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.664 15:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:47.664 { 00:22:47.664 "cntlid": 145, 00:22:47.664 "qid": 0, 00:22:47.664 "state": "enabled", 00:22:47.664 "thread": "nvmf_tgt_poll_group_000", 00:22:47.664 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:47.664 "listen_address": { 00:22:47.664 "trtype": "TCP", 00:22:47.664 "adrfam": "IPv4", 00:22:47.664 "traddr": "10.0.0.2", 00:22:47.664 "trsvcid": "4420" 00:22:47.664 }, 00:22:47.664 "peer_address": { 00:22:47.664 "trtype": "TCP", 00:22:47.664 "adrfam": "IPv4", 00:22:47.664 "traddr": "10.0.0.1", 00:22:47.664 "trsvcid": "59322" 00:22:47.664 }, 00:22:47.664 "auth": { 00:22:47.664 "state": 
"completed", 00:22:47.664 "digest": "sha512", 00:22:47.664 "dhgroup": "ffdhe8192" 00:22:47.664 } 00:22:47.664 } 00:22:47.664 ]' 00:22:47.664 15:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:47.664 15:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:47.664 15:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:47.924 15:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:47.924 15:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:47.924 15:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:47.924 15:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:47.924 15:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:48.185 15:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTRiY2MxNmY1M2UwYmI4N2ZkODk5ODk3Mzg0ZmU0ZjFhNWIyNjJlMTkzMGQ4NjA06mlI8w==: --dhchap-ctrl-secret DHHC-1:03:OGI4NWNhNThlN2YwZjc0OTk1MjU2MmU4NmU3ODMwOWI0M2ZkZTU0Y2U4ZTkyY2ZjZDkyYWVlMGFkNmViN2FkNqls0BI=: 00:22:48.185 15:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NTRiY2MxNmY1M2UwYmI4N2ZkODk5ODk3Mzg0ZmU0ZjFhNWIyNjJlMTkzMGQ4NjA06mlI8w==: --dhchap-ctrl-secret 
DHHC-1:03:OGI4NWNhNThlN2YwZjc0OTk1MjU2MmU4NmU3ODMwOWI0M2ZkZTU0Y2U4ZTkyY2ZjZDkyYWVlMGFkNmViN2FkNqls0BI=: 00:22:48.756 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:48.756 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:48.756 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:48.756 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.756 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.756 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.756 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:22:48.756 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.756 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.756 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.756 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:22:48.756 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:48.756 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:22:48.756 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local 
arg=bdev_connect 00:22:48.756 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:48.756 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:48.756 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:48.756 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:22:48.756 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:48.756 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:49.327 request: 00:22:49.327 { 00:22:49.327 "name": "nvme0", 00:22:49.327 "trtype": "tcp", 00:22:49.327 "traddr": "10.0.0.2", 00:22:49.327 "adrfam": "ipv4", 00:22:49.327 "trsvcid": "4420", 00:22:49.327 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:49.327 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:49.327 "prchk_reftag": false, 00:22:49.327 "prchk_guard": false, 00:22:49.327 "hdgst": false, 00:22:49.327 "ddgst": false, 00:22:49.327 "dhchap_key": "key2", 00:22:49.327 "allow_unrecognized_csi": false, 00:22:49.327 "method": "bdev_nvme_attach_controller", 00:22:49.327 "req_id": 1 00:22:49.327 } 00:22:49.327 Got JSON-RPC error response 00:22:49.327 response: 00:22:49.327 { 00:22:49.327 "code": -5, 00:22:49.327 "message": 
"Input/output error" 00:22:49.327 } 00:22:49.327 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:49.327 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:49.327 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:49.327 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:49.327 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:49.327 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.327 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.327 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.327 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:49.327 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.327 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.327 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.327 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:49.327 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:49.327 15:40:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:49.327 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:49.327 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:49.327 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:49.327 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:49.327 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:49.327 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:49.327 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:49.587 request: 00:22:49.587 { 00:22:49.587 "name": "nvme0", 00:22:49.587 "trtype": "tcp", 00:22:49.587 "traddr": "10.0.0.2", 00:22:49.587 "adrfam": "ipv4", 00:22:49.587 "trsvcid": "4420", 00:22:49.587 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:49.587 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:49.587 "prchk_reftag": false, 00:22:49.587 "prchk_guard": false, 00:22:49.587 "hdgst": 
false, 00:22:49.587 "ddgst": false, 00:22:49.587 "dhchap_key": "key1", 00:22:49.587 "dhchap_ctrlr_key": "ckey2", 00:22:49.587 "allow_unrecognized_csi": false, 00:22:49.587 "method": "bdev_nvme_attach_controller", 00:22:49.587 "req_id": 1 00:22:49.587 } 00:22:49.587 Got JSON-RPC error response 00:22:49.587 response: 00:22:49.587 { 00:22:49.587 "code": -5, 00:22:49.587 "message": "Input/output error" 00:22:49.587 } 00:22:49.587 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:49.587 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:49.587 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:49.587 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:49.587 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:49.587 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.587 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.587 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.587 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:22:49.587 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.587 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.587 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.587 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:49.587 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:49.587 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:49.587 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:49.587 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:49.587 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:49.587 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:49.587 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:49.587 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:49.587 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:50.158 request: 00:22:50.158 { 00:22:50.158 "name": "nvme0", 00:22:50.158 "trtype": 
"tcp", 00:22:50.158 "traddr": "10.0.0.2", 00:22:50.158 "adrfam": "ipv4", 00:22:50.158 "trsvcid": "4420", 00:22:50.158 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:50.158 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:50.158 "prchk_reftag": false, 00:22:50.158 "prchk_guard": false, 00:22:50.158 "hdgst": false, 00:22:50.158 "ddgst": false, 00:22:50.158 "dhchap_key": "key1", 00:22:50.158 "dhchap_ctrlr_key": "ckey1", 00:22:50.158 "allow_unrecognized_csi": false, 00:22:50.158 "method": "bdev_nvme_attach_controller", 00:22:50.158 "req_id": 1 00:22:50.158 } 00:22:50.158 Got JSON-RPC error response 00:22:50.158 response: 00:22:50.158 { 00:22:50.158 "code": -5, 00:22:50.158 "message": "Input/output error" 00:22:50.158 } 00:22:50.158 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:50.158 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:50.158 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:50.158 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:50.158 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:50.158 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.158 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.158 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.158 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3136582 00:22:50.158 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@950 -- # '[' -z 3136582 ']' 00:22:50.158 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 3136582 00:22:50.158 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:22:50.158 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:50.158 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3136582 00:22:50.158 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:50.158 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:50.158 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3136582' 00:22:50.158 killing process with pid 3136582 00:22:50.158 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 3136582 00:22:50.158 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 3136582 00:22:50.419 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:50.419 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:22:50.419 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:50.419 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.419 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=3162977 00:22:50.420 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 3162977 00:22:50.420 15:40:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:50.420 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3162977 ']' 00:22:50.420 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:50.420 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:50.420 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:50.420 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:50.420 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.361 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:51.361 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:22:51.361 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:51.361 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:51.361 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.361 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:51.361 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:51.361 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 3162977 00:22:51.361 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3162977 ']' 00:22:51.361 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:51.361 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:51.361 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:51.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:51.361 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:51.361 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.361 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:51.361 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:22:51.361 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:22:51.361 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.361 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.361 null0 00:22:51.622 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.622 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:51.622 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.XZ5 00:22:51.622 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.622 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.623 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.623 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.WlP ]] 00:22:51.623 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.WlP 00:22:51.623 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.623 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.623 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.623 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:51.623 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.gp8 00:22:51.623 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.623 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.623 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.623 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.PAK ]] 00:22:51.623 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.PAK 00:22:51.623 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.623 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:51.623 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.623 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:51.623 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.g40 00:22:51.623 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.623 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.623 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.623 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.uYK ]] 00:22:51.623 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.uYK 00:22:51.623 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.623 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.623 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.623 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:51.623 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.2Es 00:22:51.623 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.623 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.623 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:22:51.623 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:22:51.623 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:22:51.623 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:51.623 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:51.623 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:51.623 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:51.623 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:51.623 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:51.623 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.623 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.623 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.623 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:51.623 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:51.623 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:52.194 nvme0n1 00:22:52.454 15:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:52.454 15:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:52.454 15:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:52.454 15:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:52.454 15:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:52.454 15:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.454 15:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.454 15:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.454 15:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:52.454 { 00:22:52.454 "cntlid": 1, 00:22:52.454 "qid": 0, 00:22:52.454 "state": "enabled", 00:22:52.454 "thread": "nvmf_tgt_poll_group_000", 00:22:52.454 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:52.454 "listen_address": { 00:22:52.454 "trtype": "TCP", 00:22:52.454 "adrfam": "IPv4", 00:22:52.454 "traddr": "10.0.0.2", 00:22:52.454 "trsvcid": "4420" 00:22:52.454 }, 00:22:52.454 "peer_address": { 00:22:52.454 "trtype": "TCP", 00:22:52.454 "adrfam": "IPv4", 00:22:52.454 "traddr": 
"10.0.0.1", 00:22:52.454 "trsvcid": "50808" 00:22:52.454 }, 00:22:52.454 "auth": { 00:22:52.454 "state": "completed", 00:22:52.454 "digest": "sha512", 00:22:52.454 "dhgroup": "ffdhe8192" 00:22:52.454 } 00:22:52.454 } 00:22:52.454 ]' 00:22:52.454 15:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:52.714 15:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:52.714 15:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:52.714 15:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:52.714 15:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:52.714 15:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:52.714 15:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:52.714 15:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:52.974 15:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yzk1ZDgyYTljYjhmNDZiZTMzMjQwYzU3NjE4MTIwYjY5ZDE0OWJhZGViZDdjZWQxNjY4ZWI5NWJjMjE1ZmY1Zbvp+Ws=: 00:22:52.974 15:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:Yzk1ZDgyYTljYjhmNDZiZTMzMjQwYzU3NjE4MTIwYjY5ZDE0OWJhZGViZDdjZWQxNjY4ZWI5NWJjMjE1ZmY1Zbvp+Ws=: 00:22:53.545 15:40:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:53.545 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:53.545 15:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:53.545 15:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.545 15:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.545 15:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.545 15:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:53.545 15:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.545 15:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.545 15:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.545 15:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:53.545 15:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:53.807 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:53.807 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:53.807 15:40:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:53.807 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:53.807 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:53.807 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:53.807 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:53.807 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:53.807 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:53.807 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:53.807 request: 00:22:53.807 { 00:22:53.807 "name": "nvme0", 00:22:53.807 "trtype": "tcp", 00:22:53.807 "traddr": "10.0.0.2", 00:22:53.807 "adrfam": "ipv4", 00:22:53.807 "trsvcid": "4420", 00:22:53.807 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:53.807 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:53.807 "prchk_reftag": false, 00:22:53.807 "prchk_guard": false, 00:22:53.807 "hdgst": false, 00:22:53.807 "ddgst": false, 00:22:53.807 "dhchap_key": "key3", 00:22:53.807 
"allow_unrecognized_csi": false, 00:22:53.807 "method": "bdev_nvme_attach_controller", 00:22:53.807 "req_id": 1 00:22:53.807 } 00:22:53.807 Got JSON-RPC error response 00:22:53.807 response: 00:22:53.807 { 00:22:53.807 "code": -5, 00:22:53.807 "message": "Input/output error" 00:22:53.807 } 00:22:53.807 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:53.807 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:53.807 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:53.807 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:53.807 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:22:53.807 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:22:53.807 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:53.807 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:54.067 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:54.067 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:54.067 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:54.067 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:54.067 15:40:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:54.067 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:54.067 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:54.067 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:54.067 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:54.067 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:54.328 request: 00:22:54.328 { 00:22:54.328 "name": "nvme0", 00:22:54.328 "trtype": "tcp", 00:22:54.328 "traddr": "10.0.0.2", 00:22:54.328 "adrfam": "ipv4", 00:22:54.328 "trsvcid": "4420", 00:22:54.328 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:54.328 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:54.328 "prchk_reftag": false, 00:22:54.328 "prchk_guard": false, 00:22:54.328 "hdgst": false, 00:22:54.328 "ddgst": false, 00:22:54.328 "dhchap_key": "key3", 00:22:54.328 "allow_unrecognized_csi": false, 00:22:54.328 "method": "bdev_nvme_attach_controller", 00:22:54.328 "req_id": 1 00:22:54.328 } 00:22:54.328 Got JSON-RPC error response 00:22:54.328 response: 00:22:54.328 { 00:22:54.328 "code": -5, 00:22:54.328 "message": "Input/output error" 00:22:54.328 } 00:22:54.328 
15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:54.328 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:54.328 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:54.328 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:54.328 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:54.328 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:22:54.328 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:54.328 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:54.328 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:54.328 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:54.588 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:54.588 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.588 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.588 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.589 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:54.589 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.589 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.589 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.589 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:54.589 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:54.589 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:54.589 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:54.589 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:54.589 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:54.589 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:54.589 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:54.589 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:54.589 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:54.850 request: 00:22:54.850 { 00:22:54.850 "name": "nvme0", 00:22:54.850 "trtype": "tcp", 00:22:54.850 "traddr": "10.0.0.2", 00:22:54.850 "adrfam": "ipv4", 00:22:54.850 "trsvcid": "4420", 00:22:54.850 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:54.850 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:54.850 "prchk_reftag": false, 00:22:54.850 "prchk_guard": false, 00:22:54.850 "hdgst": false, 00:22:54.850 "ddgst": false, 00:22:54.850 "dhchap_key": "key0", 00:22:54.850 "dhchap_ctrlr_key": "key1", 00:22:54.850 "allow_unrecognized_csi": false, 00:22:54.850 "method": "bdev_nvme_attach_controller", 00:22:54.850 "req_id": 1 00:22:54.850 } 00:22:54.850 Got JSON-RPC error response 00:22:54.850 response: 00:22:54.850 { 00:22:54.850 "code": -5, 00:22:54.850 "message": "Input/output error" 00:22:54.850 } 00:22:54.850 15:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:54.850 15:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:54.850 15:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:54.850 15:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:54.850 15:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:22:54.850 15:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:54.850 15:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:55.110 nvme0n1 00:22:55.110 15:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:22:55.110 15:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:22:55.110 15:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:55.370 15:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:55.370 15:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:55.370 15:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:55.370 15:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:22:55.370 15:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.370 15:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:22:55.370 15:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.370 15:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:55.370 15:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:55.370 15:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:56.308 nvme0n1 00:22:56.308 15:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:22:56.308 15:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:22:56.308 15:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:56.308 15:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:56.308 15:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:56.308 15:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.308 15:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.308 
15:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.308 15:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:22:56.308 15:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:22:56.308 15:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:56.567 15:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:56.567 15:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ODM3MWExYzBhZmNhMDY4YmIxY2E1OTAwZWZkOTQyZjVjZTcxYWE4OThmYWIwZWM3ve4lJA==: --dhchap-ctrl-secret DHHC-1:03:Yzk1ZDgyYTljYjhmNDZiZTMzMjQwYzU3NjE4MTIwYjY5ZDE0OWJhZGViZDdjZWQxNjY4ZWI5NWJjMjE1ZmY1Zbvp+Ws=: 00:22:56.568 15:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ODM3MWExYzBhZmNhMDY4YmIxY2E1OTAwZWZkOTQyZjVjZTcxYWE4OThmYWIwZWM3ve4lJA==: --dhchap-ctrl-secret DHHC-1:03:Yzk1ZDgyYTljYjhmNDZiZTMzMjQwYzU3NjE4MTIwYjY5ZDE0OWJhZGViZDdjZWQxNjY4ZWI5NWJjMjE1ZmY1Zbvp+Ws=: 00:22:57.137 15:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:22:57.137 15:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:22:57.137 15:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:22:57.137 15:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:22:57.137 15:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:22:57.137 15:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:22:57.137 15:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:22:57.137 15:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:57.137 15:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:57.397 15:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:22:57.397 15:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:57.397 15:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:22:57.397 15:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:57.397 15:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:57.397 15:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:57.397 15:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:57.397 15:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:57.397 15:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:57.397 15:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:57.966 request: 00:22:57.966 { 00:22:57.966 "name": "nvme0", 00:22:57.966 "trtype": "tcp", 00:22:57.966 "traddr": "10.0.0.2", 00:22:57.966 "adrfam": "ipv4", 00:22:57.966 "trsvcid": "4420", 00:22:57.966 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:57.966 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:57.966 "prchk_reftag": false, 00:22:57.966 "prchk_guard": false, 00:22:57.966 "hdgst": false, 00:22:57.966 "ddgst": false, 00:22:57.966 "dhchap_key": "key1", 00:22:57.966 "allow_unrecognized_csi": false, 00:22:57.966 "method": "bdev_nvme_attach_controller", 00:22:57.966 "req_id": 1 00:22:57.966 } 00:22:57.966 Got JSON-RPC error response 00:22:57.966 response: 00:22:57.966 { 00:22:57.966 "code": -5, 00:22:57.966 "message": "Input/output error" 00:22:57.966 } 00:22:57.966 15:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:57.966 15:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:57.966 15:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:57.966 15:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:57.966 15:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:57.966 15:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:57.966 15:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:58.533 nvme0n1 00:22:58.533 15:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:22:58.533 15:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:22:58.533 15:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:58.807 15:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:58.807 15:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:58.807 15:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:59.110 15:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:59.110 15:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.110 15:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:59.110 15:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.110 15:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:22:59.110 15:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:59.110 15:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:59.110 nvme0n1 00:22:59.110 15:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:22:59.110 15:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:22:59.110 15:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:59.393 15:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:59.393 15:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:59.393 15:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:59.697 15:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:59.697 15:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.697 15:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.697 15:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.697 15:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MWIwMDU0NDg3MTU0ZmRkMGFmYzE2YzJmNjFlYmYxOWYg1wvC: '' 2s 00:22:59.697 15:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:59.697 15:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:59.697 15:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MWIwMDU0NDg3MTU0ZmRkMGFmYzE2YzJmNjFlYmYxOWYg1wvC: 00:22:59.697 15:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:22:59.697 15:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:59.697 15:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:59.697 15:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MWIwMDU0NDg3MTU0ZmRkMGFmYzE2YzJmNjFlYmYxOWYg1wvC: ]] 00:22:59.697 15:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MWIwMDU0NDg3MTU0ZmRkMGFmYzE2YzJmNjFlYmYxOWYg1wvC: 00:22:59.697 15:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:22:59.697 15:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:59.697 15:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:23:01.931 
15:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:23:01.931 15:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:23:01.931 15:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:23:01.931 15:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:23:01.931 15:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:23:01.931 15:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:23:01.931 15:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:23:01.931 15:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key key2 00:23:01.931 15:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.931 15:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.931 15:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.931 15:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ODM3MWExYzBhZmNhMDY4YmIxY2E1OTAwZWZkOTQyZjVjZTcxYWE4OThmYWIwZWM3ve4lJA==: 2s 00:23:01.931 15:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:23:01.931 15:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:23:01.931 15:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:23:01.931 15:40:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ODM3MWExYzBhZmNhMDY4YmIxY2E1OTAwZWZkOTQyZjVjZTcxYWE4OThmYWIwZWM3ve4lJA==: 00:23:01.931 15:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:23:01.931 15:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:23:01.931 15:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:23:01.931 15:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ODM3MWExYzBhZmNhMDY4YmIxY2E1OTAwZWZkOTQyZjVjZTcxYWE4OThmYWIwZWM3ve4lJA==: ]] 00:23:01.931 15:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ODM3MWExYzBhZmNhMDY4YmIxY2E1OTAwZWZkOTQyZjVjZTcxYWE4OThmYWIwZWM3ve4lJA==: 00:23:01.931 15:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:23:01.931 15:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:23:03.855 15:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:23:03.855 15:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:23:03.855 15:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:23:03.855 15:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:23:03.855 15:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:23:03.855 15:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:23:03.855 15:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:23:03.855 15:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:03.855 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:03.855 15:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:03.855 15:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.855 15:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.855 15:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.855 15:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:03.855 15:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:03.855 15:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:04.428 nvme0n1 00:23:04.428 15:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:23:04.428 15:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.428 15:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.428 15:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.428 15:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:04.428 15:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:05.000 15:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:23:05.000 15:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:23:05.000 15:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:05.000 15:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.000 15:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:05.000 15:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.000 15:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.000 15:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.000 15:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:23:05.000 15:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:23:05.261 15:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:23:05.261 15:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:23:05.261 15:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:05.523 15:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.523 15:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:05.523 15:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.523 15:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.523 15:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.523 15:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:05.523 15:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:05.523 15:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:05.523 15:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@638 -- # local arg=hostrpc 00:23:05.523 15:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:05.523 15:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:23:05.523 15:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:05.523 15:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:05.523 15:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:05.783 request: 00:23:05.783 { 00:23:05.783 "name": "nvme0", 00:23:05.783 "dhchap_key": "key1", 00:23:05.783 "dhchap_ctrlr_key": "key3", 00:23:05.783 "method": "bdev_nvme_set_keys", 00:23:05.783 "req_id": 1 00:23:05.783 } 00:23:05.783 Got JSON-RPC error response 00:23:05.783 response: 00:23:05.783 { 00:23:05.783 "code": -13, 00:23:05.783 "message": "Permission denied" 00:23:05.783 } 00:23:05.783 15:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:05.783 15:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:05.783 15:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:05.783 15:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:06.045 15:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:23:06.045 15:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:06.045 15:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:23:06.045 15:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:23:06.045 15:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:23:06.988 15:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:23:06.988 15:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:23:06.988 15:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:07.249 15:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:23:07.250 15:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:07.250 15:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.250 15:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.250 15:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.250 15:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:07.250 15:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:07.250 15:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:08.237 nvme0n1 00:23:08.237 15:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:08.237 15:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.237 15:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.237 15:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.237 15:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:08.237 15:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:08.237 15:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:08.237 15:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:23:08.237 15:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:23:08.237 15:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:23:08.237 15:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:08.237 15:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:08.237 15:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:08.498 request: 00:23:08.498 { 00:23:08.498 "name": "nvme0", 00:23:08.498 "dhchap_key": "key2", 00:23:08.498 "dhchap_ctrlr_key": "key0", 00:23:08.498 "method": "bdev_nvme_set_keys", 00:23:08.498 "req_id": 1 00:23:08.498 } 00:23:08.498 Got JSON-RPC error response 00:23:08.498 response: 00:23:08.498 { 00:23:08.498 "code": -13, 00:23:08.498 "message": "Permission denied" 00:23:08.498 } 00:23:08.498 15:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:08.498 15:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:08.498 15:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:08.498 15:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:08.498 15:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:23:08.498 15:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:23:08.498 15:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:08.758 
15:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:23:08.758 15:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:23:09.704 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:23:09.704 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:23:09.704 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:09.964 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:23:09.964 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:23:09.964 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:23:09.964 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3136921 00:23:09.964 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 3136921 ']' 00:23:09.964 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 3136921 00:23:09.964 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:23:09.964 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:09.964 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3136921 00:23:09.965 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:09.965 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:09.965 15:40:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3136921' 00:23:09.965 killing process with pid 3136921 00:23:09.965 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 3136921 00:23:09.965 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 3136921 00:23:10.226 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:23:10.226 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:23:10.226 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:23:10.226 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:10.226 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:23:10.226 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:10.226 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:10.226 rmmod nvme_tcp 00:23:10.226 rmmod nvme_fabrics 00:23:10.226 rmmod nvme_keyring 00:23:10.226 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:10.226 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:23:10.226 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:23:10.226 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@513 -- # '[' -n 3162977 ']' 00:23:10.226 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # killprocess 3162977 00:23:10.226 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 3162977 ']' 00:23:10.226 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # kill -0 3162977 00:23:10.226 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:23:10.226 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:10.226 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3162977 00:23:10.226 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:10.226 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:10.226 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3162977' 00:23:10.226 killing process with pid 3162977 00:23:10.226 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 3162977 00:23:10.226 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 3162977 00:23:10.486 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:23:10.486 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:23:10.486 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:23:10.486 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:23:10.486 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-save 00:23:10.486 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:23:10.486 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-restore 00:23:10.486 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:10.486 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:10.486 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:10.486 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:10.486 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:12.406 15:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:12.406 15:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.XZ5 /tmp/spdk.key-sha256.gp8 /tmp/spdk.key-sha384.g40 /tmp/spdk.key-sha512.2Es /tmp/spdk.key-sha512.WlP /tmp/spdk.key-sha384.PAK /tmp/spdk.key-sha256.uYK '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:23:12.406 00:23:12.406 real 2m37.498s 00:23:12.406 user 5m53.601s 00:23:12.406 sys 0m24.809s 00:23:12.406 15:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:12.406 15:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.406 ************************************ 00:23:12.406 END TEST nvmf_auth_target 00:23:12.406 ************************************ 00:23:12.406 15:40:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:23:12.406 15:40:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:12.406 15:40:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:23:12.406 15:40:51 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:12.406 15:40:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:12.667 ************************************ 00:23:12.667 START TEST nvmf_bdevio_no_huge 00:23:12.667 ************************************ 00:23:12.667 15:40:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:12.667 * Looking for test storage... 00:23:12.667 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:12.667 15:40:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:12.667 15:40:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lcov --version 00:23:12.667 15:40:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:12.667 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:12.667 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:12.667 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:12.667 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:12.667 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:23:12.667 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:23:12.667 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:23:12.667 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:23:12.667 15:40:52 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:23:12.667 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:23:12.667 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:23:12.667 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:12.667 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:23:12.667 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:23:12.667 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:12.667 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:12.667 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:23:12.667 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:23:12.667 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:12.667 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:23:12.667 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:23:12.667 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:23:12.668 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:23:12.668 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:12.668 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:23:12.668 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # 
ver2[v]=2 00:23:12.668 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:12.668 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:12.668 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:23:12.668 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:12.668 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:12.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:12.668 --rc genhtml_branch_coverage=1 00:23:12.668 --rc genhtml_function_coverage=1 00:23:12.668 --rc genhtml_legend=1 00:23:12.668 --rc geninfo_all_blocks=1 00:23:12.668 --rc geninfo_unexecuted_blocks=1 00:23:12.668 00:23:12.668 ' 00:23:12.668 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:12.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:12.668 --rc genhtml_branch_coverage=1 00:23:12.668 --rc genhtml_function_coverage=1 00:23:12.668 --rc genhtml_legend=1 00:23:12.668 --rc geninfo_all_blocks=1 00:23:12.668 --rc geninfo_unexecuted_blocks=1 00:23:12.668 00:23:12.668 ' 00:23:12.668 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:12.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:12.668 --rc genhtml_branch_coverage=1 00:23:12.668 --rc genhtml_function_coverage=1 00:23:12.668 --rc genhtml_legend=1 00:23:12.668 --rc geninfo_all_blocks=1 00:23:12.668 --rc geninfo_unexecuted_blocks=1 00:23:12.668 00:23:12.668 ' 00:23:12.668 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:12.668 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:12.668 --rc genhtml_branch_coverage=1 00:23:12.668 --rc genhtml_function_coverage=1 00:23:12.668 --rc genhtml_legend=1 00:23:12.668 --rc geninfo_all_blocks=1 00:23:12.668 --rc geninfo_unexecuted_blocks=1 00:23:12.668 00:23:12.668 ' 00:23:12.668 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:12.668 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:23:12.668 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:12.668 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:12.668 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:12.668 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:12.668 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:12.668 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:12.668 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:12.668 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:12.668 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:12.668 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:12.668 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:12.668 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:12.668 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:12.668 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:12.668 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:12.668 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:12.668 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:12.668 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:23:12.668 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:12.668 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:12.668 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:12.668 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.668 15:40:52 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.668 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.668 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:23:12.668 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.668 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:23:12.669 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:12.669 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:12.669 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:12.669 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:12.669 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:12.669 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:12.669 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:12.669 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:12.669 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:12.669 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:12.669 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:23:12.669 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:12.669 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:23:12.669 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:23:12.669 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:12.669 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # prepare_net_devs 00:23:12.669 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@434 -- # local -g is_hw=no 00:23:12.669 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # remove_spdk_ns 00:23:12.669 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:12.669 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:12.669 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:12.669 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:23:12.669 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:23:12.669 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:23:12.669 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:20.816 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:20.816 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:23:20.816 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:23:20.816 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:20.816 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:20.816 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:20.816 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:20.816 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:23:20.816 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:20.816 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:23:20.816 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:23:20.816 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:23:20.816 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:23:20.816 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:23:20.816 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:23:20.816 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:20.816 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:20.816 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:20.816 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:20.816 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:20.816 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:20.816 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:20.816 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:20.816 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:20.816 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:20.816 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:20.816 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:23:20.816 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:23:20.816 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:23:20.816 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:23:20.816 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:23:20.816 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:23:20.816 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:20.816 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:20.816 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:20.816 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:23:20.816 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:23:20.816 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:20.816 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:20.816 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:23:20.816 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:20.816 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:20.816 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:20.816 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:23:20.816 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:23:20.816 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:20.816 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:20.816 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:23:20.816 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:23:20.816 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:23:20.816 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:23:20.816 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:20.816 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@407 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:20.816 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:20.816 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:20.816 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:20.816 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:20.816 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:20.816 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:20.816 Found net devices under 0000:31:00.0: cvl_0_0 00:23:20.816 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:20.816 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:20.817 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:20.817 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:20.817 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:20.817 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:20.817 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:20.817 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:20.817 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: 
cvl_0_1' 00:23:20.817 Found net devices under 0000:31:00.1: cvl_0_1 00:23:20.817 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:20.817 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:23:20.817 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # is_hw=yes 00:23:20.817 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:23:20.817 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:23:20.817 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:23:20.817 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:20.817 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:20.817 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:20.817 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:20.817 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:20.817 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:20.817 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:20.817 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:20.817 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:20.817 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:23:20.817 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:20.817 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:20.817 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:20.817 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:20.817 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:20.817 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:20.817 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:20.817 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:20.817 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:20.817 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:20.817 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:20.817 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:20.817 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:20.817 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:20.817 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.532 ms 00:23:20.817 00:23:20.817 --- 10.0.0.2 ping statistics --- 00:23:20.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.817 rtt min/avg/max/mdev = 0.532/0.532/0.532/0.000 ms 00:23:20.817 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:20.817 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:20.817 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:23:20.817 00:23:20.817 --- 10.0.0.1 ping statistics --- 00:23:20.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.817 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:23:20.817 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:20.817 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # return 0 00:23:20.817 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:23:20.817 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:20.817 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:23:20.817 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:23:20.817 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:20.817 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:23:20.817 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:23:20.817 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:23:20.817 15:40:59 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:20.817 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:20.817 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:20.817 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # nvmfpid=3171296 00:23:20.817 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # waitforlisten 3171296 00:23:20.817 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:23:20.817 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 3171296 ']' 00:23:20.817 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:20.817 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:20.817 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:20.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:20.817 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:20.817 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:20.817 [2024-10-01 15:40:59.870234] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
00:23:20.817 [2024-10-01 15:40:59.870303] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:23:20.817 [2024-10-01 15:40:59.927826] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:20.817 [2024-10-01 15:40:59.965736] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:20.817 [2024-10-01 15:41:00.052808] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:20.817 [2024-10-01 15:41:00.052869] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:20.817 [2024-10-01 15:41:00.052878] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:20.817 [2024-10-01 15:41:00.052885] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:20.817 [2024-10-01 15:41:00.052892] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:20.817 [2024-10-01 15:41:00.053089] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:23:20.817 [2024-10-01 15:41:00.053225] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:23:20.817 [2024-10-01 15:41:00.053386] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:23:20.817 [2024-10-01 15:41:00.053386] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:23:21.390 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:21.390 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:23:21.390 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:21.390 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:21.390 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:21.390 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:21.390 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:21.390 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.390 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:21.390 [2024-10-01 15:41:00.787103] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:21.390 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.390 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:21.390 15:41:00 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.390 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:21.390 Malloc0 00:23:21.390 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.390 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:21.390 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.390 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:21.390 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.390 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:21.390 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.390 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:21.390 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.390 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:21.390 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.390 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:21.390 [2024-10-01 15:41:00.841159] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:21.651 15:41:00 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.651 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:23:21.651 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:23:21.651 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # config=() 00:23:21.651 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # local subsystem config 00:23:21.651 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:23:21.651 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:23:21.651 { 00:23:21.651 "params": { 00:23:21.651 "name": "Nvme$subsystem", 00:23:21.651 "trtype": "$TEST_TRANSPORT", 00:23:21.651 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:21.651 "adrfam": "ipv4", 00:23:21.651 "trsvcid": "$NVMF_PORT", 00:23:21.651 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:21.651 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:21.651 "hdgst": ${hdgst:-false}, 00:23:21.651 "ddgst": ${ddgst:-false} 00:23:21.651 }, 00:23:21.651 "method": "bdev_nvme_attach_controller" 00:23:21.651 } 00:23:21.651 EOF 00:23:21.651 )") 00:23:21.651 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # cat 00:23:21.651 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # jq . 
00:23:21.651 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@581 -- # IFS=, 00:23:21.651 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:23:21.651 "params": { 00:23:21.651 "name": "Nvme1", 00:23:21.651 "trtype": "tcp", 00:23:21.651 "traddr": "10.0.0.2", 00:23:21.651 "adrfam": "ipv4", 00:23:21.651 "trsvcid": "4420", 00:23:21.651 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:21.651 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:21.651 "hdgst": false, 00:23:21.651 "ddgst": false 00:23:21.651 }, 00:23:21.652 "method": "bdev_nvme_attach_controller" 00:23:21.652 }' 00:23:21.652 [2024-10-01 15:41:00.898958] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:23:21.652 [2024-10-01 15:41:00.899030] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3171451 ] 00:23:21.652 [2024-10-01 15:41:00.945090] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:23:21.652 [2024-10-01 15:41:00.983069] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:21.652 [2024-10-01 15:41:01.063534] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:21.652 [2024-10-01 15:41:01.063696] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:21.652 [2024-10-01 15:41:01.063696] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:21.913 I/O targets: 00:23:21.913 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:23:21.913 00:23:21.913 00:23:21.913 CUnit - A unit testing framework for C - Version 2.1-3 00:23:21.913 http://cunit.sourceforge.net/ 00:23:21.913 00:23:21.913 00:23:21.913 Suite: bdevio tests on: Nvme1n1 00:23:21.913 Test: blockdev write read block ...passed 00:23:22.176 Test: blockdev write zeroes read block ...passed 00:23:22.176 Test: blockdev write zeroes read no split ...passed 00:23:22.176 Test: blockdev write zeroes read split ...passed 00:23:22.176 Test: blockdev write zeroes read split partial ...passed 00:23:22.176 Test: blockdev reset ...[2024-10-01 15:41:01.413886] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:22.176 [2024-10-01 15:41:01.413996] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8ee4b0 (9): Bad file descriptor 00:23:22.176 [2024-10-01 15:41:01.470247] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:22.176 passed 00:23:22.176 Test: blockdev write read 8 blocks ...passed 00:23:22.176 Test: blockdev write read size > 128k ...passed 00:23:22.176 Test: blockdev write read invalid size ...passed 00:23:22.176 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:22.176 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:22.176 Test: blockdev write read max offset ...passed 00:23:22.438 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:22.438 Test: blockdev writev readv 8 blocks ...passed 00:23:22.438 Test: blockdev writev readv 30 x 1block ...passed 00:23:22.438 Test: blockdev writev readv block ...passed 00:23:22.438 Test: blockdev writev readv size > 128k ...passed 00:23:22.438 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:22.438 Test: blockdev comparev and writev ...[2024-10-01 15:41:01.779277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:22.438 [2024-10-01 15:41:01.779334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.438 [2024-10-01 15:41:01.779352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:22.438 [2024-10-01 15:41:01.779361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:22.438 [2024-10-01 15:41:01.779947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:22.438 [2024-10-01 15:41:01.779966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:22.438 [2024-10-01 15:41:01.779981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:22.438 [2024-10-01 15:41:01.779996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:22.438 [2024-10-01 15:41:01.780542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:22.438 [2024-10-01 15:41:01.780553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:22.438 [2024-10-01 15:41:01.780569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:22.438 [2024-10-01 15:41:01.780578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:22.438 [2024-10-01 15:41:01.781138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:22.438 [2024-10-01 15:41:01.781150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:22.438 [2024-10-01 15:41:01.781165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:22.438 [2024-10-01 15:41:01.781175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:22.438 passed 00:23:22.438 Test: blockdev nvme passthru rw ...passed 00:23:22.438 Test: blockdev nvme passthru vendor specific ...[2024-10-01 15:41:01.865820] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:22.438 [2024-10-01 15:41:01.865839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:22.438 [2024-10-01 15:41:01.866223] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:22.438 [2024-10-01 15:41:01.866236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:22.438 [2024-10-01 15:41:01.866619] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:22.438 [2024-10-01 15:41:01.866630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:22.438 [2024-10-01 15:41:01.867031] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:22.438 [2024-10-01 15:41:01.867044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:22.438 passed 00:23:22.438 Test: blockdev nvme admin passthru ...passed 00:23:22.699 Test: blockdev copy ...passed 00:23:22.699 00:23:22.699 Run Summary: Type Total Ran Passed Failed Inactive 00:23:22.699 suites 1 1 n/a 0 0 00:23:22.699 tests 23 23 23 0 0 00:23:22.699 asserts 152 152 152 0 n/a 00:23:22.699 00:23:22.699 Elapsed time = 1.294 seconds 00:23:22.960 15:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:22.960 15:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.960 15:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:22.961 15:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.961 15:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:23:22.961 15:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:23:22.961 15:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # nvmfcleanup 00:23:22.961 15:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:23:22.961 15:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:22.961 15:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:23:22.961 15:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:22.961 15:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:22.961 rmmod nvme_tcp 00:23:22.961 rmmod nvme_fabrics 00:23:22.961 rmmod nvme_keyring 00:23:22.961 15:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:22.961 15:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:23:22.961 15:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:23:22.961 15:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@513 -- # '[' -n 3171296 ']' 00:23:22.961 15:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # killprocess 3171296 00:23:22.961 15:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 3171296 ']' 00:23:22.961 15:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 3171296 00:23:22.961 15:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:23:22.961 15:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:22.961 15:41:02 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3171296 00:23:23.222 15:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:23:23.222 15:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:23:23.222 15:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3171296' 00:23:23.222 killing process with pid 3171296 00:23:23.222 15:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 3171296 00:23:23.222 15:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 3171296 00:23:23.484 15:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:23:23.484 15:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:23:23.484 15:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:23:23.484 15:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:23:23.484 15:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # iptables-save 00:23:23.484 15:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:23:23.484 15:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # iptables-restore 00:23:23.484 15:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:23.484 15:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:23.484 15:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:23:23.484 15:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:23.484 15:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:26.027 15:41:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:26.028 00:23:26.028 real 0m12.994s 00:23:26.028 user 0m15.200s 00:23:26.028 sys 0m6.951s 00:23:26.028 15:41:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:26.028 15:41:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:26.028 ************************************ 00:23:26.028 END TEST nvmf_bdevio_no_huge 00:23:26.028 ************************************ 00:23:26.028 15:41:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:26.028 15:41:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:26.028 15:41:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:26.028 15:41:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:26.028 ************************************ 00:23:26.028 START TEST nvmf_tls 00:23:26.028 ************************************ 00:23:26.028 15:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:26.028 * Looking for test storage... 
00:23:26.028 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lcov --version 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:26.028 15:41:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:26.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.028 --rc genhtml_branch_coverage=1 00:23:26.028 --rc genhtml_function_coverage=1 00:23:26.028 --rc genhtml_legend=1 00:23:26.028 --rc geninfo_all_blocks=1 00:23:26.028 --rc geninfo_unexecuted_blocks=1 
00:23:26.028 00:23:26.028 ' 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:26.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.028 --rc genhtml_branch_coverage=1 00:23:26.028 --rc genhtml_function_coverage=1 00:23:26.028 --rc genhtml_legend=1 00:23:26.028 --rc geninfo_all_blocks=1 00:23:26.028 --rc geninfo_unexecuted_blocks=1 00:23:26.028 00:23:26.028 ' 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:26.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.028 --rc genhtml_branch_coverage=1 00:23:26.028 --rc genhtml_function_coverage=1 00:23:26.028 --rc genhtml_legend=1 00:23:26.028 --rc geninfo_all_blocks=1 00:23:26.028 --rc geninfo_unexecuted_blocks=1 00:23:26.028 00:23:26.028 ' 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:26.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.028 --rc genhtml_branch_coverage=1 00:23:26.028 --rc genhtml_function_coverage=1 00:23:26.028 --rc genhtml_legend=1 00:23:26.028 --rc geninfo_all_blocks=1 00:23:26.028 --rc geninfo_unexecuted_blocks=1 00:23:26.028 00:23:26.028 ' 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:26.028 15:41:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:23:26.028 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:26.029 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:26.029 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:26.029 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:26.029 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:26.029 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:26.029 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:26.029 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:26.029 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:26.029 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:26.029 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:26.029 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:23:26.029 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:23:26.029 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:26.029 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # prepare_net_devs 00:23:26.029 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@434 -- # local -g is_hw=no 00:23:26.029 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # remove_spdk_ns 00:23:26.029 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:26.029 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:26.029 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:26.029 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:23:26.029 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:23:26.029 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:23:26.029 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:34.171 Found 0000:31:00.0 (0x8086 - 0x159b) 
00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:34.171 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:34.171 Found net devices under 0000:31:00.0: cvl_0_0 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:34.171 Found net devices under 0000:31:00.1: cvl_0_1 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # (( 2 
== 0 )) 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # is_hw=yes 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip 
netns add cvl_0_0_ns_spdk 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:34.171 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:34.172 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:34.172 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:34.172 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:34.172 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:34.172 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms 00:23:34.172 00:23:34.172 --- 10.0.0.2 ping statistics --- 00:23:34.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.172 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:23:34.172 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:34.172 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:34.172 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:23:34.172 00:23:34.172 --- 10.0.0.1 ping statistics --- 00:23:34.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.172 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:23:34.172 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:34.172 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # return 0 00:23:34.172 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:23:34.172 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:34.172 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:23:34.172 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:23:34.172 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:34.172 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:23:34.172 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:23:34.172 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:34.172 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:34.172 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:34.172 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:34.172 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=3176184 00:23:34.172 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 3176184 00:23:34.172 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:34.172 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3176184 ']' 00:23:34.172 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:34.172 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:34.172 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:34.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:34.172 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:34.172 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:34.172 [2024-10-01 15:41:13.043971] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:23:34.172 [2024-10-01 15:41:13.044038] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:34.172 [2024-10-01 15:41:13.090638] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:34.172 [2024-10-01 15:41:13.139294] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.172 [2024-10-01 15:41:13.184722] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:34.172 [2024-10-01 15:41:13.184779] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:34.172 [2024-10-01 15:41:13.184787] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:34.172 [2024-10-01 15:41:13.184795] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:34.172 [2024-10-01 15:41:13.184801] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:34.172 [2024-10-01 15:41:13.184826] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:34.432 15:41:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:34.432 15:41:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:34.432 15:41:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:34.432 15:41:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:34.432 15:41:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:34.693 15:41:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:34.693 15:41:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:23:34.693 15:41:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:23:34.693 true 00:23:34.693 15:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:23:34.693 15:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:34.954 15:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:23:34.954 15:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:23:34.954 
15:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:35.215 15:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:35.215 15:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:23:35.476 15:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:23:35.476 15:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:23:35.476 15:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:23:35.476 15:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:35.476 15:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:23:35.737 15:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:23:35.737 15:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:23:35.737 15:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:35.737 15:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:23:35.999 15:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:23:35.999 15:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:23:35.999 15:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:23:35.999 15:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:35.999 15:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:23:36.260 15:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:23:36.260 15:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:23:36.260 15:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:23:36.522 15:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:36.522 15:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:23:36.522 15:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:23:36.522 15:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:23:36.522 15:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:23:36.522 15:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:23:36.522 15:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:23:36.522 15:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:23:36.522 15:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:23:36.522 15:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:23:36.522 15:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:23:36.784 15:41:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:36.784 15:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:23:36.784 15:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:23:36.784 15:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:23:36.784 15:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:23:36.784 15:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=ffeeddccbbaa99887766554433221100 00:23:36.784 15:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:23:36.784 15:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:23:36.784 15:41:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:36.784 15:41:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:23:36.784 15:41:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.WomKZ6RnQp 00:23:36.784 15:41:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:23:36.784 15:41:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.J2Zpb2Eto1 00:23:36.784 15:41:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:36.784 15:41:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:36.784 15:41:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.WomKZ6RnQp 00:23:36.784 15:41:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.J2Zpb2Eto1 00:23:36.784 15:41:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:36.784 15:41:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:23:37.045 15:41:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.WomKZ6RnQp 00:23:37.045 15:41:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.WomKZ6RnQp 00:23:37.045 15:41:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:37.307 [2024-10-01 15:41:16.648116] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:37.307 15:41:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:37.568 15:41:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:37.568 [2024-10-01 15:41:17.013028] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:37.568 [2024-10-01 15:41:17.013413] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:37.829 15:41:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:37.829 malloc0 00:23:37.829 15:41:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:38.090 15:41:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.WomKZ6RnQp 00:23:38.352 15:41:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:38.352 15:41:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.WomKZ6RnQp 00:23:50.585 Initializing NVMe Controllers 00:23:50.585 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:50.585 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:50.585 Initialization complete. Launching workers. 
00:23:50.585 ======================================================== 00:23:50.585 Latency(us) 00:23:50.585 Device Information : IOPS MiB/s Average min max 00:23:50.585 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18772.62 73.33 3409.41 1211.95 5212.23 00:23:50.585 ======================================================== 00:23:50.585 Total : 18772.62 73.33 3409.41 1211.95 5212.23 00:23:50.585 00:23:50.585 15:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WomKZ6RnQp 00:23:50.585 15:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:50.585 15:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:50.585 15:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:50.585 15:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.WomKZ6RnQp 00:23:50.585 15:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:50.585 15:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3178945 00:23:50.585 15:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:50.585 15:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3178945 /var/tmp/bdevperf.sock 00:23:50.585 15:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:50.585 15:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3178945 ']' 00:23:50.585 15:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:23:50.585 15:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:50.585 15:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:50.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:50.585 15:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:50.585 15:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:50.586 [2024-10-01 15:41:27.936776] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:23:50.586 [2024-10-01 15:41:27.936837] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3178945 ] 00:23:50.586 [2024-10-01 15:41:27.967140] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:23:50.586 [2024-10-01 15:41:28.013273] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.586 [2024-10-01 15:41:28.044164] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:50.586 15:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:50.586 15:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:50.586 15:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.WomKZ6RnQp 00:23:50.586 15:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:50.586 [2024-10-01 15:41:29.057352] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:50.586 TLSTESTn1 00:23:50.586 15:41:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:50.586 Running I/O for 10 seconds... 
00:24:00.084 4665.00 IOPS, 18.22 MiB/s 4886.00 IOPS, 19.09 MiB/s 5373.00 IOPS, 20.99 MiB/s 5398.00 IOPS, 21.09 MiB/s 5550.00 IOPS, 21.68 MiB/s 5430.67 IOPS, 21.21 MiB/s 5450.71 IOPS, 21.29 MiB/s 5258.25 IOPS, 20.54 MiB/s 5143.33 IOPS, 20.09 MiB/s 5174.50 IOPS, 20.21 MiB/s 00:24:00.084 Latency(us) 00:24:00.084 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:00.084 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:00.084 Verification LBA range: start 0x0 length 0x2000 00:24:00.084 TLSTESTn1 : 10.02 5178.48 20.23 0.00 0.00 24679.66 5761.71 34078.72 00:24:00.084 =================================================================================================================== 00:24:00.084 Total : 5178.48 20.23 0.00 0.00 24679.66 5761.71 34078.72 00:24:00.084 { 00:24:00.084 "results": [ 00:24:00.084 { 00:24:00.084 "job": "TLSTESTn1", 00:24:00.084 "core_mask": "0x4", 00:24:00.084 "workload": "verify", 00:24:00.084 "status": "finished", 00:24:00.084 "verify_range": { 00:24:00.084 "start": 0, 00:24:00.084 "length": 8192 00:24:00.084 }, 00:24:00.084 "queue_depth": 128, 00:24:00.084 "io_size": 4096, 00:24:00.084 "runtime": 10.016654, 00:24:00.084 "iops": 5178.4757664585395, 00:24:00.084 "mibps": 20.22842096272867, 00:24:00.084 "io_failed": 0, 00:24:00.084 "io_timeout": 0, 00:24:00.084 "avg_latency_us": 24679.66383682597, 00:24:00.084 "min_latency_us": 5761.706666666667, 00:24:00.084 "max_latency_us": 34078.72 00:24:00.084 } 00:24:00.084 ], 00:24:00.084 "core_count": 1 00:24:00.084 } 00:24:00.084 15:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:00.084 15:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3178945 00:24:00.084 15:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3178945 ']' 00:24:00.084 15:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 
-- # kill -0 3178945 00:24:00.084 15:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:00.084 15:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:00.084 15:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3178945 00:24:00.085 15:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:00.085 15:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:00.085 15:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3178945' 00:24:00.085 killing process with pid 3178945 00:24:00.085 15:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3178945 00:24:00.085 Received shutdown signal, test time was about 10.000000 seconds 00:24:00.085 00:24:00.085 Latency(us) 00:24:00.085 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:00.085 =================================================================================================================== 00:24:00.085 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:00.085 15:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3178945 00:24:00.085 15:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.J2Zpb2Eto1 00:24:00.085 15:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:24:00.085 15:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.J2Zpb2Eto1 00:24:00.085 15:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 
00:24:00.085 15:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:00.085 15:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:24:00.085 15:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:00.085 15:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.J2Zpb2Eto1 00:24:00.085 15:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:00.085 15:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:00.085 15:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:00.085 15:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.J2Zpb2Eto1 00:24:00.085 15:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:00.085 15:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3181263 00:24:00.085 15:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:00.085 15:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3181263 /var/tmp/bdevperf.sock 00:24:00.085 15:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:00.085 15:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3181263 ']' 00:24:00.085 15:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:00.085 15:41:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:00.085 15:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:00.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:00.085 15:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:00.085 15:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:00.345 [2024-10-01 15:41:39.544301] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:24:00.345 [2024-10-01 15:41:39.544361] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3181263 ] 00:24:00.345 [2024-10-01 15:41:39.574664] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:24:00.345 [2024-10-01 15:41:39.622576] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.345 [2024-10-01 15:41:39.650686] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:00.915 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:00.915 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:00.915 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.J2Zpb2Eto1 00:24:01.174 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:01.174 [2024-10-01 15:41:40.627624] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:01.435 [2024-10-01 15:41:40.637155] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:01.435 [2024-10-01 15:41:40.637673] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f5bc0 (107): Transport endpoint is not connected 00:24:01.435 [2024-10-01 15:41:40.638670] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f5bc0 (9): Bad file descriptor 00:24:01.435 [2024-10-01 15:41:40.639671] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.435 [2024-10-01 15:41:40.639680] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:01.435 [2024-10-01 15:41:40.639686] nvme.c: 
884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:24:01.435 [2024-10-01 15:41:40.639694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.435 request: 00:24:01.435 { 00:24:01.435 "name": "TLSTEST", 00:24:01.435 "trtype": "tcp", 00:24:01.435 "traddr": "10.0.0.2", 00:24:01.435 "adrfam": "ipv4", 00:24:01.435 "trsvcid": "4420", 00:24:01.435 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.435 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:01.435 "prchk_reftag": false, 00:24:01.435 "prchk_guard": false, 00:24:01.435 "hdgst": false, 00:24:01.435 "ddgst": false, 00:24:01.435 "psk": "key0", 00:24:01.435 "allow_unrecognized_csi": false, 00:24:01.435 "method": "bdev_nvme_attach_controller", 00:24:01.435 "req_id": 1 00:24:01.435 } 00:24:01.435 Got JSON-RPC error response 00:24:01.435 response: 00:24:01.435 { 00:24:01.435 "code": -5, 00:24:01.435 "message": "Input/output error" 00:24:01.435 } 00:24:01.435 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3181263 00:24:01.435 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3181263 ']' 00:24:01.435 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3181263 00:24:01.435 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:01.435 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:01.435 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3181263 00:24:01.435 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:01.435 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 
00:24:01.435 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3181263' 00:24:01.435 killing process with pid 3181263 00:24:01.435 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3181263 00:24:01.435 Received shutdown signal, test time was about 10.000000 seconds 00:24:01.435 00:24:01.435 Latency(us) 00:24:01.435 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:01.435 =================================================================================================================== 00:24:01.435 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:01.435 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3181263 00:24:01.435 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:01.435 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:01.435 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:01.435 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:01.435 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:01.435 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.WomKZ6RnQp 00:24:01.435 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:24:01.435 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.WomKZ6RnQp 00:24:01.435 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:24:01.435 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:01.435 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:24:01.435 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:01.435 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.WomKZ6RnQp 00:24:01.436 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:01.436 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:01.436 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:24:01.436 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.WomKZ6RnQp 00:24:01.436 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:01.436 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3181601 00:24:01.436 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:01.436 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3181601 /var/tmp/bdevperf.sock 00:24:01.436 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:01.436 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3181601 ']' 00:24:01.436 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:01.436 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:24:01.436 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:01.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:01.436 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:01.436 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:01.436 [2024-10-01 15:41:40.879200] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:24:01.436 [2024-10-01 15:41:40.879260] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3181601 ] 00:24:01.696 [2024-10-01 15:41:40.909471] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
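The bdevperf attach attempts in this log are driven through SPDK's JSON-RPC interface on a UNIX domain socket (`/var/tmp/bdevperf.sock`), which `scripts/rpc.py` wraps. As a rough illustration of how the request dumped above is shaped, a minimal sketch (the helper name and direct-serialization approach are illustrative only; the test itself uses `rpc.py`, and the field names are copied from the logged request):

```python
import json

def build_attach_request(req_id: int, subnqn: str, hostnqn: str, psk_name: str) -> str:
    # Mirrors the bdev_nvme_attach_controller request printed in the log.
    # Field names come straight from the logged JSON; the function name and
    # serialization style are illustrative, not SPDK code.
    params = {
        "name": "TLSTEST",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": subnqn,
        "hostnqn": hostnqn,
        "prchk_reftag": False,
        "prchk_guard": False,
        "hdgst": False,
        "ddgst": False,
        "psk": psk_name,
        "allow_unrecognized_csi": False,
    }
    req = {"jsonrpc": "2.0", "method": "bdev_nvme_attach_controller",
           "id": req_id, "params": params}
    return json.dumps(req)
```

In the failing cases above the target cannot find a PSK for the host/subsystem identity, so the TCP connection is torn down and the RPC returns `{"code": -5, "message": "Input/output error"}`.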
00:24:01.696 [2024-10-01 15:41:40.957173] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.697 [2024-10-01 15:41:40.984621] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:02.266 15:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:02.266 15:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:02.266 15:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.WomKZ6RnQp 00:24:02.527 15:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:24:02.788 [2024-10-01 15:41:41.981951] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:02.788 [2024-10-01 15:41:41.990347] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:02.788 [2024-10-01 15:41:41.990368] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:02.788 [2024-10-01 15:41:41.990389] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:02.788 [2024-10-01 15:41:41.991329] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2541bc0 (107): Transport endpoint is not connected 00:24:02.788 [2024-10-01 15:41:41.992324] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x2541bc0 (9): Bad file descriptor 00:24:02.788 [2024-10-01 15:41:41.993326] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:02.788 [2024-10-01 15:41:41.993333] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:02.788 [2024-10-01 15:41:41.993338] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:24:02.788 [2024-10-01 15:41:41.993346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:02.788 request: 00:24:02.788 { 00:24:02.788 "name": "TLSTEST", 00:24:02.788 "trtype": "tcp", 00:24:02.788 "traddr": "10.0.0.2", 00:24:02.788 "adrfam": "ipv4", 00:24:02.788 "trsvcid": "4420", 00:24:02.788 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:02.788 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:02.788 "prchk_reftag": false, 00:24:02.788 "prchk_guard": false, 00:24:02.788 "hdgst": false, 00:24:02.788 "ddgst": false, 00:24:02.788 "psk": "key0", 00:24:02.788 "allow_unrecognized_csi": false, 00:24:02.788 "method": "bdev_nvme_attach_controller", 00:24:02.788 "req_id": 1 00:24:02.788 } 00:24:02.788 Got JSON-RPC error response 00:24:02.788 response: 00:24:02.788 { 00:24:02.788 "code": -5, 00:24:02.788 "message": "Input/output error" 00:24:02.788 } 00:24:02.788 15:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3181601 00:24:02.788 15:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3181601 ']' 00:24:02.788 15:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3181601 00:24:02.788 15:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:02.788 15:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:24:02.788 15:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3181601 00:24:02.788 15:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:02.788 15:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:02.788 15:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3181601' 00:24:02.788 killing process with pid 3181601 00:24:02.788 15:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3181601 00:24:02.788 Received shutdown signal, test time was about 10.000000 seconds 00:24:02.788 00:24:02.788 Latency(us) 00:24:02.788 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:02.788 =================================================================================================================== 00:24:02.788 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:02.788 15:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3181601 00:24:02.788 15:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:02.788 15:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:02.788 15:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:02.788 15:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:02.788 15:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:02.788 15:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.WomKZ6RnQp 00:24:02.788 15:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:24:02.788 15:41:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.WomKZ6RnQp 00:24:02.788 15:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:24:02.788 15:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:02.788 15:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:24:02.788 15:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:02.788 15:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.WomKZ6RnQp 00:24:02.788 15:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:02.788 15:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:24:02.788 15:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:02.788 15:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.WomKZ6RnQp 00:24:02.788 15:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:02.788 15:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3181831 00:24:02.788 15:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:02.788 15:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3181831 /var/tmp/bdevperf.sock 00:24:02.788 15:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 
00:24:02.788 15:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3181831 ']' 00:24:02.788 15:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:02.788 15:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:02.788 15:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:02.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:02.788 15:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:02.788 15:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:02.788 [2024-10-01 15:41:42.228707] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:24:02.788 [2024-10-01 15:41:42.228768] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3181831 ] 00:24:03.049 [2024-10-01 15:41:42.259271] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:24:03.049 [2024-10-01 15:41:42.307722] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.049 [2024-10-01 15:41:42.335792] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:03.620 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:03.620 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:03.620 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.WomKZ6RnQp 00:24:03.880 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:04.141 [2024-10-01 15:41:43.365050] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:04.141 [2024-10-01 15:41:43.373674] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:04.141 [2024-10-01 15:41:43.373693] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:04.141 [2024-10-01 15:41:43.373712] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:04.141 [2024-10-01 15:41:43.374180] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ddbc0 (107): Transport endpoint is not connected 00:24:04.141 [2024-10-01 15:41:43.375177] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x9ddbc0 (9): Bad file descriptor 00:24:04.141 [2024-10-01 15:41:43.376179] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:04.141 [2024-10-01 15:41:43.376185] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:04.141 [2024-10-01 15:41:43.376191] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:24:04.141 [2024-10-01 15:41:43.376199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:24:04.141 request: 00:24:04.141 { 00:24:04.141 "name": "TLSTEST", 00:24:04.141 "trtype": "tcp", 00:24:04.141 "traddr": "10.0.0.2", 00:24:04.141 "adrfam": "ipv4", 00:24:04.141 "trsvcid": "4420", 00:24:04.141 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:04.141 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:04.141 "prchk_reftag": false, 00:24:04.141 "prchk_guard": false, 00:24:04.141 "hdgst": false, 00:24:04.141 "ddgst": false, 00:24:04.141 "psk": "key0", 00:24:04.141 "allow_unrecognized_csi": false, 00:24:04.141 "method": "bdev_nvme_attach_controller", 00:24:04.141 "req_id": 1 00:24:04.141 } 00:24:04.141 Got JSON-RPC error response 00:24:04.141 response: 00:24:04.141 { 00:24:04.141 "code": -5, 00:24:04.141 "message": "Input/output error" 00:24:04.141 } 00:24:04.141 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3181831 00:24:04.141 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3181831 ']' 00:24:04.141 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3181831 00:24:04.141 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:04.141 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:24:04.141 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3181831 00:24:04.141 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:04.141 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:04.141 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3181831' 00:24:04.141 killing process with pid 3181831 00:24:04.141 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3181831 00:24:04.141 Received shutdown signal, test time was about 10.000000 seconds 00:24:04.141 00:24:04.141 Latency(us) 00:24:04.141 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:04.141 =================================================================================================================== 00:24:04.141 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:04.141 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3181831 00:24:04.141 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:04.141 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:04.141 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:04.141 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:04.141 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:04.141 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:04.141 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:24:04.141 15:41:43 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:04.141 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:24:04.141 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:04.141 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:24:04.141 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:04.141 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:04.141 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:04.141 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:04.141 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:04.141 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:24:04.141 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:04.141 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3182000 00:24:04.141 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:04.141 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3182000 /var/tmp/bdevperf.sock 00:24:04.141 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:04.141 15:41:43 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3182000 ']' 00:24:04.141 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:04.141 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:04.141 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:04.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:04.141 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:04.141 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:04.401 [2024-10-01 15:41:43.621053] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:24:04.401 [2024-10-01 15:41:43.621114] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3182000 ] 00:24:04.401 [2024-10-01 15:41:43.651586] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
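The next negative test passes an empty string as the PSK path, and `keyring_file_add_key` rejects it with "Non-absolute paths are not allowed". The validation it trips can be sketched as (an illustrative reimplementation of the check, not SPDK's actual C code):

```python
import os

def keyring_path_ok(path: str) -> bool:
    # Sketch of the validation behind the "Non-absolute paths are not
    # allowed" error: the file-based keyring only accepts absolute paths,
    # which excludes the empty string passed by this negative test.
    return os.path.isabs(path)
```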
00:24:04.401 [2024-10-01 15:41:43.698432] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.401 [2024-10-01 15:41:43.726592] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:04.997 15:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:04.997 15:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:04.997 15:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:24:05.257 [2024-10-01 15:41:44.555370] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:24:05.257 [2024-10-01 15:41:44.555392] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:05.257 request: 00:24:05.257 { 00:24:05.257 "name": "key0", 00:24:05.257 "path": "", 00:24:05.257 "method": "keyring_file_add_key", 00:24:05.257 "req_id": 1 00:24:05.257 } 00:24:05.257 Got JSON-RPC error response 00:24:05.257 response: 00:24:05.257 { 00:24:05.257 "code": -1, 00:24:05.257 "message": "Operation not permitted" 00:24:05.257 } 00:24:05.257 15:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:05.257 [2024-10-01 15:41:44.707826] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:05.257 [2024-10-01 15:41:44.707850] bdev_nvme.c:6410:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:24:05.517 request: 00:24:05.517 { 00:24:05.517 "name": "TLSTEST", 00:24:05.517 "trtype": "tcp", 00:24:05.517 "traddr": "10.0.0.2", 00:24:05.517 "adrfam": "ipv4", 00:24:05.517 "trsvcid": 
"4420", 00:24:05.517 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:05.517 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:05.517 "prchk_reftag": false, 00:24:05.517 "prchk_guard": false, 00:24:05.517 "hdgst": false, 00:24:05.517 "ddgst": false, 00:24:05.517 "psk": "key0", 00:24:05.517 "allow_unrecognized_csi": false, 00:24:05.517 "method": "bdev_nvme_attach_controller", 00:24:05.517 "req_id": 1 00:24:05.517 } 00:24:05.517 Got JSON-RPC error response 00:24:05.517 response: 00:24:05.517 { 00:24:05.517 "code": -126, 00:24:05.517 "message": "Required key not available" 00:24:05.517 } 00:24:05.517 15:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3182000 00:24:05.517 15:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3182000 ']' 00:24:05.517 15:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3182000 00:24:05.517 15:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:05.517 15:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:05.517 15:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3182000 00:24:05.517 15:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:05.517 15:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:05.517 15:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3182000' 00:24:05.517 killing process with pid 3182000 00:24:05.517 15:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3182000 00:24:05.517 Received shutdown signal, test time was about 10.000000 seconds 00:24:05.517 00:24:05.517 Latency(us) 00:24:05.517 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:24:05.517 =================================================================================================================== 00:24:05.517 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:05.517 15:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3182000 00:24:05.517 15:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:05.517 15:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:05.517 15:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:05.517 15:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:05.517 15:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:05.517 15:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 3176184 00:24:05.518 15:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3176184 ']' 00:24:05.518 15:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3176184 00:24:05.518 15:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:05.518 15:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:05.518 15:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3176184 00:24:05.518 15:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:05.518 15:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:05.518 15:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3176184' 00:24:05.518 killing process with pid 3176184 00:24:05.518 15:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@969 -- # kill 3176184 00:24:05.518 15:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3176184 00:24:05.777 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:24:05.777 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:24:05.778 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:24:05.778 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:24:05.778 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:24:05.778 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=2 00:24:05.778 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:24:05.778 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:05.778 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:24:05.778 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.ULqIJsJpZD 00:24:05.778 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:05.778 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.ULqIJsJpZD 00:24:05.778 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:24:05.778 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:05.778 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:24:05.778 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:05.778 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=3182323 00:24:05.778 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:05.778 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 3182323 00:24:05.778 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3182323 ']' 00:24:05.778 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:05.778 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:05.778 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:05.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:05.778 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:05.778 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:05.778 [2024-10-01 15:41:45.192665] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:24:05.778 [2024-10-01 15:41:45.192737] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:05.778 [2024-10-01 15:41:45.230973] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:24:06.037 [2024-10-01 15:41:45.280270] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.037 [2024-10-01 15:41:45.310071] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:06.037 [2024-10-01 15:41:45.310108] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:06.038 [2024-10-01 15:41:45.310114] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:06.038 [2024-10-01 15:41:45.310119] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:06.038 [2024-10-01 15:41:45.310124] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:06.038 [2024-10-01 15:41:45.310141] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:06.607 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:06.607 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:06.607 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:06.607 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:06.607 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:06.607 15:41:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:06.607 15:41:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.ULqIJsJpZD 00:24:06.607 15:41:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ULqIJsJpZD 00:24:06.607 15:41:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_transport -t tcp -o 00:24:06.867 [2024-10-01 15:41:46.186548] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:06.867 15:41:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:07.127 15:41:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:07.127 [2024-10-01 15:41:46.547436] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:07.127 [2024-10-01 15:41:46.547630] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:07.127 15:41:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:07.386 malloc0 00:24:07.386 15:41:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:07.646 15:41:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ULqIJsJpZD 00:24:07.906 15:41:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:07.906 15:41:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ULqIJsJpZD 00:24:07.906 15:41:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:07.906 15:41:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:07.906 15:41:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:07.906 15:41:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ULqIJsJpZD 00:24:07.906 15:41:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:07.906 15:41:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3182857 00:24:07.906 15:41:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:07.906 15:41:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3182857 /var/tmp/bdevperf.sock 00:24:07.906 15:41:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:07.906 15:41:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3182857 ']' 00:24:07.906 15:41:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:07.906 15:41:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:07.906 15:41:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:07.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
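Every step traced above goes through SPDK's `scripts/rpc.py`, which sends JSON-RPC 2.0 requests over a UNIX socket (`/var/tmp/spdk.sock` for the target, `/var/tmp/bdevperf.sock` for bdevperf). A minimal sketch of that framing follows; the `keyring_file_add_key` and `nvmf_subsystem_add_host` field names (`name`, `path`, `nqn`, `host`, `psk`) appear verbatim in the error dumps later in this log, while the other parameter spellings here are illustrative assumptions, not the exact RPC schema:

```python
import json

def rpc_request(req_id, method, **params):
    # JSON-RPC 2.0 envelope of the kind scripts/rpc.py writes to the socket
    return {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}

# The TLS target setup sequence traced above, expressed as RPC calls
calls = [
    rpc_request(1, "nvmf_create_transport", trtype="tcp"),
    rpc_request(2, "nvmf_create_subsystem", nqn="nqn.2016-06.io.spdk:cnode1"),
    rpc_request(3, "keyring_file_add_key", name="key0",
                path="/tmp/tmp.ULqIJsJpZD"),
    rpc_request(4, "nvmf_subsystem_add_host", nqn="nqn.2016-06.io.spdk:cnode1",
                host="nqn.2016-06.io.spdk:host1", psk="key0"),
]
wire = "\n".join(json.dumps(c) for c in calls)
```

The responses dumped later in this log (`request:` / `Got JSON-RPC error response`) are the server half of the same exchange.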
00:24:07.906 15:41:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:07.906 15:41:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:08.166 [2024-10-01 15:41:47.369552] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:24:08.166 [2024-10-01 15:41:47.369607] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3182857 ] 00:24:08.166 [2024-10-01 15:41:47.400207] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:08.166 [2024-10-01 15:41:47.448148] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:08.166 [2024-10-01 15:41:47.476233] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:08.736 15:41:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:08.736 15:41:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:08.736 15:41:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ULqIJsJpZD 00:24:08.997 15:41:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:09.257 [2024-10-01 15:41:48.505670] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:09.257 TLSTESTn1 00:24:09.257 15:41:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:09.257 Running I/O for 10 seconds... 00:24:19.625 5035.00 IOPS, 19.67 MiB/s 5542.00 IOPS, 21.65 MiB/s 5777.33 IOPS, 22.57 MiB/s 5680.25 IOPS, 22.19 MiB/s 5678.60 IOPS, 22.18 MiB/s 5763.33 IOPS, 22.51 MiB/s 5789.57 IOPS, 22.62 MiB/s 5874.50 IOPS, 22.95 MiB/s 5894.00 IOPS, 23.02 MiB/s 5931.40 IOPS, 23.17 MiB/s 00:24:19.625 Latency(us) 00:24:19.625 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:19.625 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:19.625 Verification LBA range: start 0x0 length 0x2000 00:24:19.625 TLSTESTn1 : 10.01 5937.07 23.19 0.00 0.00 21529.89 4942.51 22173.01 00:24:19.625 =================================================================================================================== 00:24:19.625 Total : 5937.07 23.19 0.00 0.00 21529.89 4942.51 22173.01 00:24:19.625 { 00:24:19.625 "results": [ 00:24:19.625 { 00:24:19.625 "job": "TLSTESTn1", 00:24:19.625 "core_mask": "0x4", 00:24:19.625 "workload": "verify", 00:24:19.625 "status": "finished", 00:24:19.625 "verify_range": { 00:24:19.625 "start": 0, 00:24:19.625 "length": 8192 00:24:19.625 }, 00:24:19.625 "queue_depth": 128, 00:24:19.625 "io_size": 4096, 00:24:19.625 "runtime": 10.011844, 00:24:19.625 "iops": 5937.068136499131, 00:24:19.625 "mibps": 23.19167240819973, 00:24:19.625 "io_failed": 0, 00:24:19.625 "io_timeout": 0, 00:24:19.625 "avg_latency_us": 21529.890363890245, 00:24:19.625 "min_latency_us": 4942.506666666667, 00:24:19.625 "max_latency_us": 22173.013333333332 00:24:19.625 } 00:24:19.625 ], 00:24:19.625 "core_count": 1 00:24:19.625 } 00:24:19.625 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:19.625 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@46 -- # killprocess 3182857 00:24:19.625 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3182857 ']' 00:24:19.625 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3182857 00:24:19.625 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:19.625 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:19.625 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3182857 00:24:19.625 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:19.625 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:19.625 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3182857' 00:24:19.625 killing process with pid 3182857 00:24:19.625 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3182857 00:24:19.625 Received shutdown signal, test time was about 10.000000 seconds 00:24:19.625 00:24:19.625 Latency(us) 00:24:19.625 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:19.625 =================================================================================================================== 00:24:19.625 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:19.625 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3182857 00:24:19.625 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.ULqIJsJpZD 00:24:19.625 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ULqIJsJpZD 00:24:19.625 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@650 -- # local es=0 00:24:19.625 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ULqIJsJpZD 00:24:19.625 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:24:19.625 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:19.625 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:24:19.625 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:19.625 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ULqIJsJpZD 00:24:19.625 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:19.625 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:19.625 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:19.625 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ULqIJsJpZD 00:24:19.625 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:19.625 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3185033 00:24:19.625 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:19.625 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3185033 /var/tmp/bdevperf.sock 00:24:19.625 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:19.625 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3185033 ']' 00:24:19.625 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:19.625 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:19.625 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:19.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:19.625 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:19.625 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:19.625 [2024-10-01 15:41:58.963555] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:24:19.625 [2024-10-01 15:41:58.963613] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3185033 ] 00:24:19.625 [2024-10-01 15:41:58.993938] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
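The summary block for the successful TLSTESTn1 run above reports 5937.07 IOPS at a 4096-byte I/O size over a 10.011844 s runtime. The derived columns are plain arithmetic, and queue depth 128 together with the 21529.89 us average latency reproduces the IOPS figure to within about 0.2% via Little's law (the small gap is startup/teardown overhead inside the measured runtime):

```python
# Figures taken directly from the JSON summary in the log above
iops = 5937.068136499131
io_size = 4096                    # bdevperf -o 4096
runtime_s = 10.011844
avg_latency_us = 21529.890363890245
queue_depth = 128                 # bdevperf -q 128

mibps = iops * io_size / 2**20          # matches the reported "mibps" column
total_ios = round(iops * runtime_s)     # completed I/Os over the run
little_iops = queue_depth / (avg_latency_us * 1e-6)  # Little's law estimate
```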
00:24:19.625 [2024-10-01 15:41:59.041654] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:19.625 [2024-10-01 15:41:59.067628] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:19.884 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:19.884 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:19.884 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ULqIJsJpZD 00:24:19.885 [2024-10-01 15:41:59.299086] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ULqIJsJpZD': 0100666 00:24:19.885 [2024-10-01 15:41:59.299115] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:19.885 request: 00:24:19.885 { 00:24:19.885 "name": "key0", 00:24:19.885 "path": "/tmp/tmp.ULqIJsJpZD", 00:24:19.885 "method": "keyring_file_add_key", 00:24:19.885 "req_id": 1 00:24:19.885 } 00:24:19.885 Got JSON-RPC error response 00:24:19.885 response: 00:24:19.885 { 00:24:19.885 "code": -1, 00:24:19.885 "message": "Operation not permitted" 00:24:19.885 } 00:24:19.885 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:20.146 [2024-10-01 15:41:59.483617] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:20.146 [2024-10-01 15:41:59.483638] bdev_nvme.c:6410:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:24:20.146 request: 00:24:20.146 { 00:24:20.146 "name": "TLSTEST", 00:24:20.146 "trtype": "tcp", 00:24:20.146 "traddr": 
"10.0.0.2", 00:24:20.146 "adrfam": "ipv4", 00:24:20.146 "trsvcid": "4420", 00:24:20.146 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:20.146 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:20.146 "prchk_reftag": false, 00:24:20.146 "prchk_guard": false, 00:24:20.146 "hdgst": false, 00:24:20.146 "ddgst": false, 00:24:20.146 "psk": "key0", 00:24:20.146 "allow_unrecognized_csi": false, 00:24:20.146 "method": "bdev_nvme_attach_controller", 00:24:20.146 "req_id": 1 00:24:20.146 } 00:24:20.146 Got JSON-RPC error response 00:24:20.146 response: 00:24:20.146 { 00:24:20.146 "code": -126, 00:24:20.146 "message": "Required key not available" 00:24:20.146 } 00:24:20.146 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3185033 00:24:20.146 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3185033 ']' 00:24:20.146 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3185033 00:24:20.146 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:20.146 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:20.146 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3185033 00:24:20.146 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:20.146 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:20.146 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3185033' 00:24:20.146 killing process with pid 3185033 00:24:20.146 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3185033 00:24:20.147 Received shutdown signal, test time was about 10.000000 seconds 00:24:20.147 00:24:20.147 Latency(us) 00:24:20.147 Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:20.147 =================================================================================================================== 00:24:20.147 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:20.147 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3185033 00:24:20.409 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:20.409 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:20.409 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:20.409 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:20.409 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:20.409 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 3182323 00:24:20.409 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3182323 ']' 00:24:20.409 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3182323 00:24:20.409 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:20.409 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:20.409 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3182323 00:24:20.409 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:20.409 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:20.409 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3182323' 00:24:20.409 killing process with pid 
3182323 00:24:20.409 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3182323 00:24:20.409 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3182323 00:24:20.409 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:24:20.409 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:20.409 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:20.409 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:20.670 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=3185375 00:24:20.670 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 3185375 00:24:20.670 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:20.670 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3185375 ']' 00:24:20.670 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:20.670 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:20.670 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:20.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
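The failures above stem from the earlier `chmod 0666 /tmp/tmp.ULqIJsJpZD`: `keyring_file_add_key` rejects a PSK file accessible to group or others, and the log prints the offending mode as `0100666`, i.e. a regular file with 0666 permission bits. A throwaway-file sketch of that permission state (GNU `stat` assumed, as on the Linux CI host):

```shell
key=$(mktemp)          # stand-in for /tmp/tmp.ULqIJsJpZD
chmod 0666 "$key"      # world-readable/writable: the keyring refuses this
stat -c %a "$key"      # prints 666
chmod 0600 "$key"      # owner-only: the mode the keyring accepts
stat -c %a "$key"      # prints 600
rm -f "$key"
```

The suite restores `chmod 0600` later in the log before re-running the positive path.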
00:24:20.670 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:20.670 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:20.670 [2024-10-01 15:41:59.932774] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:24:20.670 [2024-10-01 15:41:59.932834] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:20.670 [2024-10-01 15:41:59.970265] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:20.670 [2024-10-01 15:42:00.016401] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.670 [2024-10-01 15:42:00.047937] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:20.670 [2024-10-01 15:42:00.047972] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:20.670 [2024-10-01 15:42:00.047978] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:20.670 [2024-10-01 15:42:00.047983] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:20.670 [2024-10-01 15:42:00.047990] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
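The JSON-RPC error codes in the dumps above and below line up with negated Linux errno values: `-1` is `EPERM` ("Operation not permitted") from the keyring permission check, and `-126` matches `ENOKEY` ("Required key not available") when `bdev_nvme_attach_controller` cannot load the PSK. The `-32603` seen later is the JSON-RPC 2.0 "Internal error" code, not an errno. A quick check of the errno half of that mapping (the `-1`/`-126` correspondence is inferred from the log's error messages, not from SPDK documentation):

```python
import errno
import os

# code -1  <-> EPERM  ("Operation not permitted")
# code -126 <-> ENOKEY ("Required key not available"), a Linux-specific errno
assert errno.EPERM == 1
assert os.strerror(errno.EPERM) == "Operation not permitted"
assert getattr(errno, "ENOKEY", 126) == 126  # ENOKEY may be absent off-Linux
```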
00:24:20.670 [2024-10-01 15:42:00.048014] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:21.613 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:21.613 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:21.613 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:21.613 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:21.613 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:21.613 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:21.613 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.ULqIJsJpZD 00:24:21.613 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:24:21.613 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.ULqIJsJpZD 00:24:21.613 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:24:21.613 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:21.613 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:24:21.613 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:21.613 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.ULqIJsJpZD 00:24:21.613 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ULqIJsJpZD 00:24:21.613 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:21.613 [2024-10-01 15:42:00.903407] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:21.613 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:21.874 15:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:21.874 [2024-10-01 15:42:01.220185] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:21.874 [2024-10-01 15:42:01.220381] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:21.874 15:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:22.135 malloc0 00:24:22.135 15:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:22.135 15:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ULqIJsJpZD 00:24:22.396 [2024-10-01 15:42:01.733008] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ULqIJsJpZD': 0100666 00:24:22.396 [2024-10-01 15:42:01.733032] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:22.396 request: 00:24:22.396 { 00:24:22.396 "name": "key0", 00:24:22.396 "path": "/tmp/tmp.ULqIJsJpZD", 00:24:22.396 "method": "keyring_file_add_key", 00:24:22.396 "req_id": 1 
00:24:22.396 } 00:24:22.396 Got JSON-RPC error response 00:24:22.396 response: 00:24:22.396 { 00:24:22.396 "code": -1, 00:24:22.396 "message": "Operation not permitted" 00:24:22.396 } 00:24:22.396 15:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:22.657 [2024-10-01 15:42:01.897432] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:24:22.657 [2024-10-01 15:42:01.897461] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:24:22.657 request: 00:24:22.657 { 00:24:22.657 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:22.657 "host": "nqn.2016-06.io.spdk:host1", 00:24:22.657 "psk": "key0", 00:24:22.657 "method": "nvmf_subsystem_add_host", 00:24:22.657 "req_id": 1 00:24:22.657 } 00:24:22.657 Got JSON-RPC error response 00:24:22.657 response: 00:24:22.657 { 00:24:22.657 "code": -32603, 00:24:22.657 "message": "Internal error" 00:24:22.657 } 00:24:22.657 15:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:22.657 15:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:22.657 15:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:22.657 15:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:22.657 15:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 3185375 00:24:22.657 15:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3185375 ']' 00:24:22.657 15:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3185375 00:24:22.657 15:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:22.657 15:42:01 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:22.657 15:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3185375 00:24:22.657 15:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:22.657 15:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:22.657 15:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3185375' 00:24:22.657 killing process with pid 3185375 00:24:22.657 15:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3185375 00:24:22.657 15:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3185375 00:24:22.657 15:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.ULqIJsJpZD 00:24:22.657 15:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:24:22.657 15:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:22.657 15:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:22.657 15:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:22.657 15:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=3185856 00:24:22.657 15:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 3185856 00:24:22.657 15:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:22.657 15:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3185856 ']' 00:24:22.657 15:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:22.657 15:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:22.657 15:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:22.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:22.657 15:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:22.657 15:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:22.918 [2024-10-01 15:42:02.165541] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:24:22.918 [2024-10-01 15:42:02.165603] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:22.918 [2024-10-01 15:42:02.202682] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:22.918 [2024-10-01 15:42:02.250831] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:22.918 [2024-10-01 15:42:02.279571] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:22.918 [2024-10-01 15:42:02.279604] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:22.918 [2024-10-01 15:42:02.279610] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:22.918 [2024-10-01 15:42:02.279615] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:24:22.918 [2024-10-01 15:42:02.279619] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:22.918 [2024-10-01 15:42:02.279637] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:23.491 15:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:23.491 15:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:23.491 15:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:23.491 15:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:23.491 15:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:23.753 15:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:23.753 15:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.ULqIJsJpZD 00:24:23.753 15:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ULqIJsJpZD 00:24:23.753 15:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:23.753 [2024-10-01 15:42:03.134090] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:23.753 15:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:24.014 15:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:24.014 [2024-10-01 15:42:03.450860] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:24:24.014 [2024-10-01 15:42:03.451066] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:24.014 15:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:24.275 malloc0 00:24:24.275 15:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:24.536 15:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ULqIJsJpZD 00:24:24.536 15:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:24.797 15:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=3186217 00:24:24.797 15:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:24.798 15:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:24.798 15:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 3186217 /var/tmp/bdevperf.sock 00:24:24.798 15:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3186217 ']' 00:24:24.798 15:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:24.798 15:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:24.798 
15:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:24.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:24.798 15:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:24.798 15:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:24.798 [2024-10-01 15:42:04.192238] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:24:24.798 [2024-10-01 15:42:04.192292] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3186217 ] 00:24:24.798 [2024-10-01 15:42:04.222327] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:24:25.059 [2024-10-01 15:42:04.268915] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.059 [2024-10-01 15:42:04.297047] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:25.059 15:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:25.059 15:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:25.059 15:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ULqIJsJpZD 00:24:25.322 15:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:25.322 [2024-10-01 15:42:04.692854] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:25.322 TLSTESTn1 00:24:25.584 15:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:24:25.846 15:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:24:25.846 "subsystems": [ 00:24:25.846 { 00:24:25.846 "subsystem": "keyring", 00:24:25.846 "config": [ 00:24:25.846 { 00:24:25.846 "method": "keyring_file_add_key", 00:24:25.846 "params": { 00:24:25.846 "name": "key0", 00:24:25.846 "path": "/tmp/tmp.ULqIJsJpZD" 00:24:25.846 } 00:24:25.846 } 00:24:25.846 ] 00:24:25.846 }, 00:24:25.846 { 00:24:25.846 "subsystem": "iobuf", 00:24:25.846 "config": [ 00:24:25.846 { 00:24:25.846 "method": "iobuf_set_options", 00:24:25.846 "params": { 00:24:25.846 "small_pool_count": 8192, 00:24:25.846 "large_pool_count": 1024, 00:24:25.846 "small_bufsize": 8192, 00:24:25.846 
"large_bufsize": 135168 00:24:25.846 } 00:24:25.846 } 00:24:25.846 ] 00:24:25.846 }, 00:24:25.846 { 00:24:25.846 "subsystem": "sock", 00:24:25.846 "config": [ 00:24:25.846 { 00:24:25.846 "method": "sock_set_default_impl", 00:24:25.846 "params": { 00:24:25.846 "impl_name": "posix" 00:24:25.846 } 00:24:25.846 }, 00:24:25.846 { 00:24:25.846 "method": "sock_impl_set_options", 00:24:25.846 "params": { 00:24:25.846 "impl_name": "ssl", 00:24:25.846 "recv_buf_size": 4096, 00:24:25.846 "send_buf_size": 4096, 00:24:25.846 "enable_recv_pipe": true, 00:24:25.846 "enable_quickack": false, 00:24:25.846 "enable_placement_id": 0, 00:24:25.846 "enable_zerocopy_send_server": true, 00:24:25.846 "enable_zerocopy_send_client": false, 00:24:25.846 "zerocopy_threshold": 0, 00:24:25.846 "tls_version": 0, 00:24:25.846 "enable_ktls": false 00:24:25.846 } 00:24:25.846 }, 00:24:25.846 { 00:24:25.846 "method": "sock_impl_set_options", 00:24:25.846 "params": { 00:24:25.846 "impl_name": "posix", 00:24:25.846 "recv_buf_size": 2097152, 00:24:25.846 "send_buf_size": 2097152, 00:24:25.846 "enable_recv_pipe": true, 00:24:25.846 "enable_quickack": false, 00:24:25.846 "enable_placement_id": 0, 00:24:25.846 "enable_zerocopy_send_server": true, 00:24:25.846 "enable_zerocopy_send_client": false, 00:24:25.846 "zerocopy_threshold": 0, 00:24:25.846 "tls_version": 0, 00:24:25.846 "enable_ktls": false 00:24:25.846 } 00:24:25.846 } 00:24:25.846 ] 00:24:25.846 }, 00:24:25.846 { 00:24:25.846 "subsystem": "vmd", 00:24:25.846 "config": [] 00:24:25.846 }, 00:24:25.846 { 00:24:25.846 "subsystem": "accel", 00:24:25.846 "config": [ 00:24:25.846 { 00:24:25.846 "method": "accel_set_options", 00:24:25.846 "params": { 00:24:25.846 "small_cache_size": 128, 00:24:25.846 "large_cache_size": 16, 00:24:25.846 "task_count": 2048, 00:24:25.846 "sequence_count": 2048, 00:24:25.846 "buf_count": 2048 00:24:25.846 } 00:24:25.846 } 00:24:25.846 ] 00:24:25.846 }, 00:24:25.846 { 00:24:25.846 "subsystem": "bdev", 00:24:25.846 "config": [ 
00:24:25.846 { 00:24:25.846 "method": "bdev_set_options", 00:24:25.846 "params": { 00:24:25.846 "bdev_io_pool_size": 65535, 00:24:25.846 "bdev_io_cache_size": 256, 00:24:25.846 "bdev_auto_examine": true, 00:24:25.846 "iobuf_small_cache_size": 128, 00:24:25.846 "iobuf_large_cache_size": 16 00:24:25.846 } 00:24:25.846 }, 00:24:25.846 { 00:24:25.846 "method": "bdev_raid_set_options", 00:24:25.846 "params": { 00:24:25.846 "process_window_size_kb": 1024, 00:24:25.846 "process_max_bandwidth_mb_sec": 0 00:24:25.846 } 00:24:25.846 }, 00:24:25.846 { 00:24:25.846 "method": "bdev_iscsi_set_options", 00:24:25.846 "params": { 00:24:25.846 "timeout_sec": 30 00:24:25.846 } 00:24:25.846 }, 00:24:25.846 { 00:24:25.846 "method": "bdev_nvme_set_options", 00:24:25.846 "params": { 00:24:25.846 "action_on_timeout": "none", 00:24:25.846 "timeout_us": 0, 00:24:25.846 "timeout_admin_us": 0, 00:24:25.846 "keep_alive_timeout_ms": 10000, 00:24:25.846 "arbitration_burst": 0, 00:24:25.846 "low_priority_weight": 0, 00:24:25.846 "medium_priority_weight": 0, 00:24:25.846 "high_priority_weight": 0, 00:24:25.846 "nvme_adminq_poll_period_us": 10000, 00:24:25.846 "nvme_ioq_poll_period_us": 0, 00:24:25.846 "io_queue_requests": 0, 00:24:25.846 "delay_cmd_submit": true, 00:24:25.846 "transport_retry_count": 4, 00:24:25.846 "bdev_retry_count": 3, 00:24:25.846 "transport_ack_timeout": 0, 00:24:25.846 "ctrlr_loss_timeout_sec": 0, 00:24:25.846 "reconnect_delay_sec": 0, 00:24:25.846 "fast_io_fail_timeout_sec": 0, 00:24:25.846 "disable_auto_failback": false, 00:24:25.846 "generate_uuids": false, 00:24:25.846 "transport_tos": 0, 00:24:25.846 "nvme_error_stat": false, 00:24:25.846 "rdma_srq_size": 0, 00:24:25.846 "io_path_stat": false, 00:24:25.846 "allow_accel_sequence": false, 00:24:25.846 "rdma_max_cq_size": 0, 00:24:25.846 "rdma_cm_event_timeout_ms": 0, 00:24:25.846 "dhchap_digests": [ 00:24:25.846 "sha256", 00:24:25.846 "sha384", 00:24:25.846 "sha512" 00:24:25.846 ], 00:24:25.846 "dhchap_dhgroups": [ 
00:24:25.846 "null", 00:24:25.846 "ffdhe2048", 00:24:25.846 "ffdhe3072", 00:24:25.846 "ffdhe4096", 00:24:25.846 "ffdhe6144", 00:24:25.846 "ffdhe8192" 00:24:25.846 ] 00:24:25.846 } 00:24:25.846 }, 00:24:25.846 { 00:24:25.846 "method": "bdev_nvme_set_hotplug", 00:24:25.846 "params": { 00:24:25.846 "period_us": 100000, 00:24:25.846 "enable": false 00:24:25.846 } 00:24:25.846 }, 00:24:25.846 { 00:24:25.846 "method": "bdev_malloc_create", 00:24:25.846 "params": { 00:24:25.846 "name": "malloc0", 00:24:25.846 "num_blocks": 8192, 00:24:25.847 "block_size": 4096, 00:24:25.847 "physical_block_size": 4096, 00:24:25.847 "uuid": "77b64a46-fe16-4c23-a23e-96f2f480b19a", 00:24:25.847 "optimal_io_boundary": 0, 00:24:25.847 "md_size": 0, 00:24:25.847 "dif_type": 0, 00:24:25.847 "dif_is_head_of_md": false, 00:24:25.847 "dif_pi_format": 0 00:24:25.847 } 00:24:25.847 }, 00:24:25.847 { 00:24:25.847 "method": "bdev_wait_for_examine" 00:24:25.847 } 00:24:25.847 ] 00:24:25.847 }, 00:24:25.847 { 00:24:25.847 "subsystem": "nbd", 00:24:25.847 "config": [] 00:24:25.847 }, 00:24:25.847 { 00:24:25.847 "subsystem": "scheduler", 00:24:25.847 "config": [ 00:24:25.847 { 00:24:25.847 "method": "framework_set_scheduler", 00:24:25.847 "params": { 00:24:25.847 "name": "static" 00:24:25.847 } 00:24:25.847 } 00:24:25.847 ] 00:24:25.847 }, 00:24:25.847 { 00:24:25.847 "subsystem": "nvmf", 00:24:25.847 "config": [ 00:24:25.847 { 00:24:25.847 "method": "nvmf_set_config", 00:24:25.847 "params": { 00:24:25.847 "discovery_filter": "match_any", 00:24:25.847 "admin_cmd_passthru": { 00:24:25.847 "identify_ctrlr": false 00:24:25.847 }, 00:24:25.847 "dhchap_digests": [ 00:24:25.847 "sha256", 00:24:25.847 "sha384", 00:24:25.847 "sha512" 00:24:25.847 ], 00:24:25.847 "dhchap_dhgroups": [ 00:24:25.847 "null", 00:24:25.847 "ffdhe2048", 00:24:25.847 "ffdhe3072", 00:24:25.847 "ffdhe4096", 00:24:25.847 "ffdhe6144", 00:24:25.847 "ffdhe8192" 00:24:25.847 ] 00:24:25.847 } 00:24:25.847 }, 00:24:25.847 { 00:24:25.847 "method": 
"nvmf_set_max_subsystems", 00:24:25.847 "params": { 00:24:25.847 "max_subsystems": 1024 00:24:25.847 } 00:24:25.847 }, 00:24:25.847 { 00:24:25.847 "method": "nvmf_set_crdt", 00:24:25.847 "params": { 00:24:25.847 "crdt1": 0, 00:24:25.847 "crdt2": 0, 00:24:25.847 "crdt3": 0 00:24:25.847 } 00:24:25.847 }, 00:24:25.847 { 00:24:25.847 "method": "nvmf_create_transport", 00:24:25.847 "params": { 00:24:25.847 "trtype": "TCP", 00:24:25.847 "max_queue_depth": 128, 00:24:25.847 "max_io_qpairs_per_ctrlr": 127, 00:24:25.847 "in_capsule_data_size": 4096, 00:24:25.847 "max_io_size": 131072, 00:24:25.847 "io_unit_size": 131072, 00:24:25.847 "max_aq_depth": 128, 00:24:25.847 "num_shared_buffers": 511, 00:24:25.847 "buf_cache_size": 4294967295, 00:24:25.847 "dif_insert_or_strip": false, 00:24:25.847 "zcopy": false, 00:24:25.847 "c2h_success": false, 00:24:25.847 "sock_priority": 0, 00:24:25.847 "abort_timeout_sec": 1, 00:24:25.847 "ack_timeout": 0, 00:24:25.847 "data_wr_pool_size": 0 00:24:25.847 } 00:24:25.847 }, 00:24:25.847 { 00:24:25.847 "method": "nvmf_create_subsystem", 00:24:25.847 "params": { 00:24:25.847 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:25.847 "allow_any_host": false, 00:24:25.847 "serial_number": "SPDK00000000000001", 00:24:25.847 "model_number": "SPDK bdev Controller", 00:24:25.847 "max_namespaces": 10, 00:24:25.847 "min_cntlid": 1, 00:24:25.847 "max_cntlid": 65519, 00:24:25.847 "ana_reporting": false 00:24:25.847 } 00:24:25.847 }, 00:24:25.847 { 00:24:25.847 "method": "nvmf_subsystem_add_host", 00:24:25.847 "params": { 00:24:25.847 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:25.847 "host": "nqn.2016-06.io.spdk:host1", 00:24:25.847 "psk": "key0" 00:24:25.847 } 00:24:25.847 }, 00:24:25.847 { 00:24:25.847 "method": "nvmf_subsystem_add_ns", 00:24:25.847 "params": { 00:24:25.847 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:25.847 "namespace": { 00:24:25.847 "nsid": 1, 00:24:25.847 "bdev_name": "malloc0", 00:24:25.847 "nguid": "77B64A46FE164C23A23E96F2F480B19A", 
00:24:25.847 "uuid": "77b64a46-fe16-4c23-a23e-96f2f480b19a", 00:24:25.847 "no_auto_visible": false 00:24:25.847 } 00:24:25.847 } 00:24:25.847 }, 00:24:25.847 { 00:24:25.847 "method": "nvmf_subsystem_add_listener", 00:24:25.847 "params": { 00:24:25.847 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:25.847 "listen_address": { 00:24:25.847 "trtype": "TCP", 00:24:25.847 "adrfam": "IPv4", 00:24:25.847 "traddr": "10.0.0.2", 00:24:25.847 "trsvcid": "4420" 00:24:25.847 }, 00:24:25.847 "secure_channel": true 00:24:25.847 } 00:24:25.847 } 00:24:25.847 ] 00:24:25.847 } 00:24:25.847 ] 00:24:25.847 }' 00:24:25.847 15:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:25.847 15:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:24:25.847 "subsystems": [ 00:24:25.847 { 00:24:25.847 "subsystem": "keyring", 00:24:25.847 "config": [ 00:24:25.847 { 00:24:25.847 "method": "keyring_file_add_key", 00:24:25.847 "params": { 00:24:25.847 "name": "key0", 00:24:25.847 "path": "/tmp/tmp.ULqIJsJpZD" 00:24:25.847 } 00:24:25.847 } 00:24:25.847 ] 00:24:25.847 }, 00:24:25.847 { 00:24:25.847 "subsystem": "iobuf", 00:24:25.847 "config": [ 00:24:25.847 { 00:24:25.847 "method": "iobuf_set_options", 00:24:25.847 "params": { 00:24:25.847 "small_pool_count": 8192, 00:24:25.847 "large_pool_count": 1024, 00:24:25.847 "small_bufsize": 8192, 00:24:25.847 "large_bufsize": 135168 00:24:25.847 } 00:24:25.847 } 00:24:25.847 ] 00:24:25.847 }, 00:24:25.847 { 00:24:25.847 "subsystem": "sock", 00:24:25.847 "config": [ 00:24:25.847 { 00:24:25.847 "method": "sock_set_default_impl", 00:24:25.847 "params": { 00:24:25.847 "impl_name": "posix" 00:24:25.847 } 00:24:25.847 }, 00:24:25.847 { 00:24:25.847 "method": "sock_impl_set_options", 00:24:25.847 "params": { 00:24:25.847 "impl_name": "ssl", 00:24:25.847 "recv_buf_size": 4096, 00:24:25.847 "send_buf_size": 4096, 
00:24:25.847 "enable_recv_pipe": true, 00:24:25.847 "enable_quickack": false, 00:24:25.847 "enable_placement_id": 0, 00:24:25.847 "enable_zerocopy_send_server": true, 00:24:25.847 "enable_zerocopy_send_client": false, 00:24:25.847 "zerocopy_threshold": 0, 00:24:25.847 "tls_version": 0, 00:24:25.847 "enable_ktls": false 00:24:25.847 } 00:24:25.847 }, 00:24:25.847 { 00:24:25.847 "method": "sock_impl_set_options", 00:24:25.847 "params": { 00:24:25.847 "impl_name": "posix", 00:24:25.847 "recv_buf_size": 2097152, 00:24:25.847 "send_buf_size": 2097152, 00:24:25.847 "enable_recv_pipe": true, 00:24:25.847 "enable_quickack": false, 00:24:25.847 "enable_placement_id": 0, 00:24:25.847 "enable_zerocopy_send_server": true, 00:24:25.847 "enable_zerocopy_send_client": false, 00:24:25.847 "zerocopy_threshold": 0, 00:24:25.847 "tls_version": 0, 00:24:25.847 "enable_ktls": false 00:24:25.847 } 00:24:25.847 } 00:24:25.847 ] 00:24:25.847 }, 00:24:25.847 { 00:24:25.847 "subsystem": "vmd", 00:24:25.847 "config": [] 00:24:25.847 }, 00:24:25.847 { 00:24:25.847 "subsystem": "accel", 00:24:25.847 "config": [ 00:24:25.847 { 00:24:25.847 "method": "accel_set_options", 00:24:25.847 "params": { 00:24:25.847 "small_cache_size": 128, 00:24:25.847 "large_cache_size": 16, 00:24:25.847 "task_count": 2048, 00:24:25.847 "sequence_count": 2048, 00:24:25.847 "buf_count": 2048 00:24:25.847 } 00:24:25.847 } 00:24:25.847 ] 00:24:25.847 }, 00:24:25.847 { 00:24:25.847 "subsystem": "bdev", 00:24:25.847 "config": [ 00:24:25.847 { 00:24:25.847 "method": "bdev_set_options", 00:24:25.848 "params": { 00:24:25.848 "bdev_io_pool_size": 65535, 00:24:25.848 "bdev_io_cache_size": 256, 00:24:25.848 "bdev_auto_examine": true, 00:24:25.848 "iobuf_small_cache_size": 128, 00:24:25.848 "iobuf_large_cache_size": 16 00:24:25.848 } 00:24:25.848 }, 00:24:25.848 { 00:24:25.848 "method": "bdev_raid_set_options", 00:24:25.848 "params": { 00:24:25.848 "process_window_size_kb": 1024, 00:24:25.848 "process_max_bandwidth_mb_sec": 0 
00:24:25.848 } 00:24:25.848 }, 00:24:25.848 { 00:24:25.848 "method": "bdev_iscsi_set_options", 00:24:25.848 "params": { 00:24:25.848 "timeout_sec": 30 00:24:25.848 } 00:24:25.848 }, 00:24:25.848 { 00:24:25.848 "method": "bdev_nvme_set_options", 00:24:25.848 "params": { 00:24:25.848 "action_on_timeout": "none", 00:24:25.848 "timeout_us": 0, 00:24:25.848 "timeout_admin_us": 0, 00:24:25.848 "keep_alive_timeout_ms": 10000, 00:24:25.848 "arbitration_burst": 0, 00:24:25.848 "low_priority_weight": 0, 00:24:25.848 "medium_priority_weight": 0, 00:24:25.848 "high_priority_weight": 0, 00:24:25.848 "nvme_adminq_poll_period_us": 10000, 00:24:25.848 "nvme_ioq_poll_period_us": 0, 00:24:25.848 "io_queue_requests": 512, 00:24:25.848 "delay_cmd_submit": true, 00:24:25.848 "transport_retry_count": 4, 00:24:25.848 "bdev_retry_count": 3, 00:24:25.848 "transport_ack_timeout": 0, 00:24:25.848 "ctrlr_loss_timeout_sec": 0, 00:24:25.848 "reconnect_delay_sec": 0, 00:24:25.848 "fast_io_fail_timeout_sec": 0, 00:24:25.848 "disable_auto_failback": false, 00:24:25.848 "generate_uuids": false, 00:24:25.848 "transport_tos": 0, 00:24:25.848 "nvme_error_stat": false, 00:24:25.848 "rdma_srq_size": 0, 00:24:25.848 "io_path_stat": false, 00:24:25.848 "allow_accel_sequence": false, 00:24:25.848 "rdma_max_cq_size": 0, 00:24:25.848 "rdma_cm_event_timeout_ms": 0, 00:24:25.848 "dhchap_digests": [ 00:24:25.848 "sha256", 00:24:25.848 "sha384", 00:24:25.848 "sha512" 00:24:25.848 ], 00:24:25.848 "dhchap_dhgroups": [ 00:24:25.848 "null", 00:24:25.848 "ffdhe2048", 00:24:25.848 "ffdhe3072", 00:24:25.848 "ffdhe4096", 00:24:25.848 "ffdhe6144", 00:24:25.848 "ffdhe8192" 00:24:25.848 ] 00:24:25.848 } 00:24:25.848 }, 00:24:25.848 { 00:24:25.848 "method": "bdev_nvme_attach_controller", 00:24:25.848 "params": { 00:24:25.848 "name": "TLSTEST", 00:24:25.848 "trtype": "TCP", 00:24:25.848 "adrfam": "IPv4", 00:24:25.848 "traddr": "10.0.0.2", 00:24:25.848 "trsvcid": "4420", 00:24:25.848 "subnqn": "nqn.2016-06.io.spdk:cnode1", 
00:24:25.848 "prchk_reftag": false, 00:24:25.848 "prchk_guard": false, 00:24:25.848 "ctrlr_loss_timeout_sec": 0, 00:24:25.848 "reconnect_delay_sec": 0, 00:24:25.848 "fast_io_fail_timeout_sec": 0, 00:24:25.848 "psk": "key0", 00:24:25.848 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:25.848 "hdgst": false, 00:24:25.848 "ddgst": false 00:24:25.848 } 00:24:25.848 }, 00:24:25.848 { 00:24:25.848 "method": "bdev_nvme_set_hotplug", 00:24:25.848 "params": { 00:24:25.848 "period_us": 100000, 00:24:25.848 "enable": false 00:24:25.848 } 00:24:25.848 }, 00:24:25.848 { 00:24:25.848 "method": "bdev_wait_for_examine" 00:24:25.848 } 00:24:25.848 ] 00:24:25.848 }, 00:24:25.848 { 00:24:25.848 "subsystem": "nbd", 00:24:25.848 "config": [] 00:24:25.848 } 00:24:25.848 ] 00:24:25.848 }' 00:24:25.848 15:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 3186217 00:24:25.848 15:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3186217 ']' 00:24:25.848 15:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3186217 00:24:25.848 15:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:25.848 15:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:25.848 15:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3186217 00:24:26.109 15:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:26.109 15:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:26.109 15:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3186217' 00:24:26.109 killing process with pid 3186217 00:24:26.109 15:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3186217 00:24:26.109 
Received shutdown signal, test time was about 10.000000 seconds 00:24:26.109 00:24:26.109 Latency(us) 00:24:26.109 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:26.109 =================================================================================================================== 00:24:26.109 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:26.109 15:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3186217 00:24:26.109 15:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 3185856 00:24:26.109 15:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3185856 ']' 00:24:26.109 15:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3185856 00:24:26.109 15:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:26.109 15:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:26.109 15:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3185856 00:24:26.109 15:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:26.109 15:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:26.109 15:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3185856' 00:24:26.109 killing process with pid 3185856 00:24:26.109 15:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3185856 00:24:26.109 15:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3185856 00:24:26.376 15:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:24:26.376 15:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:26.376 15:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:26.376 15:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:26.376 15:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:24:26.376 "subsystems": [ 00:24:26.376 { 00:24:26.376 "subsystem": "keyring", 00:24:26.376 "config": [ 00:24:26.376 { 00:24:26.376 "method": "keyring_file_add_key", 00:24:26.376 "params": { 00:24:26.376 "name": "key0", 00:24:26.376 "path": "/tmp/tmp.ULqIJsJpZD" 00:24:26.376 } 00:24:26.376 } 00:24:26.376 ] 00:24:26.376 }, 00:24:26.376 { 00:24:26.376 "subsystem": "iobuf", 00:24:26.376 "config": [ 00:24:26.376 { 00:24:26.376 "method": "iobuf_set_options", 00:24:26.376 "params": { 00:24:26.376 "small_pool_count": 8192, 00:24:26.376 "large_pool_count": 1024, 00:24:26.376 "small_bufsize": 8192, 00:24:26.376 "large_bufsize": 135168 00:24:26.376 } 00:24:26.376 } 00:24:26.376 ] 00:24:26.376 }, 00:24:26.376 { 00:24:26.376 "subsystem": "sock", 00:24:26.376 "config": [ 00:24:26.376 { 00:24:26.376 "method": "sock_set_default_impl", 00:24:26.376 "params": { 00:24:26.376 "impl_name": "posix" 00:24:26.376 } 00:24:26.376 }, 00:24:26.376 { 00:24:26.376 "method": "sock_impl_set_options", 00:24:26.376 "params": { 00:24:26.376 "impl_name": "ssl", 00:24:26.376 "recv_buf_size": 4096, 00:24:26.376 "send_buf_size": 4096, 00:24:26.376 "enable_recv_pipe": true, 00:24:26.376 "enable_quickack": false, 00:24:26.376 "enable_placement_id": 0, 00:24:26.376 "enable_zerocopy_send_server": true, 00:24:26.376 "enable_zerocopy_send_client": false, 00:24:26.376 "zerocopy_threshold": 0, 00:24:26.376 "tls_version": 0, 00:24:26.376 "enable_ktls": false 00:24:26.376 } 00:24:26.376 }, 00:24:26.376 { 00:24:26.376 "method": "sock_impl_set_options", 00:24:26.376 "params": { 00:24:26.376 "impl_name": "posix", 00:24:26.376 "recv_buf_size": 2097152, 
00:24:26.376 "send_buf_size": 2097152, 00:24:26.376 "enable_recv_pipe": true, 00:24:26.376 "enable_quickack": false, 00:24:26.376 "enable_placement_id": 0, 00:24:26.376 "enable_zerocopy_send_server": true, 00:24:26.376 "enable_zerocopy_send_client": false, 00:24:26.376 "zerocopy_threshold": 0, 00:24:26.376 "tls_version": 0, 00:24:26.376 "enable_ktls": false 00:24:26.376 } 00:24:26.376 } 00:24:26.376 ] 00:24:26.376 }, 00:24:26.376 { 00:24:26.376 "subsystem": "vmd", 00:24:26.376 "config": [] 00:24:26.376 }, 00:24:26.376 { 00:24:26.376 "subsystem": "accel", 00:24:26.376 "config": [ 00:24:26.376 { 00:24:26.376 "method": "accel_set_options", 00:24:26.376 "params": { 00:24:26.376 "small_cache_size": 128, 00:24:26.376 "large_cache_size": 16, 00:24:26.376 "task_count": 2048, 00:24:26.376 "sequence_count": 2048, 00:24:26.376 "buf_count": 2048 00:24:26.376 } 00:24:26.376 } 00:24:26.376 ] 00:24:26.376 }, 00:24:26.376 { 00:24:26.376 "subsystem": "bdev", 00:24:26.376 "config": [ 00:24:26.376 { 00:24:26.376 "method": "bdev_set_options", 00:24:26.376 "params": { 00:24:26.376 "bdev_io_pool_size": 65535, 00:24:26.376 "bdev_io_cache_size": 256, 00:24:26.376 "bdev_auto_examine": true, 00:24:26.376 "iobuf_small_cache_size": 128, 00:24:26.376 "iobuf_large_cache_size": 16 00:24:26.376 } 00:24:26.376 }, 00:24:26.376 { 00:24:26.376 "method": "bdev_raid_set_options", 00:24:26.376 "params": { 00:24:26.376 "process_window_size_kb": 1024, 00:24:26.376 "process_max_bandwidth_mb_sec": 0 00:24:26.376 } 00:24:26.376 }, 00:24:26.376 { 00:24:26.376 "method": "bdev_iscsi_set_options", 00:24:26.376 "params": { 00:24:26.376 "timeout_sec": 30 00:24:26.376 } 00:24:26.376 }, 00:24:26.376 { 00:24:26.376 "method": "bdev_nvme_set_options", 00:24:26.376 "params": { 00:24:26.376 "action_on_timeout": "none", 00:24:26.376 "timeout_us": 0, 00:24:26.376 "timeout_admin_us": 0, 00:24:26.376 "keep_alive_timeout_ms": 10000, 00:24:26.376 "arbitration_burst": 0, 00:24:26.376 "low_priority_weight": 0, 00:24:26.376 
"medium_priority_weight": 0, 00:24:26.376 "high_priority_weight": 0, 00:24:26.376 "nvme_adminq_poll_period_us": 10000, 00:24:26.376 "nvme_ioq_poll_period_us": 0, 00:24:26.376 "io_queue_requests": 0, 00:24:26.376 "delay_cmd_submit": true, 00:24:26.376 "transport_retry_count": 4, 00:24:26.376 "bdev_retry_count": 3, 00:24:26.376 "transport_ack_timeout": 0, 00:24:26.376 "ctrlr_loss_timeout_sec": 0, 00:24:26.376 "reconnect_delay_sec": 0, 00:24:26.376 "fast_io_fail_timeout_sec": 0, 00:24:26.376 "disable_auto_failback": false, 00:24:26.376 "generate_uuids": false, 00:24:26.376 "transport_tos": 0, 00:24:26.376 "nvme_error_stat": false, 00:24:26.376 "rdma_srq_size": 0, 00:24:26.376 "io_path_stat": false, 00:24:26.376 "allow_accel_sequence": false, 00:24:26.376 "rdma_max_cq_size": 0, 00:24:26.376 "rdma_cm_event_timeout_ms": 0, 00:24:26.376 "dhchap_digests": [ 00:24:26.376 "sha256", 00:24:26.376 "sha384", 00:24:26.376 "sha512" 00:24:26.376 ], 00:24:26.376 "dhchap_dhgroups": [ 00:24:26.376 "null", 00:24:26.376 "ffdhe2048", 00:24:26.376 "ffdhe3072", 00:24:26.376 "ffdhe4096", 00:24:26.376 "ffdhe6144", 00:24:26.376 "ffdhe8192" 00:24:26.376 ] 00:24:26.376 } 00:24:26.376 }, 00:24:26.376 { 00:24:26.376 "method": "bdev_nvme_set_hotplug", 00:24:26.376 "params": { 00:24:26.376 "period_us": 100000, 00:24:26.376 "enable": false 00:24:26.376 } 00:24:26.376 }, 00:24:26.376 { 00:24:26.376 "method": "bdev_malloc_create", 00:24:26.376 "params": { 00:24:26.376 "name": "malloc0", 00:24:26.376 "num_blocks": 8192, 00:24:26.376 "block_size": 4096, 00:24:26.376 "physical_block_size": 4096, 00:24:26.376 "uuid": "77b64a46-fe16-4c23-a23e-96f2f480b19a", 00:24:26.376 "optimal_io_boundary": 0, 00:24:26.376 "md_size": 0, 00:24:26.376 "dif_type": 0, 00:24:26.376 "dif_is_head_of_md": false, 00:24:26.376 "dif_pi_format": 0 00:24:26.377 } 00:24:26.377 }, 00:24:26.377 { 00:24:26.377 "method": "bdev_wait_for_examine" 00:24:26.377 } 00:24:26.377 ] 00:24:26.377 }, 00:24:26.377 { 00:24:26.377 "subsystem": "nbd", 
00:24:26.377 "config": [] 00:24:26.377 }, 00:24:26.377 { 00:24:26.377 "subsystem": "scheduler", 00:24:26.377 "config": [ 00:24:26.377 { 00:24:26.377 "method": "framework_set_scheduler", 00:24:26.377 "params": { 00:24:26.377 "name": "static" 00:24:26.377 } 00:24:26.377 } 00:24:26.377 ] 00:24:26.377 }, 00:24:26.377 { 00:24:26.377 "subsystem": "nvmf", 00:24:26.377 "config": [ 00:24:26.377 { 00:24:26.377 "method": "nvmf_set_config", 00:24:26.377 "params": { 00:24:26.377 "discovery_filter": "match_any", 00:24:26.377 "admin_cmd_passthru": { 00:24:26.377 "identify_ctrlr": false 00:24:26.377 }, 00:24:26.377 "dhchap_digests": [ 00:24:26.377 "sha256", 00:24:26.377 "sha384", 00:24:26.377 "sha512" 00:24:26.377 ], 00:24:26.377 "dhchap_dhgroups": [ 00:24:26.377 "null", 00:24:26.377 "ffdhe2048", 00:24:26.377 "ffdhe3072", 00:24:26.377 "ffdhe4096", 00:24:26.377 "ffdhe6144", 00:24:26.377 "ffdhe8192" 00:24:26.377 ] 00:24:26.377 } 00:24:26.377 }, 00:24:26.377 { 00:24:26.377 "method": "nvmf_set_max_subsystems", 00:24:26.377 "params": { 00:24:26.377 "max_subsystems": 1024 00:24:26.377 } 00:24:26.377 }, 00:24:26.377 { 00:24:26.377 "method": "nvmf_set_crdt", 00:24:26.377 "params": { 00:24:26.377 "crdt1": 0, 00:24:26.377 "crdt2": 0, 00:24:26.377 "crdt3": 0 00:24:26.377 } 00:24:26.377 }, 00:24:26.377 { 00:24:26.377 "method": "nvmf_create_transport", 00:24:26.377 "params": { 00:24:26.377 "trtype": "TCP", 00:24:26.377 "max_queue_depth": 128, 00:24:26.377 "max_io_qpairs_per_ctrlr": 127, 00:24:26.377 "in_capsule_data_size": 4096, 00:24:26.377 "max_io_size": 131072, 00:24:26.377 "io_unit_size": 131072, 00:24:26.377 "max_aq_depth": 128, 00:24:26.377 "num_shared_buffers": 511, 00:24:26.377 "buf_cache_size": 4294967295, 00:24:26.377 "dif_insert_or_strip": false, 00:24:26.377 "zcopy": false, 00:24:26.377 "c2h_success": false, 00:24:26.377 "sock_priority": 0, 00:24:26.377 "abort_timeout_sec": 1, 00:24:26.377 "ack_timeout": 0, 00:24:26.377 "data_wr_pool_size": 0 00:24:26.377 } 00:24:26.377 }, 
00:24:26.377 { 00:24:26.377 "method": "nvmf_create_subsystem", 00:24:26.377 "params": { 00:24:26.377 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:26.377 "allow_any_host": false, 00:24:26.377 "serial_number": "SPDK00000000000001", 00:24:26.377 "model_number": "SPDK bdev Controller", 00:24:26.377 "max_namespaces": 10, 00:24:26.377 "min_cntlid": 1, 00:24:26.377 "max_cntlid": 65519, 00:24:26.377 "ana_reporting": false 00:24:26.377 } 00:24:26.377 }, 00:24:26.377 { 00:24:26.377 "method": "nvmf_subsystem_add_host", 00:24:26.377 "params": { 00:24:26.377 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:26.377 "host": "nqn.2016-06.io.spdk:host1", 00:24:26.377 "psk": "key0" 00:24:26.377 } 00:24:26.377 }, 00:24:26.377 { 00:24:26.377 "method": "nvmf_subsystem_add_ns", 00:24:26.377 "params": { 00:24:26.377 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:26.377 "namespace": { 00:24:26.377 "nsid": 1, 00:24:26.377 "bdev_name": "malloc0", 00:24:26.377 "nguid": "77B64A46FE164C23A23E96F2F480B19A", 00:24:26.377 "uuid": "77b64a46-fe16-4c23-a23e-96f2f480b19a", 00:24:26.377 "no_auto_visible": false 00:24:26.377 } 00:24:26.377 } 00:24:26.377 }, 00:24:26.377 { 00:24:26.377 "method": "nvmf_subsystem_add_listener", 00:24:26.377 "params": { 00:24:26.377 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:26.377 "listen_address": { 00:24:26.377 "trtype": "TCP", 00:24:26.377 "adrfam": "IPv4", 00:24:26.377 "traddr": "10.0.0.2", 00:24:26.377 "trsvcid": "4420" 00:24:26.377 }, 00:24:26.377 "secure_channel": true 00:24:26.377 } 00:24:26.377 } 00:24:26.377 ] 00:24:26.377 } 00:24:26.377 ] 00:24:26.377 }' 00:24:26.377 15:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=3186565 00:24:26.377 15:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 3186565 00:24:26.377 15:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 
00:24:26.377 15:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3186565 ']' 00:24:26.377 15:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:26.377 15:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:26.377 15:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:26.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:26.377 15:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:26.377 15:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:26.377 [2024-10-01 15:42:05.692722] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:24:26.377 [2024-10-01 15:42:05.692779] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:26.377 [2024-10-01 15:42:05.729492] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:26.377 [2024-10-01 15:42:05.774841] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.377 [2024-10-01 15:42:05.803389] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:26.377 [2024-10-01 15:42:05.803422] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:26.377 [2024-10-01 15:42:05.803428] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:26.377 [2024-10-01 15:42:05.803433] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:26.377 [2024-10-01 15:42:05.803437] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:26.377 [2024-10-01 15:42:05.803482] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:26.638 [2024-10-01 15:42:05.999697] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:26.638 [2024-10-01 15:42:06.031691] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:26.638 [2024-10-01 15:42:06.031898] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:27.210 15:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:27.210 15:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:27.210 15:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:27.210 15:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:27.210 15:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:27.210 15:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:27.210 15:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=3186771 00:24:27.210 15:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 3186771 /var/tmp/bdevperf.sock 00:24:27.210 15:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3186771 ']' 00:24:27.210 15:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:27.210 15:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:27.210 15:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:27.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:27.210 15:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:24:27.210 15:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:27.210 15:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:27.210 15:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:24:27.210 "subsystems": [ 00:24:27.210 { 00:24:27.210 "subsystem": "keyring", 00:24:27.210 "config": [ 00:24:27.210 { 00:24:27.210 "method": "keyring_file_add_key", 00:24:27.210 "params": { 00:24:27.210 "name": "key0", 00:24:27.210 "path": "/tmp/tmp.ULqIJsJpZD" 00:24:27.210 } 00:24:27.210 } 00:24:27.210 ] 00:24:27.210 }, 00:24:27.210 { 00:24:27.211 "subsystem": "iobuf", 00:24:27.211 "config": [ 00:24:27.211 { 00:24:27.211 "method": "iobuf_set_options", 00:24:27.211 "params": { 00:24:27.211 "small_pool_count": 8192, 00:24:27.211 "large_pool_count": 1024, 00:24:27.211 "small_bufsize": 8192, 00:24:27.211 "large_bufsize": 135168 00:24:27.211 } 00:24:27.211 } 00:24:27.211 ] 00:24:27.211 }, 00:24:27.211 { 00:24:27.211 "subsystem": "sock", 00:24:27.211 "config": [ 00:24:27.211 { 00:24:27.211 "method": "sock_set_default_impl", 00:24:27.211 "params": { 00:24:27.211 "impl_name": "posix" 00:24:27.211 } 00:24:27.211 }, 00:24:27.211 { 00:24:27.211 "method": 
"sock_impl_set_options", 00:24:27.211 "params": { 00:24:27.211 "impl_name": "ssl", 00:24:27.211 "recv_buf_size": 4096, 00:24:27.211 "send_buf_size": 4096, 00:24:27.211 "enable_recv_pipe": true, 00:24:27.211 "enable_quickack": false, 00:24:27.211 "enable_placement_id": 0, 00:24:27.211 "enable_zerocopy_send_server": true, 00:24:27.211 "enable_zerocopy_send_client": false, 00:24:27.211 "zerocopy_threshold": 0, 00:24:27.211 "tls_version": 0, 00:24:27.211 "enable_ktls": false 00:24:27.211 } 00:24:27.211 }, 00:24:27.211 { 00:24:27.211 "method": "sock_impl_set_options", 00:24:27.211 "params": { 00:24:27.211 "impl_name": "posix", 00:24:27.211 "recv_buf_size": 2097152, 00:24:27.211 "send_buf_size": 2097152, 00:24:27.211 "enable_recv_pipe": true, 00:24:27.211 "enable_quickack": false, 00:24:27.211 "enable_placement_id": 0, 00:24:27.211 "enable_zerocopy_send_server": true, 00:24:27.211 "enable_zerocopy_send_client": false, 00:24:27.211 "zerocopy_threshold": 0, 00:24:27.211 "tls_version": 0, 00:24:27.211 "enable_ktls": false 00:24:27.211 } 00:24:27.211 } 00:24:27.211 ] 00:24:27.211 }, 00:24:27.211 { 00:24:27.211 "subsystem": "vmd", 00:24:27.211 "config": [] 00:24:27.211 }, 00:24:27.211 { 00:24:27.211 "subsystem": "accel", 00:24:27.211 "config": [ 00:24:27.211 { 00:24:27.211 "method": "accel_set_options", 00:24:27.211 "params": { 00:24:27.211 "small_cache_size": 128, 00:24:27.211 "large_cache_size": 16, 00:24:27.211 "task_count": 2048, 00:24:27.211 "sequence_count": 2048, 00:24:27.211 "buf_count": 2048 00:24:27.211 } 00:24:27.211 } 00:24:27.211 ] 00:24:27.211 }, 00:24:27.211 { 00:24:27.211 "subsystem": "bdev", 00:24:27.211 "config": [ 00:24:27.211 { 00:24:27.211 "method": "bdev_set_options", 00:24:27.211 "params": { 00:24:27.211 "bdev_io_pool_size": 65535, 00:24:27.211 "bdev_io_cache_size": 256, 00:24:27.211 "bdev_auto_examine": true, 00:24:27.211 "iobuf_small_cache_size": 128, 00:24:27.211 "iobuf_large_cache_size": 16 00:24:27.211 } 00:24:27.211 }, 00:24:27.211 { 00:24:27.211 
"method": "bdev_raid_set_options", 00:24:27.211 "params": { 00:24:27.211 "process_window_size_kb": 1024, 00:24:27.211 "process_max_bandwidth_mb_sec": 0 00:24:27.211 } 00:24:27.211 }, 00:24:27.211 { 00:24:27.211 "method": "bdev_iscsi_set_options", 00:24:27.211 "params": { 00:24:27.211 "timeout_sec": 30 00:24:27.211 } 00:24:27.211 }, 00:24:27.211 { 00:24:27.211 "method": "bdev_nvme_set_options", 00:24:27.211 "params": { 00:24:27.211 "action_on_timeout": "none", 00:24:27.211 "timeout_us": 0, 00:24:27.211 "timeout_admin_us": 0, 00:24:27.211 "keep_alive_timeout_ms": 10000, 00:24:27.211 "arbitration_burst": 0, 00:24:27.211 "low_priority_weight": 0, 00:24:27.211 "medium_priority_weight": 0, 00:24:27.211 "high_priority_weight": 0, 00:24:27.211 "nvme_adminq_poll_period_us": 10000, 00:24:27.211 "nvme_ioq_poll_period_us": 0, 00:24:27.211 "io_queue_requests": 512, 00:24:27.211 "delay_cmd_submit": true, 00:24:27.211 "transport_retry_count": 4, 00:24:27.211 "bdev_retry_count": 3, 00:24:27.211 "transport_ack_timeout": 0, 00:24:27.211 "ctrlr_loss_timeout_sec": 0, 00:24:27.211 "reconnect_delay_sec": 0, 00:24:27.211 "fast_io_fail_timeout_sec": 0, 00:24:27.211 "disable_auto_failback": false, 00:24:27.211 "generate_uuids": false, 00:24:27.211 "transport_tos": 0, 00:24:27.211 "nvme_error_stat": false, 00:24:27.211 "rdma_srq_size": 0, 00:24:27.211 "io_path_stat": false, 00:24:27.211 "allow_accel_sequence": false, 00:24:27.211 "rdma_max_cq_size": 0, 00:24:27.211 "rdma_cm_event_timeout_ms": 0, 00:24:27.211 "dhchap_digests": [ 00:24:27.211 "sha256", 00:24:27.211 "sha384", 00:24:27.211 "sha512" 00:24:27.211 ], 00:24:27.211 "dhchap_dhgroups": [ 00:24:27.211 "null", 00:24:27.211 "ffdhe2048", 00:24:27.211 "ffdhe3072", 00:24:27.211 "ffdhe4096", 00:24:27.211 "ffdhe6144", 00:24:27.211 "ffdhe8192" 00:24:27.211 ] 00:24:27.211 } 00:24:27.211 }, 00:24:27.211 { 00:24:27.211 "method": "bdev_nvme_attach_controller", 00:24:27.211 "params": { 00:24:27.211 "name": "TLSTEST", 00:24:27.211 "trtype": "TCP", 
00:24:27.211 "adrfam": "IPv4", 00:24:27.211 "traddr": "10.0.0.2", 00:24:27.211 "trsvcid": "4420", 00:24:27.211 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:27.211 "prchk_reftag": false, 00:24:27.211 "prchk_guard": false, 00:24:27.211 "ctrlr_loss_timeout_sec": 0, 00:24:27.211 "reconnect_delay_sec": 0, 00:24:27.211 "fast_io_fail_timeout_sec": 0, 00:24:27.211 "psk": "key0", 00:24:27.211 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:27.211 "hdgst": false, 00:24:27.211 "ddgst": false 00:24:27.211 } 00:24:27.211 }, 00:24:27.211 { 00:24:27.211 "method": "bdev_nvme_set_hotplug", 00:24:27.211 "params": { 00:24:27.211 "period_us": 100000, 00:24:27.211 "enable": false 00:24:27.211 } 00:24:27.211 }, 00:24:27.211 { 00:24:27.211 "method": "bdev_wait_for_examine" 00:24:27.211 } 00:24:27.211 ] 00:24:27.211 }, 00:24:27.211 { 00:24:27.211 "subsystem": "nbd", 00:24:27.211 "config": [] 00:24:27.211 } 00:24:27.211 ] 00:24:27.211 }' 00:24:27.211 [2024-10-01 15:42:06.574236] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:24:27.211 [2024-10-01 15:42:06.574292] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3186771 ] 00:24:27.211 [2024-10-01 15:42:06.605342] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:24:27.211 [2024-10-01 15:42:06.650772] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:27.472 [2024-10-01 15:42:06.679053] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:27.472 [2024-10-01 15:42:06.807709] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:28.043 15:42:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:28.043 15:42:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:28.043 15:42:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:28.043 Running I/O for 10 seconds... 00:24:38.178 6044.00 IOPS, 23.61 MiB/s 5949.00 IOPS, 23.24 MiB/s 5611.00 IOPS, 21.92 MiB/s 5658.75 IOPS, 22.10 MiB/s 5689.40 IOPS, 22.22 MiB/s 5748.67 IOPS, 22.46 MiB/s 5786.14 IOPS, 22.60 MiB/s 5719.88 IOPS, 22.34 MiB/s 5761.33 IOPS, 22.51 MiB/s 5754.90 IOPS, 22.48 MiB/s 00:24:38.178 Latency(us) 00:24:38.178 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:38.178 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:38.178 Verification LBA range: start 0x0 length 0x2000 00:24:38.178 TLSTESTn1 : 10.01 5759.88 22.50 0.00 0.00 22193.14 5625.17 41069.23 00:24:38.178 =================================================================================================================== 00:24:38.178 Total : 5759.88 22.50 0.00 0.00 22193.14 5625.17 41069.23 00:24:38.178 { 00:24:38.178 "results": [ 00:24:38.178 { 00:24:38.178 "job": "TLSTESTn1", 00:24:38.178 "core_mask": "0x4", 00:24:38.178 "workload": "verify", 00:24:38.178 "status": "finished", 00:24:38.178 "verify_range": { 00:24:38.178 "start": 0, 00:24:38.178 "length": 8192 00:24:38.178 }, 00:24:38.178 "queue_depth": 128, 00:24:38.178 "io_size": 4096, 
00:24:38.178 "runtime": 10.013573, 00:24:38.178 "iops": 5759.88211200937, 00:24:38.178 "mibps": 22.4995395000366, 00:24:38.178 "io_failed": 0, 00:24:38.178 "io_timeout": 0, 00:24:38.178 "avg_latency_us": 22193.13979113569, 00:24:38.178 "min_latency_us": 5625.173333333333, 00:24:38.178 "max_latency_us": 41069.22666666667 00:24:38.178 } 00:24:38.178 ], 00:24:38.178 "core_count": 1 00:24:38.178 } 00:24:38.178 15:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:38.178 15:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 3186771 00:24:38.178 15:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3186771 ']' 00:24:38.178 15:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3186771 00:24:38.178 15:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:38.178 15:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:38.178 15:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3186771 00:24:38.178 15:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:38.178 15:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:38.178 15:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3186771' 00:24:38.178 killing process with pid 3186771 00:24:38.178 15:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3186771 00:24:38.178 Received shutdown signal, test time was about 10.000000 seconds 00:24:38.178 00:24:38.178 Latency(us) 00:24:38.178 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:38.178 
=================================================================================================================== 00:24:38.178 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:38.178 15:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3186771 00:24:38.439 15:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 3186565 00:24:38.439 15:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3186565 ']' 00:24:38.439 15:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3186565 00:24:38.439 15:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:38.439 15:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:38.439 15:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3186565 00:24:38.439 15:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:38.439 15:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:38.439 15:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3186565' 00:24:38.439 killing process with pid 3186565 00:24:38.439 15:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3186565 00:24:38.439 15:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3186565 00:24:38.439 15:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:24:38.439 15:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:38.439 15:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:38.439 15:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@10 -- # set +x 00:24:38.439 15:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=3189410 00:24:38.439 15:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 3189410 00:24:38.439 15:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:38.439 15:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3189410 ']' 00:24:38.439 15:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:38.439 15:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:38.439 15:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:38.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:38.439 15:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:38.439 15:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:38.700 [2024-10-01 15:42:17.928006] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:24:38.700 [2024-10-01 15:42:17.928059] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:38.700 [2024-10-01 15:42:17.964110] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:24:38.700 [2024-10-01 15:42:18.011986] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:38.700 [2024-10-01 15:42:18.043489] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:38.700 [2024-10-01 15:42:18.043534] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:38.700 [2024-10-01 15:42:18.043542] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:38.700 [2024-10-01 15:42:18.043549] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:38.700 [2024-10-01 15:42:18.043555] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:38.700 [2024-10-01 15:42:18.043577] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:39.639 15:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:39.639 15:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:39.639 15:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:39.639 15:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:39.639 15:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:39.639 15:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:39.639 15:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.ULqIJsJpZD 00:24:39.639 15:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ULqIJsJpZD 00:24:39.639 15:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp 
-o 00:24:39.639 [2024-10-01 15:42:18.935233] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:39.639 15:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:39.898 15:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:39.898 [2024-10-01 15:42:19.300141] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:39.898 [2024-10-01 15:42:19.300390] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:39.898 15:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:40.157 malloc0 00:24:40.157 15:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:40.416 15:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ULqIJsJpZD 00:24:40.416 15:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:40.677 15:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:40.677 15:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=3189779 
00:24:40.677 15:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:40.677 15:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 3189779 /var/tmp/bdevperf.sock 00:24:40.677 15:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3189779 ']' 00:24:40.677 15:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:40.677 15:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:40.677 15:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:40.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:40.677 15:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:40.677 15:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:40.677 [2024-10-01 15:42:20.105427] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:24:40.677 [2024-10-01 15:42:20.105499] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3189779 ] 00:24:40.939 [2024-10-01 15:42:20.140002] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:24:40.939 [2024-10-01 15:42:20.188538] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:40.939 [2024-10-01 15:42:20.221044] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:40.939 15:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:40.939 15:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:40.939 15:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ULqIJsJpZD 00:24:41.200 15:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:41.200 [2024-10-01 15:42:20.636328] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:41.461 nvme0n1 00:24:41.461 15:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:41.461 Running I/O for 1 seconds... 
00:24:42.402 4468.00 IOPS, 17.45 MiB/s 00:24:42.402 Latency(us) 00:24:42.402 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:42.402 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:42.402 Verification LBA range: start 0x0 length 0x2000 00:24:42.402 nvme0n1 : 1.02 4504.74 17.60 0.00 0.00 28169.54 4614.83 67283.63 00:24:42.402 =================================================================================================================== 00:24:42.402 Total : 4504.74 17.60 0.00 0.00 28169.54 4614.83 67283.63 00:24:42.402 { 00:24:42.402 "results": [ 00:24:42.402 { 00:24:42.402 "job": "nvme0n1", 00:24:42.402 "core_mask": "0x2", 00:24:42.402 "workload": "verify", 00:24:42.402 "status": "finished", 00:24:42.402 "verify_range": { 00:24:42.402 "start": 0, 00:24:42.402 "length": 8192 00:24:42.402 }, 00:24:42.402 "queue_depth": 128, 00:24:42.402 "io_size": 4096, 00:24:42.402 "runtime": 1.020481, 00:24:42.402 "iops": 4504.738451769313, 00:24:42.402 "mibps": 17.596634577223877, 00:24:42.402 "io_failed": 0, 00:24:42.402 "io_timeout": 0, 00:24:42.402 "avg_latency_us": 28169.54426510043, 00:24:42.402 "min_latency_us": 4614.826666666667, 00:24:42.402 "max_latency_us": 67283.62666666666 00:24:42.402 } 00:24:42.402 ], 00:24:42.402 "core_count": 1 00:24:42.402 } 00:24:42.664 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 3189779 00:24:42.664 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3189779 ']' 00:24:42.664 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3189779 00:24:42.664 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:42.664 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:42.664 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
3189779 00:24:42.664 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:42.664 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:42.664 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3189779' 00:24:42.664 killing process with pid 3189779 00:24:42.664 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3189779 00:24:42.664 Received shutdown signal, test time was about 1.000000 seconds 00:24:42.664 00:24:42.664 Latency(us) 00:24:42.664 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:42.664 =================================================================================================================== 00:24:42.664 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:42.664 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3189779 00:24:42.664 15:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 3189410 00:24:42.664 15:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3189410 ']' 00:24:42.664 15:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3189410 00:24:42.664 15:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:42.664 15:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:42.664 15:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3189410 00:24:42.664 15:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:42.664 15:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:42.664 15:42:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3189410' 00:24:42.664 killing process with pid 3189410 00:24:42.664 15:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3189410 00:24:42.664 15:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3189410 00:24:42.926 15:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:24:42.926 15:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:42.926 15:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:42.926 15:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:42.926 15:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=3190305 00:24:42.926 15:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 3190305 00:24:42.926 15:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:42.926 15:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3190305 ']' 00:24:42.926 15:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:42.926 15:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:42.926 15:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:42.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:42.926 15:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:42.926 15:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:42.926 [2024-10-01 15:42:22.311054] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:24:42.926 [2024-10-01 15:42:22.311111] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:42.926 [2024-10-01 15:42:22.350120] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:43.187 [2024-10-01 15:42:22.399736] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:43.187 [2024-10-01 15:42:22.445624] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:43.187 [2024-10-01 15:42:22.445679] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:43.187 [2024-10-01 15:42:22.445688] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:43.187 [2024-10-01 15:42:22.445701] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:43.187 [2024-10-01 15:42:22.445707] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:43.187 [2024-10-01 15:42:22.445733] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:43.760 15:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:43.760 15:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:43.760 15:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:43.760 15:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:43.760 15:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:43.760 15:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:43.760 15:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:24:43.760 15:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.760 15:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:43.760 [2024-10-01 15:42:23.184625] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:43.760 malloc0 00:24:44.021 [2024-10-01 15:42:23.224550] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:44.021 [2024-10-01 15:42:23.224783] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:44.021 15:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.021 15:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=3190476 00:24:44.021 15:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 3190476 /var/tmp/bdevperf.sock 00:24:44.021 15:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf 
-m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:44.021 15:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3190476 ']' 00:24:44.021 15:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:44.021 15:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:44.021 15:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:44.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:44.021 15:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:44.021 15:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:44.021 [2024-10-01 15:42:23.312608] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:24:44.021 [2024-10-01 15:42:23.312661] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3190476 ] 00:24:44.021 [2024-10-01 15:42:23.343983] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
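The `waitforlisten` step above blocks until bdevperf starts accepting connections on its UNIX-domain RPC socket (`/var/tmp/bdevperf.sock`) before any `rpc.py` commands are issued. The actual helper is a shell function in `autotest_common.sh`; the sketch below is an illustrative Python equivalent of that polling loop, not SPDK's implementation:

```python
import socket
import time

def wait_for_unix_socket(path: str, retries: int = 100, delay: float = 0.1) -> bool:
    """Poll until some process is accepting connections on a UNIX-domain socket."""
    for _ in range(retries):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(path)
            return True  # a listener picked up the connection
        except OSError:
            time.sleep(delay)  # socket absent or not listening yet; retry
        finally:
            s.close()
    return False
```

Connecting and immediately closing is enough for the liveness check; the real helper additionally retries up to a `max_retries` count and verifies the target PID is still alive.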
00:24:44.021 [2024-10-01 15:42:23.390995] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:44.021 [2024-10-01 15:42:23.420205] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:44.975 15:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:44.975 15:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:44.975 15:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ULqIJsJpZD 00:24:44.975 15:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:44.975 [2024-10-01 15:42:24.430407] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:45.236 nvme0n1 00:24:45.236 15:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:45.236 Running I/O for 1 seconds... 
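bdevperf derives its MiB/s column from the measured IOPS and the fixed 4096-byte I/O size (`MiB/s = IOPS * io_size / 2^20`). The relation can be checked against the numbers in the earlier results JSON block, e.g. the 4504.74 IOPS run reporting 17.5966 MiB/s:

```python
def mibps_from_iops(iops: float, io_size: int) -> float:
    """Convert an IOPS figure to MiB/s for a fixed I/O size in bytes."""
    return iops * io_size / (1024 * 1024)

# "iops" and "io_size" values taken from the bdevperf results JSON above.
print(mibps_from_iops(4504.738451769313, 4096))  # matches the reported "mibps"
```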
00:24:46.178 4533.00 IOPS, 17.71 MiB/s 00:24:46.178 Latency(us) 00:24:46.178 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:46.178 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:46.178 Verification LBA range: start 0x0 length 0x2000 00:24:46.178 nvme0n1 : 1.01 4608.21 18.00 0.00 0.00 27612.04 4724.05 28180.48 00:24:46.178 =================================================================================================================== 00:24:46.178 Total : 4608.21 18.00 0.00 0.00 27612.04 4724.05 28180.48 00:24:46.178 { 00:24:46.178 "results": [ 00:24:46.178 { 00:24:46.178 "job": "nvme0n1", 00:24:46.178 "core_mask": "0x2", 00:24:46.178 "workload": "verify", 00:24:46.178 "status": "finished", 00:24:46.178 "verify_range": { 00:24:46.178 "start": 0, 00:24:46.178 "length": 8192 00:24:46.178 }, 00:24:46.178 "queue_depth": 128, 00:24:46.178 "io_size": 4096, 00:24:46.178 "runtime": 1.011672, 00:24:46.178 "iops": 4608.212938580884, 00:24:46.178 "mibps": 18.000831791331578, 00:24:46.178 "io_failed": 0, 00:24:46.178 "io_timeout": 0, 00:24:46.178 "avg_latency_us": 27612.03919347919, 00:24:46.178 "min_latency_us": 4724.053333333333, 00:24:46.178 "max_latency_us": 28180.48 00:24:46.178 } 00:24:46.178 ], 00:24:46.178 "core_count": 1 00:24:46.178 } 00:24:46.441 15:42:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:24:46.441 15:42:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.441 15:42:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:46.441 15:42:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.441 15:42:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:24:46.441 "subsystems": [ 00:24:46.441 { 00:24:46.441 "subsystem": "keyring", 00:24:46.441 "config": [ 00:24:46.441 { 00:24:46.441 "method": 
"keyring_file_add_key", 00:24:46.441 "params": { 00:24:46.441 "name": "key0", 00:24:46.441 "path": "/tmp/tmp.ULqIJsJpZD" 00:24:46.441 } 00:24:46.441 } 00:24:46.441 ] 00:24:46.441 }, 00:24:46.441 { 00:24:46.441 "subsystem": "iobuf", 00:24:46.441 "config": [ 00:24:46.441 { 00:24:46.441 "method": "iobuf_set_options", 00:24:46.441 "params": { 00:24:46.441 "small_pool_count": 8192, 00:24:46.441 "large_pool_count": 1024, 00:24:46.441 "small_bufsize": 8192, 00:24:46.441 "large_bufsize": 135168 00:24:46.441 } 00:24:46.441 } 00:24:46.441 ] 00:24:46.441 }, 00:24:46.441 { 00:24:46.441 "subsystem": "sock", 00:24:46.441 "config": [ 00:24:46.441 { 00:24:46.441 "method": "sock_set_default_impl", 00:24:46.441 "params": { 00:24:46.441 "impl_name": "posix" 00:24:46.441 } 00:24:46.441 }, 00:24:46.441 { 00:24:46.441 "method": "sock_impl_set_options", 00:24:46.441 "params": { 00:24:46.441 "impl_name": "ssl", 00:24:46.441 "recv_buf_size": 4096, 00:24:46.441 "send_buf_size": 4096, 00:24:46.441 "enable_recv_pipe": true, 00:24:46.441 "enable_quickack": false, 00:24:46.441 "enable_placement_id": 0, 00:24:46.441 "enable_zerocopy_send_server": true, 00:24:46.441 "enable_zerocopy_send_client": false, 00:24:46.441 "zerocopy_threshold": 0, 00:24:46.441 "tls_version": 0, 00:24:46.441 "enable_ktls": false 00:24:46.441 } 00:24:46.441 }, 00:24:46.441 { 00:24:46.441 "method": "sock_impl_set_options", 00:24:46.441 "params": { 00:24:46.441 "impl_name": "posix", 00:24:46.441 "recv_buf_size": 2097152, 00:24:46.441 "send_buf_size": 2097152, 00:24:46.441 "enable_recv_pipe": true, 00:24:46.441 "enable_quickack": false, 00:24:46.441 "enable_placement_id": 0, 00:24:46.441 "enable_zerocopy_send_server": true, 00:24:46.441 "enable_zerocopy_send_client": false, 00:24:46.441 "zerocopy_threshold": 0, 00:24:46.441 "tls_version": 0, 00:24:46.441 "enable_ktls": false 00:24:46.441 } 00:24:46.441 } 00:24:46.441 ] 00:24:46.441 }, 00:24:46.441 { 00:24:46.441 "subsystem": "vmd", 00:24:46.441 "config": [] 00:24:46.441 }, 
00:24:46.441 { 00:24:46.441 "subsystem": "accel", 00:24:46.441 "config": [ 00:24:46.441 { 00:24:46.441 "method": "accel_set_options", 00:24:46.441 "params": { 00:24:46.441 "small_cache_size": 128, 00:24:46.441 "large_cache_size": 16, 00:24:46.441 "task_count": 2048, 00:24:46.441 "sequence_count": 2048, 00:24:46.441 "buf_count": 2048 00:24:46.441 } 00:24:46.441 } 00:24:46.441 ] 00:24:46.441 }, 00:24:46.441 { 00:24:46.441 "subsystem": "bdev", 00:24:46.441 "config": [ 00:24:46.441 { 00:24:46.441 "method": "bdev_set_options", 00:24:46.441 "params": { 00:24:46.441 "bdev_io_pool_size": 65535, 00:24:46.441 "bdev_io_cache_size": 256, 00:24:46.441 "bdev_auto_examine": true, 00:24:46.441 "iobuf_small_cache_size": 128, 00:24:46.441 "iobuf_large_cache_size": 16 00:24:46.441 } 00:24:46.441 }, 00:24:46.441 { 00:24:46.441 "method": "bdev_raid_set_options", 00:24:46.441 "params": { 00:24:46.441 "process_window_size_kb": 1024, 00:24:46.441 "process_max_bandwidth_mb_sec": 0 00:24:46.441 } 00:24:46.441 }, 00:24:46.441 { 00:24:46.441 "method": "bdev_iscsi_set_options", 00:24:46.441 "params": { 00:24:46.441 "timeout_sec": 30 00:24:46.441 } 00:24:46.441 }, 00:24:46.441 { 00:24:46.441 "method": "bdev_nvme_set_options", 00:24:46.441 "params": { 00:24:46.441 "action_on_timeout": "none", 00:24:46.441 "timeout_us": 0, 00:24:46.441 "timeout_admin_us": 0, 00:24:46.441 "keep_alive_timeout_ms": 10000, 00:24:46.441 "arbitration_burst": 0, 00:24:46.441 "low_priority_weight": 0, 00:24:46.441 "medium_priority_weight": 0, 00:24:46.441 "high_priority_weight": 0, 00:24:46.441 "nvme_adminq_poll_period_us": 10000, 00:24:46.441 "nvme_ioq_poll_period_us": 0, 00:24:46.441 "io_queue_requests": 0, 00:24:46.441 "delay_cmd_submit": true, 00:24:46.441 "transport_retry_count": 4, 00:24:46.441 "bdev_retry_count": 3, 00:24:46.441 "transport_ack_timeout": 0, 00:24:46.441 "ctrlr_loss_timeout_sec": 0, 00:24:46.441 "reconnect_delay_sec": 0, 00:24:46.441 "fast_io_fail_timeout_sec": 0, 00:24:46.441 
"disable_auto_failback": false, 00:24:46.441 "generate_uuids": false, 00:24:46.441 "transport_tos": 0, 00:24:46.441 "nvme_error_stat": false, 00:24:46.441 "rdma_srq_size": 0, 00:24:46.441 "io_path_stat": false, 00:24:46.441 "allow_accel_sequence": false, 00:24:46.441 "rdma_max_cq_size": 0, 00:24:46.441 "rdma_cm_event_timeout_ms": 0, 00:24:46.441 "dhchap_digests": [ 00:24:46.441 "sha256", 00:24:46.441 "sha384", 00:24:46.441 "sha512" 00:24:46.441 ], 00:24:46.441 "dhchap_dhgroups": [ 00:24:46.441 "null", 00:24:46.441 "ffdhe2048", 00:24:46.441 "ffdhe3072", 00:24:46.441 "ffdhe4096", 00:24:46.441 "ffdhe6144", 00:24:46.441 "ffdhe8192" 00:24:46.441 ] 00:24:46.441 } 00:24:46.441 }, 00:24:46.441 { 00:24:46.441 "method": "bdev_nvme_set_hotplug", 00:24:46.441 "params": { 00:24:46.441 "period_us": 100000, 00:24:46.441 "enable": false 00:24:46.441 } 00:24:46.441 }, 00:24:46.441 { 00:24:46.441 "method": "bdev_malloc_create", 00:24:46.441 "params": { 00:24:46.441 "name": "malloc0", 00:24:46.441 "num_blocks": 8192, 00:24:46.441 "block_size": 4096, 00:24:46.441 "physical_block_size": 4096, 00:24:46.441 "uuid": "665f2e2c-1a84-49c5-9df3-958e4e15d3b3", 00:24:46.441 "optimal_io_boundary": 0, 00:24:46.441 "md_size": 0, 00:24:46.441 "dif_type": 0, 00:24:46.441 "dif_is_head_of_md": false, 00:24:46.441 "dif_pi_format": 0 00:24:46.441 } 00:24:46.441 }, 00:24:46.441 { 00:24:46.441 "method": "bdev_wait_for_examine" 00:24:46.441 } 00:24:46.441 ] 00:24:46.441 }, 00:24:46.441 { 00:24:46.441 "subsystem": "nbd", 00:24:46.441 "config": [] 00:24:46.441 }, 00:24:46.441 { 00:24:46.441 "subsystem": "scheduler", 00:24:46.441 "config": [ 00:24:46.441 { 00:24:46.441 "method": "framework_set_scheduler", 00:24:46.441 "params": { 00:24:46.441 "name": "static" 00:24:46.441 } 00:24:46.441 } 00:24:46.441 ] 00:24:46.441 }, 00:24:46.441 { 00:24:46.441 "subsystem": "nvmf", 00:24:46.441 "config": [ 00:24:46.441 { 00:24:46.441 "method": "nvmf_set_config", 00:24:46.441 "params": { 00:24:46.441 "discovery_filter": 
"match_any", 00:24:46.441 "admin_cmd_passthru": { 00:24:46.441 "identify_ctrlr": false 00:24:46.441 }, 00:24:46.441 "dhchap_digests": [ 00:24:46.441 "sha256", 00:24:46.441 "sha384", 00:24:46.441 "sha512" 00:24:46.441 ], 00:24:46.441 "dhchap_dhgroups": [ 00:24:46.441 "null", 00:24:46.441 "ffdhe2048", 00:24:46.441 "ffdhe3072", 00:24:46.441 "ffdhe4096", 00:24:46.441 "ffdhe6144", 00:24:46.441 "ffdhe8192" 00:24:46.441 ] 00:24:46.441 } 00:24:46.441 }, 00:24:46.441 { 00:24:46.441 "method": "nvmf_set_max_subsystems", 00:24:46.441 "params": { 00:24:46.441 "max_subsystems": 1024 00:24:46.441 } 00:24:46.441 }, 00:24:46.441 { 00:24:46.441 "method": "nvmf_set_crdt", 00:24:46.441 "params": { 00:24:46.441 "crdt1": 0, 00:24:46.441 "crdt2": 0, 00:24:46.441 "crdt3": 0 00:24:46.441 } 00:24:46.441 }, 00:24:46.441 { 00:24:46.441 "method": "nvmf_create_transport", 00:24:46.441 "params": { 00:24:46.441 "trtype": "TCP", 00:24:46.441 "max_queue_depth": 128, 00:24:46.441 "max_io_qpairs_per_ctrlr": 127, 00:24:46.442 "in_capsule_data_size": 4096, 00:24:46.442 "max_io_size": 131072, 00:24:46.442 "io_unit_size": 131072, 00:24:46.442 "max_aq_depth": 128, 00:24:46.442 "num_shared_buffers": 511, 00:24:46.442 "buf_cache_size": 4294967295, 00:24:46.442 "dif_insert_or_strip": false, 00:24:46.442 "zcopy": false, 00:24:46.442 "c2h_success": false, 00:24:46.442 "sock_priority": 0, 00:24:46.442 "abort_timeout_sec": 1, 00:24:46.442 "ack_timeout": 0, 00:24:46.442 "data_wr_pool_size": 0 00:24:46.442 } 00:24:46.442 }, 00:24:46.442 { 00:24:46.442 "method": "nvmf_create_subsystem", 00:24:46.442 "params": { 00:24:46.442 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:46.442 "allow_any_host": false, 00:24:46.442 "serial_number": "00000000000000000000", 00:24:46.442 "model_number": "SPDK bdev Controller", 00:24:46.442 "max_namespaces": 32, 00:24:46.442 "min_cntlid": 1, 00:24:46.442 "max_cntlid": 65519, 00:24:46.442 "ana_reporting": false 00:24:46.442 } 00:24:46.442 }, 00:24:46.442 { 00:24:46.442 "method": 
"nvmf_subsystem_add_host", 00:24:46.442 "params": { 00:24:46.442 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:46.442 "host": "nqn.2016-06.io.spdk:host1", 00:24:46.442 "psk": "key0" 00:24:46.442 } 00:24:46.442 }, 00:24:46.442 { 00:24:46.442 "method": "nvmf_subsystem_add_ns", 00:24:46.442 "params": { 00:24:46.442 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:46.442 "namespace": { 00:24:46.442 "nsid": 1, 00:24:46.442 "bdev_name": "malloc0", 00:24:46.442 "nguid": "665F2E2C1A8449C59DF3958E4E15D3B3", 00:24:46.442 "uuid": "665f2e2c-1a84-49c5-9df3-958e4e15d3b3", 00:24:46.442 "no_auto_visible": false 00:24:46.442 } 00:24:46.442 } 00:24:46.442 }, 00:24:46.442 { 00:24:46.442 "method": "nvmf_subsystem_add_listener", 00:24:46.442 "params": { 00:24:46.442 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:46.442 "listen_address": { 00:24:46.442 "trtype": "TCP", 00:24:46.442 "adrfam": "IPv4", 00:24:46.442 "traddr": "10.0.0.2", 00:24:46.442 "trsvcid": "4420" 00:24:46.442 }, 00:24:46.442 "secure_channel": false, 00:24:46.442 "sock_impl": "ssl" 00:24:46.442 } 00:24:46.442 } 00:24:46.442 ] 00:24:46.442 } 00:24:46.442 ] 00:24:46.442 }' 00:24:46.442 15:42:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:46.704 15:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:24:46.704 "subsystems": [ 00:24:46.704 { 00:24:46.704 "subsystem": "keyring", 00:24:46.704 "config": [ 00:24:46.704 { 00:24:46.704 "method": "keyring_file_add_key", 00:24:46.704 "params": { 00:24:46.704 "name": "key0", 00:24:46.704 "path": "/tmp/tmp.ULqIJsJpZD" 00:24:46.704 } 00:24:46.704 } 00:24:46.704 ] 00:24:46.704 }, 00:24:46.704 { 00:24:46.704 "subsystem": "iobuf", 00:24:46.704 "config": [ 00:24:46.704 { 00:24:46.704 "method": "iobuf_set_options", 00:24:46.704 "params": { 00:24:46.704 "small_pool_count": 8192, 00:24:46.704 "large_pool_count": 1024, 00:24:46.704 "small_bufsize": 
8192, 00:24:46.704 "large_bufsize": 135168 00:24:46.704 } 00:24:46.704 } 00:24:46.704 ] 00:24:46.704 }, 00:24:46.704 { 00:24:46.704 "subsystem": "sock", 00:24:46.704 "config": [ 00:24:46.704 { 00:24:46.704 "method": "sock_set_default_impl", 00:24:46.704 "params": { 00:24:46.704 "impl_name": "posix" 00:24:46.704 } 00:24:46.704 }, 00:24:46.704 { 00:24:46.704 "method": "sock_impl_set_options", 00:24:46.704 "params": { 00:24:46.704 "impl_name": "ssl", 00:24:46.704 "recv_buf_size": 4096, 00:24:46.704 "send_buf_size": 4096, 00:24:46.704 "enable_recv_pipe": true, 00:24:46.704 "enable_quickack": false, 00:24:46.704 "enable_placement_id": 0, 00:24:46.704 "enable_zerocopy_send_server": true, 00:24:46.704 "enable_zerocopy_send_client": false, 00:24:46.704 "zerocopy_threshold": 0, 00:24:46.704 "tls_version": 0, 00:24:46.704 "enable_ktls": false 00:24:46.704 } 00:24:46.704 }, 00:24:46.704 { 00:24:46.704 "method": "sock_impl_set_options", 00:24:46.704 "params": { 00:24:46.704 "impl_name": "posix", 00:24:46.704 "recv_buf_size": 2097152, 00:24:46.704 "send_buf_size": 2097152, 00:24:46.704 "enable_recv_pipe": true, 00:24:46.704 "enable_quickack": false, 00:24:46.704 "enable_placement_id": 0, 00:24:46.704 "enable_zerocopy_send_server": true, 00:24:46.704 "enable_zerocopy_send_client": false, 00:24:46.704 "zerocopy_threshold": 0, 00:24:46.704 "tls_version": 0, 00:24:46.704 "enable_ktls": false 00:24:46.704 } 00:24:46.704 } 00:24:46.704 ] 00:24:46.704 }, 00:24:46.704 { 00:24:46.704 "subsystem": "vmd", 00:24:46.704 "config": [] 00:24:46.704 }, 00:24:46.704 { 00:24:46.704 "subsystem": "accel", 00:24:46.704 "config": [ 00:24:46.704 { 00:24:46.704 "method": "accel_set_options", 00:24:46.704 "params": { 00:24:46.704 "small_cache_size": 128, 00:24:46.704 "large_cache_size": 16, 00:24:46.704 "task_count": 2048, 00:24:46.704 "sequence_count": 2048, 00:24:46.704 "buf_count": 2048 00:24:46.704 } 00:24:46.704 } 00:24:46.704 ] 00:24:46.704 }, 00:24:46.704 { 00:24:46.704 "subsystem": "bdev", 
00:24:46.704 "config": [ 00:24:46.704 { 00:24:46.704 "method": "bdev_set_options", 00:24:46.704 "params": { 00:24:46.704 "bdev_io_pool_size": 65535, 00:24:46.704 "bdev_io_cache_size": 256, 00:24:46.704 "bdev_auto_examine": true, 00:24:46.704 "iobuf_small_cache_size": 128, 00:24:46.704 "iobuf_large_cache_size": 16 00:24:46.704 } 00:24:46.704 }, 00:24:46.704 { 00:24:46.704 "method": "bdev_raid_set_options", 00:24:46.704 "params": { 00:24:46.704 "process_window_size_kb": 1024, 00:24:46.704 "process_max_bandwidth_mb_sec": 0 00:24:46.704 } 00:24:46.704 }, 00:24:46.704 { 00:24:46.704 "method": "bdev_iscsi_set_options", 00:24:46.704 "params": { 00:24:46.704 "timeout_sec": 30 00:24:46.704 } 00:24:46.704 }, 00:24:46.704 { 00:24:46.704 "method": "bdev_nvme_set_options", 00:24:46.704 "params": { 00:24:46.704 "action_on_timeout": "none", 00:24:46.704 "timeout_us": 0, 00:24:46.704 "timeout_admin_us": 0, 00:24:46.704 "keep_alive_timeout_ms": 10000, 00:24:46.704 "arbitration_burst": 0, 00:24:46.704 "low_priority_weight": 0, 00:24:46.704 "medium_priority_weight": 0, 00:24:46.704 "high_priority_weight": 0, 00:24:46.704 "nvme_adminq_poll_period_us": 10000, 00:24:46.704 "nvme_ioq_poll_period_us": 0, 00:24:46.704 "io_queue_requests": 512, 00:24:46.704 "delay_cmd_submit": true, 00:24:46.704 "transport_retry_count": 4, 00:24:46.704 "bdev_retry_count": 3, 00:24:46.704 "transport_ack_timeout": 0, 00:24:46.704 "ctrlr_loss_timeout_sec": 0, 00:24:46.704 "reconnect_delay_sec": 0, 00:24:46.704 "fast_io_fail_timeout_sec": 0, 00:24:46.704 "disable_auto_failback": false, 00:24:46.704 "generate_uuids": false, 00:24:46.704 "transport_tos": 0, 00:24:46.704 "nvme_error_stat": false, 00:24:46.704 "rdma_srq_size": 0, 00:24:46.704 "io_path_stat": false, 00:24:46.704 "allow_accel_sequence": false, 00:24:46.704 "rdma_max_cq_size": 0, 00:24:46.704 "rdma_cm_event_timeout_ms": 0, 00:24:46.704 "dhchap_digests": [ 00:24:46.704 "sha256", 00:24:46.704 "sha384", 00:24:46.704 "sha512" 00:24:46.704 ], 00:24:46.704 
"dhchap_dhgroups": [ 00:24:46.704 "null", 00:24:46.704 "ffdhe2048", 00:24:46.704 "ffdhe3072", 00:24:46.704 "ffdhe4096", 00:24:46.704 "ffdhe6144", 00:24:46.704 "ffdhe8192" 00:24:46.704 ] 00:24:46.704 } 00:24:46.704 }, 00:24:46.704 { 00:24:46.704 "method": "bdev_nvme_attach_controller", 00:24:46.704 "params": { 00:24:46.704 "name": "nvme0", 00:24:46.704 "trtype": "TCP", 00:24:46.704 "adrfam": "IPv4", 00:24:46.704 "traddr": "10.0.0.2", 00:24:46.704 "trsvcid": "4420", 00:24:46.704 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:46.704 "prchk_reftag": false, 00:24:46.704 "prchk_guard": false, 00:24:46.704 "ctrlr_loss_timeout_sec": 0, 00:24:46.704 "reconnect_delay_sec": 0, 00:24:46.704 "fast_io_fail_timeout_sec": 0, 00:24:46.704 "psk": "key0", 00:24:46.704 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:46.704 "hdgst": false, 00:24:46.704 "ddgst": false 00:24:46.704 } 00:24:46.704 }, 00:24:46.704 { 00:24:46.704 "method": "bdev_nvme_set_hotplug", 00:24:46.704 "params": { 00:24:46.704 "period_us": 100000, 00:24:46.704 "enable": false 00:24:46.704 } 00:24:46.704 }, 00:24:46.704 { 00:24:46.704 "method": "bdev_enable_histogram", 00:24:46.704 "params": { 00:24:46.704 "name": "nvme0n1", 00:24:46.704 "enable": true 00:24:46.704 } 00:24:46.704 }, 00:24:46.704 { 00:24:46.704 "method": "bdev_wait_for_examine" 00:24:46.704 } 00:24:46.704 ] 00:24:46.705 }, 00:24:46.705 { 00:24:46.705 "subsystem": "nbd", 00:24:46.705 "config": [] 00:24:46.705 } 00:24:46.705 ] 00:24:46.705 }' 00:24:46.705 15:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 3190476 00:24:46.705 15:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3190476 ']' 00:24:46.705 15:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3190476 00:24:46.705 15:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:46.705 15:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 
-- # '[' Linux = Linux ']' 00:24:46.705 15:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3190476 00:24:46.705 15:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:46.705 15:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:46.705 15:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3190476' 00:24:46.705 killing process with pid 3190476 00:24:46.705 15:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3190476 00:24:46.705 Received shutdown signal, test time was about 1.000000 seconds 00:24:46.705 00:24:46.705 Latency(us) 00:24:46.705 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:46.705 =================================================================================================================== 00:24:46.705 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:46.705 15:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3190476 00:24:46.965 15:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 3190305 00:24:46.965 15:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3190305 ']' 00:24:46.965 15:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3190305 00:24:46.965 15:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:46.965 15:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:46.965 15:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3190305 00:24:46.965 15:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:46.965 15:42:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:46.965 15:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3190305' 00:24:46.965 killing process with pid 3190305 00:24:46.965 15:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3190305 00:24:46.965 15:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3190305 00:24:46.965 15:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:24:46.965 15:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:46.965 15:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:46.965 15:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:46.965 15:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:24:46.965 "subsystems": [ 00:24:46.965 { 00:24:46.965 "subsystem": "keyring", 00:24:46.965 "config": [ 00:24:46.965 { 00:24:46.965 "method": "keyring_file_add_key", 00:24:46.965 "params": { 00:24:46.965 "name": "key0", 00:24:46.965 "path": "/tmp/tmp.ULqIJsJpZD" 00:24:46.965 } 00:24:46.965 } 00:24:46.965 ] 00:24:46.965 }, 00:24:46.965 { 00:24:46.965 "subsystem": "iobuf", 00:24:46.965 "config": [ 00:24:46.965 { 00:24:46.965 "method": "iobuf_set_options", 00:24:46.965 "params": { 00:24:46.965 "small_pool_count": 8192, 00:24:46.965 "large_pool_count": 1024, 00:24:46.965 "small_bufsize": 8192, 00:24:46.965 "large_bufsize": 135168 00:24:46.965 } 00:24:46.965 } 00:24:46.965 ] 00:24:46.965 }, 00:24:46.965 { 00:24:46.965 "subsystem": "sock", 00:24:46.965 "config": [ 00:24:46.965 { 00:24:46.965 "method": "sock_set_default_impl", 00:24:46.966 "params": { 00:24:46.966 "impl_name": "posix" 00:24:46.966 } 00:24:46.966 }, 00:24:46.966 { 00:24:46.966 "method": 
"sock_impl_set_options", 00:24:46.966 "params": { 00:24:46.966 "impl_name": "ssl", 00:24:46.966 "recv_buf_size": 4096, 00:24:46.966 "send_buf_size": 4096, 00:24:46.966 "enable_recv_pipe": true, 00:24:46.966 "enable_quickack": false, 00:24:46.966 "enable_placement_id": 0, 00:24:46.966 "enable_zerocopy_send_server": true, 00:24:46.966 "enable_zerocopy_send_client": false, 00:24:46.966 "zerocopy_threshold": 0, 00:24:46.966 "tls_version": 0, 00:24:46.966 "enable_ktls": false 00:24:46.966 } 00:24:46.966 }, 00:24:46.966 { 00:24:46.966 "method": "sock_impl_set_options", 00:24:46.966 "params": { 00:24:46.966 "impl_name": "posix", 00:24:46.966 "recv_buf_size": 2097152, 00:24:46.966 "send_buf_size": 2097152, 00:24:46.966 "enable_recv_pipe": true, 00:24:46.966 "enable_quickack": false, 00:24:46.966 "enable_placement_id": 0, 00:24:46.966 "enable_zerocopy_send_server": true, 00:24:46.966 "enable_zerocopy_send_client": false, 00:24:46.966 "zerocopy_threshold": 0, 00:24:46.966 "tls_version": 0, 00:24:46.966 "enable_ktls": false 00:24:46.966 } 00:24:46.966 } 00:24:46.966 ] 00:24:46.966 }, 00:24:46.966 { 00:24:46.966 "subsystem": "vmd", 00:24:46.966 "config": [] 00:24:46.966 }, 00:24:46.966 { 00:24:46.966 "subsystem": "accel", 00:24:46.966 "config": [ 00:24:46.966 { 00:24:46.966 "method": "accel_set_options", 00:24:46.966 "params": { 00:24:46.966 "small_cache_size": 128, 00:24:46.966 "large_cache_size": 16, 00:24:46.966 "task_count": 2048, 00:24:46.966 "sequence_count": 2048, 00:24:46.966 "buf_count": 2048 00:24:46.966 } 00:24:46.966 } 00:24:46.966 ] 00:24:46.966 }, 00:24:46.966 { 00:24:46.966 "subsystem": "bdev", 00:24:46.966 "config": [ 00:24:46.966 { 00:24:46.966 "method": "bdev_set_options", 00:24:46.966 "params": { 00:24:46.966 "bdev_io_pool_size": 65535, 00:24:46.966 "bdev_io_cache_size": 256, 00:24:46.966 "bdev_auto_examine": true, 00:24:46.966 "iobuf_small_cache_size": 128, 00:24:46.966 "iobuf_large_cache_size": 16 00:24:46.966 } 00:24:46.966 }, 00:24:46.966 { 00:24:46.966 
"method": "bdev_raid_set_options", 00:24:46.966 "params": { 00:24:46.966 "process_window_size_kb": 1024, 00:24:46.966 "process_max_bandwidth_mb_sec": 0 00:24:46.966 } 00:24:46.966 }, 00:24:46.966 { 00:24:46.966 "method": "bdev_iscsi_set_options", 00:24:46.966 "params": { 00:24:46.966 "timeout_sec": 30 00:24:46.966 } 00:24:46.966 }, 00:24:46.966 { 00:24:46.966 "method": "bdev_nvme_set_options", 00:24:46.966 "params": { 00:24:46.966 "action_on_timeout": "none", 00:24:46.966 "timeout_us": 0, 00:24:46.966 "timeout_admin_us": 0, 00:24:46.966 "keep_alive_timeout_ms": 10000, 00:24:46.966 "arbitration_burst": 0, 00:24:46.966 "low_priority_weight": 0, 00:24:46.966 "medium_priority_weight": 0, 00:24:46.966 "high_priority_weight": 0, 00:24:46.966 "nvme_adminq_poll_period_us": 10000, 00:24:46.966 "nvme_ioq_poll_period_us": 0, 00:24:46.966 "io_queue_requests": 0, 00:24:46.966 "delay_cmd_submit": true, 00:24:46.966 "transport_retry_count": 4, 00:24:46.966 "bdev_retry_count": 3, 00:24:46.966 "transport_ack_timeout": 0, 00:24:46.966 "ctrlr_loss_timeout_sec": 0, 00:24:46.966 "reconnect_delay_sec": 0, 00:24:46.966 "fast_io_fail_timeout_sec": 0, 00:24:46.966 "disable_auto_failback": false, 00:24:46.966 "generate_uuids": false, 00:24:46.966 "transport_tos": 0, 00:24:46.966 "nvme_error_stat": false, 00:24:46.966 "rdma_srq_size": 0, 00:24:46.966 "io_path_stat": false, 00:24:46.966 "allow_accel_sequence": false, 00:24:46.966 "rdma_max_cq_size": 0, 00:24:46.966 "rdma_cm_event_timeout_ms": 0, 00:24:46.966 "dhchap_digests": [ 00:24:46.966 "sha256", 00:24:46.966 "sha384", 00:24:46.966 "sha512" 00:24:46.966 ], 00:24:46.966 "dhchap_dhgroups": [ 00:24:46.966 "null", 00:24:46.966 "ffdhe2048", 00:24:46.966 "ffdhe3072", 00:24:46.966 "ffdhe4096", 00:24:46.966 "ffdhe6144", 00:24:46.966 "ffdhe8192" 00:24:46.966 ] 00:24:46.966 } 00:24:46.966 }, 00:24:46.966 { 00:24:46.966 "method": "bdev_nvme_set_hotplug", 00:24:46.966 "params": { 00:24:46.966 "period_us": 100000, 00:24:46.966 "enable": false 
00:24:46.966 } 00:24:46.966 }, 00:24:46.966 { 00:24:46.966 "method": "bdev_malloc_create", 00:24:46.966 "params": { 00:24:46.966 "name": "malloc0", 00:24:46.966 "num_blocks": 8192, 00:24:46.966 "block_size": 4096, 00:24:46.966 "physical_block_size": 4096, 00:24:46.966 "uuid": "665f2e2c-1a84-49c5-9df3-958e4e15d3b3", 00:24:46.966 "optimal_io_boundary": 0, 00:24:46.966 "md_size": 0, 00:24:46.966 "dif_type": 0, 00:24:46.966 "dif_is_head_of_md": false, 00:24:46.966 "dif_pi_format": 0 00:24:46.966 } 00:24:46.966 }, 00:24:46.966 { 00:24:46.966 "method": "bdev_wait_for_examine" 00:24:46.966 } 00:24:46.966 ] 00:24:46.966 }, 00:24:46.966 { 00:24:46.966 "subsystem": "nbd", 00:24:46.966 "config": [] 00:24:46.966 }, 00:24:46.966 { 00:24:46.966 "subsystem": "scheduler", 00:24:46.966 "config": [ 00:24:46.966 { 00:24:46.966 "method": "framework_set_scheduler", 00:24:46.966 "params": { 00:24:46.966 "name": "static" 00:24:46.966 } 00:24:46.966 } 00:24:46.966 ] 00:24:46.966 }, 00:24:46.966 { 00:24:46.966 "subsystem": "nvmf", 00:24:46.966 "config": [ 00:24:46.966 { 00:24:46.966 "method": "nvmf_set_config", 00:24:46.966 "params": { 00:24:46.966 "discovery_filter": "match_any", 00:24:46.966 "admin_cmd_passthru": { 00:24:46.966 "identify_ctrlr": false 00:24:46.966 }, 00:24:46.966 "dhchap_digests": [ 00:24:46.966 "sha256", 00:24:46.966 "sha384", 00:24:46.966 "sha512" 00:24:46.966 ], 00:24:46.966 "dhchap_dhgroups": [ 00:24:46.966 "null", 00:24:46.966 "ffdhe2048", 00:24:46.966 "ffdhe3072", 00:24:46.966 "ffdhe4096", 00:24:46.966 "ffdhe6144", 00:24:46.966 "ffdhe8192" 00:24:46.966 ] 00:24:46.966 } 00:24:46.966 }, 00:24:46.966 { 00:24:46.966 "method": "nvmf_set_max_subsystems", 00:24:46.966 "params": { 00:24:46.966 "max_subsystems": 1024 00:24:46.966 } 00:24:46.966 }, 00:24:46.966 { 00:24:46.966 "method": "nvmf_set_crdt", 00:24:46.966 "params": { 00:24:46.966 "crdt1": 0, 00:24:46.966 "crdt2": 0, 00:24:46.966 "crdt3": 0 00:24:46.966 } 00:24:46.966 }, 00:24:46.966 { 00:24:46.966 "method": 
"nvmf_create_transport", 00:24:46.966 "params": { 00:24:46.966 "trtype": "TCP", 00:24:46.966 "max_queue_depth": 128, 00:24:46.966 "max_io_qpairs_per_ctrlr": 127, 00:24:46.966 "in_capsule_data_size": 4096, 00:24:46.966 "max_io_size": 131072, 00:24:46.966 "io_unit_size": 131072, 00:24:46.966 "max_aq_depth": 128, 00:24:46.966 "num_shared_buffers": 511, 00:24:46.966 "buf_cache_size": 4294967295, 00:24:46.966 "dif_insert_or_strip": false, 00:24:46.966 "zcopy": false, 00:24:46.966 "c2h_success": false, 00:24:46.966 "sock_priority": 0, 00:24:46.966 "abort_timeout_sec": 1, 00:24:46.966 "ack_timeout": 0, 00:24:46.966 "data_wr_pool_size": 0 00:24:46.966 } 00:24:46.966 }, 00:24:46.966 { 00:24:46.966 "method": "nvmf_create_subsystem", 00:24:46.966 "params": { 00:24:46.966 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:46.966 "allow_any_host": false, 00:24:46.966 "serial_number": "00000000000000000000", 00:24:46.966 "model_number": "SPDK bdev Controller", 00:24:46.966 "max_namespaces": 32, 00:24:46.966 "min_cntlid": 1, 00:24:46.966 "max_cntlid": 65519, 00:24:46.966 "ana_reporting": false 00:24:46.966 } 00:24:46.966 }, 00:24:46.966 { 00:24:46.966 "method": "nvmf_subsystem_add_host", 00:24:46.966 "params": { 00:24:46.966 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:46.966 "host": "nqn.2016-06.io.spdk:host1", 00:24:46.966 "psk": "key0" 00:24:46.966 } 00:24:46.966 }, 00:24:46.966 { 00:24:46.966 "method": "nvmf_subsystem_add_ns", 00:24:46.966 "params": { 00:24:46.966 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:46.966 "namespace": { 00:24:46.966 "nsid": 1, 00:24:46.966 "bdev_name": "malloc0", 00:24:46.966 "nguid": "665F2E2C1A8449C59DF3958E4E15D3B3", 00:24:46.966 "uuid": "665f2e2c-1a84-49c5-9df3-958e4e15d3b3", 00:24:46.966 "no_auto_visible": false 00:24:46.966 } 00:24:46.966 } 00:24:46.966 }, 00:24:46.966 { 00:24:46.966 "method": "nvmf_subsystem_add_listener", 00:24:46.966 "params": { 00:24:46.966 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:46.966 "listen_address": { 00:24:46.966 "trtype": 
"TCP", 00:24:46.966 "adrfam": "IPv4", 00:24:46.966 "traddr": "10.0.0.2", 00:24:46.966 "trsvcid": "4420" 00:24:46.966 }, 00:24:46.966 "secure_channel": false, 00:24:46.966 "sock_impl": "ssl" 00:24:46.966 } 00:24:46.966 } 00:24:46.966 ] 00:24:46.966 } 00:24:46.966 ] 00:24:46.966 }' 00:24:46.966 15:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=3191162 00:24:46.966 15:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 3191162 00:24:46.966 15:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:46.966 15:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3191162 ']' 00:24:46.966 15:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:46.967 15:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:46.967 15:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:46.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:46.967 15:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:46.967 15:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:47.228 [2024-10-01 15:42:26.437113] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
00:24:47.228 [2024-10-01 15:42:26.437172] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:47.228 [2024-10-01 15:42:26.473817] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:47.228 [2024-10-01 15:42:26.519107] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.228 [2024-10-01 15:42:26.547556] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:47.228 [2024-10-01 15:42:26.547591] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:47.228 [2024-10-01 15:42:26.547597] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:47.229 [2024-10-01 15:42:26.547602] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:47.229 [2024-10-01 15:42:26.547606] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:47.229 [2024-10-01 15:42:26.547648] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:47.490 [2024-10-01 15:42:26.743632] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:47.490 [2024-10-01 15:42:26.775648] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:47.490 [2024-10-01 15:42:26.775854] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:48.062 15:42:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:48.062 15:42:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:48.062 15:42:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:48.062 15:42:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:48.062 15:42:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:48.062 15:42:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:48.062 15:42:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=3191243 00:24:48.063 15:42:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 3191243 /var/tmp/bdevperf.sock 00:24:48.063 15:42:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3191243 ']' 00:24:48.063 15:42:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:48.063 15:42:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:48.063 15:42:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:24:48.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:48.063 15:42:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:24:48.063 15:42:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:48.063 15:42:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:48.063 15:42:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:24:48.063 "subsystems": [ 00:24:48.063 { 00:24:48.063 "subsystem": "keyring", 00:24:48.063 "config": [ 00:24:48.063 { 00:24:48.063 "method": "keyring_file_add_key", 00:24:48.063 "params": { 00:24:48.063 "name": "key0", 00:24:48.063 "path": "/tmp/tmp.ULqIJsJpZD" 00:24:48.063 } 00:24:48.063 } 00:24:48.063 ] 00:24:48.063 }, 00:24:48.063 { 00:24:48.063 "subsystem": "iobuf", 00:24:48.063 "config": [ 00:24:48.063 { 00:24:48.063 "method": "iobuf_set_options", 00:24:48.063 "params": { 00:24:48.063 "small_pool_count": 8192, 00:24:48.063 "large_pool_count": 1024, 00:24:48.063 "small_bufsize": 8192, 00:24:48.063 "large_bufsize": 135168 00:24:48.063 } 00:24:48.063 } 00:24:48.063 ] 00:24:48.063 }, 00:24:48.063 { 00:24:48.063 "subsystem": "sock", 00:24:48.063 "config": [ 00:24:48.063 { 00:24:48.063 "method": "sock_set_default_impl", 00:24:48.063 "params": { 00:24:48.063 "impl_name": "posix" 00:24:48.063 } 00:24:48.063 }, 00:24:48.063 { 00:24:48.063 "method": "sock_impl_set_options", 00:24:48.063 "params": { 00:24:48.063 "impl_name": "ssl", 00:24:48.063 "recv_buf_size": 4096, 00:24:48.063 "send_buf_size": 4096, 00:24:48.063 "enable_recv_pipe": true, 00:24:48.063 "enable_quickack": false, 00:24:48.063 "enable_placement_id": 0, 00:24:48.063 "enable_zerocopy_send_server": true, 00:24:48.063 "enable_zerocopy_send_client": false, 00:24:48.063 
"zerocopy_threshold": 0, 00:24:48.063 "tls_version": 0, 00:24:48.063 "enable_ktls": false 00:24:48.063 } 00:24:48.063 }, 00:24:48.063 { 00:24:48.063 "method": "sock_impl_set_options", 00:24:48.063 "params": { 00:24:48.063 "impl_name": "posix", 00:24:48.063 "recv_buf_size": 2097152, 00:24:48.063 "send_buf_size": 2097152, 00:24:48.063 "enable_recv_pipe": true, 00:24:48.063 "enable_quickack": false, 00:24:48.063 "enable_placement_id": 0, 00:24:48.063 "enable_zerocopy_send_server": true, 00:24:48.063 "enable_zerocopy_send_client": false, 00:24:48.063 "zerocopy_threshold": 0, 00:24:48.063 "tls_version": 0, 00:24:48.063 "enable_ktls": false 00:24:48.063 } 00:24:48.063 } 00:24:48.063 ] 00:24:48.063 }, 00:24:48.063 { 00:24:48.063 "subsystem": "vmd", 00:24:48.063 "config": [] 00:24:48.063 }, 00:24:48.063 { 00:24:48.063 "subsystem": "accel", 00:24:48.063 "config": [ 00:24:48.063 { 00:24:48.063 "method": "accel_set_options", 00:24:48.063 "params": { 00:24:48.063 "small_cache_size": 128, 00:24:48.063 "large_cache_size": 16, 00:24:48.063 "task_count": 2048, 00:24:48.063 "sequence_count": 2048, 00:24:48.063 "buf_count": 2048 00:24:48.063 } 00:24:48.063 } 00:24:48.063 ] 00:24:48.063 }, 00:24:48.063 { 00:24:48.063 "subsystem": "bdev", 00:24:48.063 "config": [ 00:24:48.063 { 00:24:48.063 "method": "bdev_set_options", 00:24:48.063 "params": { 00:24:48.063 "bdev_io_pool_size": 65535, 00:24:48.063 "bdev_io_cache_size": 256, 00:24:48.063 "bdev_auto_examine": true, 00:24:48.063 "iobuf_small_cache_size": 128, 00:24:48.063 "iobuf_large_cache_size": 16 00:24:48.063 } 00:24:48.063 }, 00:24:48.063 { 00:24:48.063 "method": "bdev_raid_set_options", 00:24:48.063 "params": { 00:24:48.063 "process_window_size_kb": 1024, 00:24:48.063 "process_max_bandwidth_mb_sec": 0 00:24:48.063 } 00:24:48.063 }, 00:24:48.063 { 00:24:48.063 "method": "bdev_iscsi_set_options", 00:24:48.063 "params": { 00:24:48.063 "timeout_sec": 30 00:24:48.063 } 00:24:48.063 }, 00:24:48.063 { 00:24:48.063 "method": 
"bdev_nvme_set_options", 00:24:48.063 "params": { 00:24:48.063 "action_on_timeout": "none", 00:24:48.063 "timeout_us": 0, 00:24:48.063 "timeout_admin_us": 0, 00:24:48.063 "keep_alive_timeout_ms": 10000, 00:24:48.063 "arbitration_burst": 0, 00:24:48.063 "low_priority_weight": 0, 00:24:48.063 "medium_priority_weight": 0, 00:24:48.063 "high_priority_weight": 0, 00:24:48.063 "nvme_adminq_poll_period_us": 10000, 00:24:48.063 "nvme_ioq_poll_period_us": 0, 00:24:48.063 "io_queue_requests": 512, 00:24:48.063 "delay_cmd_submit": true, 00:24:48.063 "transport_retry_count": 4, 00:24:48.063 "bdev_retry_count": 3, 00:24:48.063 "transport_ack_timeout": 0, 00:24:48.063 "ctrlr_loss_timeout_sec": 0, 00:24:48.063 "reconnect_delay_sec": 0, 00:24:48.063 "fast_io_fail_timeout_sec": 0, 00:24:48.063 "disable_auto_failback": false, 00:24:48.063 "generate_uuids": false, 00:24:48.063 "transport_tos": 0, 00:24:48.063 "nvme_error_stat": false, 00:24:48.063 "rdma_srq_size": 0, 00:24:48.063 "io_path_stat": false, 00:24:48.063 "allow_accel_sequence": false, 00:24:48.063 "rdma_max_cq_size": 0, 00:24:48.063 "rdma_cm_event_timeout_ms": 0, 00:24:48.063 "dhchap_digests": [ 00:24:48.063 "sha256", 00:24:48.063 "sha384", 00:24:48.063 "sha512" 00:24:48.064 ], 00:24:48.064 "dhchap_dhgroups": [ 00:24:48.064 "null", 00:24:48.064 "ffdhe2048", 00:24:48.064 "ffdhe3072", 00:24:48.064 "ffdhe4096", 00:24:48.064 "ffdhe6144", 00:24:48.064 "ffdhe8192" 00:24:48.064 ] 00:24:48.064 } 00:24:48.064 }, 00:24:48.064 { 00:24:48.064 "method": "bdev_nvme_attach_controller", 00:24:48.064 "params": { 00:24:48.064 "name": "nvme0", 00:24:48.064 "trtype": "TCP", 00:24:48.064 "adrfam": "IPv4", 00:24:48.064 "traddr": "10.0.0.2", 00:24:48.064 "trsvcid": "4420", 00:24:48.064 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:48.064 "prchk_reftag": false, 00:24:48.064 "prchk_guard": false, 00:24:48.064 "ctrlr_loss_timeout_sec": 0, 00:24:48.064 "reconnect_delay_sec": 0, 00:24:48.064 "fast_io_fail_timeout_sec": 0, 00:24:48.064 "psk": "key0", 
00:24:48.064 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:48.064 "hdgst": false, 00:24:48.064 "ddgst": false 00:24:48.064 } 00:24:48.064 }, 00:24:48.064 { 00:24:48.064 "method": "bdev_nvme_set_hotplug", 00:24:48.064 "params": { 00:24:48.064 "period_us": 100000, 00:24:48.064 "enable": false 00:24:48.064 } 00:24:48.064 }, 00:24:48.064 { 00:24:48.064 "method": "bdev_enable_histogram", 00:24:48.064 "params": { 00:24:48.064 "name": "nvme0n1", 00:24:48.064 "enable": true 00:24:48.064 } 00:24:48.064 }, 00:24:48.064 { 00:24:48.064 "method": "bdev_wait_for_examine" 00:24:48.064 } 00:24:48.064 ] 00:24:48.064 }, 00:24:48.064 { 00:24:48.064 "subsystem": "nbd", 00:24:48.064 "config": [] 00:24:48.064 } 00:24:48.064 ] 00:24:48.064 }' 00:24:48.064 [2024-10-01 15:42:27.314672] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:24:48.064 [2024-10-01 15:42:27.314724] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3191243 ] 00:24:48.064 [2024-10-01 15:42:27.345179] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:24:48.064 [2024-10-01 15:42:27.392702] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.064 [2024-10-01 15:42:27.421331] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:48.326 [2024-10-01 15:42:27.550994] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:48.897 15:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:48.897 15:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:48.897 15:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:48.897 15:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:24:48.897 15:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.897 15:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:49.158 Running I/O for 1 seconds... 
00:24:50.099 5339.00 IOPS, 20.86 MiB/s 00:24:50.099 Latency(us) 00:24:50.099 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:50.099 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:50.099 Verification LBA range: start 0x0 length 0x2000 00:24:50.099 nvme0n1 : 1.03 5322.95 20.79 0.00 0.00 23827.44 4560.21 26542.08 00:24:50.099 =================================================================================================================== 00:24:50.099 Total : 5322.95 20.79 0.00 0.00 23827.44 4560.21 26542.08 00:24:50.099 { 00:24:50.099 "results": [ 00:24:50.099 { 00:24:50.099 "job": "nvme0n1", 00:24:50.099 "core_mask": "0x2", 00:24:50.099 "workload": "verify", 00:24:50.099 "status": "finished", 00:24:50.099 "verify_range": { 00:24:50.099 "start": 0, 00:24:50.099 "length": 8192 00:24:50.099 }, 00:24:50.099 "queue_depth": 128, 00:24:50.099 "io_size": 4096, 00:24:50.099 "runtime": 1.027063, 00:24:50.099 "iops": 5322.9451357901125, 00:24:50.099 "mibps": 20.792754436680127, 00:24:50.099 "io_failed": 0, 00:24:50.099 "io_timeout": 0, 00:24:50.099 "avg_latency_us": 23827.44170233522, 00:24:50.099 "min_latency_us": 4560.213333333333, 00:24:50.099 "max_latency_us": 26542.08 00:24:50.099 } 00:24:50.099 ], 00:24:50.099 "core_count": 1 00:24:50.099 } 00:24:50.099 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:24:50.099 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:24:50.099 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:50.099 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:24:50.099 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:24:50.099 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:24:50.099 15:42:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:50.099 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:24:50.099 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:24:50.099 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:24:50.099 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:50.099 nvmf_trace.0 00:24:50.099 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:24:50.099 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3191243 00:24:50.099 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3191243 ']' 00:24:50.099 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3191243 00:24:50.099 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:50.099 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:50.099 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3191243 00:24:50.362 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:50.362 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:50.362 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3191243' 00:24:50.362 killing process with pid 3191243 00:24:50.362 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@969 -- # kill 3191243 00:24:50.362 Received shutdown signal, test time was about 1.000000 seconds 00:24:50.362 00:24:50.362 Latency(us) 00:24:50.362 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:50.362 =================================================================================================================== 00:24:50.362 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:50.362 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3191243 00:24:50.362 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:50.362 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # nvmfcleanup 00:24:50.362 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:24:50.362 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:50.362 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:24:50.362 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:50.362 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:50.362 rmmod nvme_tcp 00:24:50.362 rmmod nvme_fabrics 00:24:50.362 rmmod nvme_keyring 00:24:50.362 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:50.362 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:24:50.362 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:24:50.362 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@513 -- # '[' -n 3191162 ']' 00:24:50.362 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # killprocess 3191162 00:24:50.362 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3191162 ']' 00:24:50.362 15:42:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3191162 00:24:50.362 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:50.362 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:50.362 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3191162 00:24:50.623 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:50.623 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:50.623 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3191162' 00:24:50.623 killing process with pid 3191162 00:24:50.623 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3191162 00:24:50.623 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3191162 00:24:50.623 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:24:50.623 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:24:50.623 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:24:50.623 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:24:50.623 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-save 00:24:50.623 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:24:50.623 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-restore 00:24:50.623 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:50.623 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:24:50.623 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.623 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:50.623 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.169 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:53.169 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.WomKZ6RnQp /tmp/tmp.J2Zpb2Eto1 /tmp/tmp.ULqIJsJpZD 00:24:53.169 00:24:53.169 real 1m27.082s 00:24:53.169 user 2m15.714s 00:24:53.169 sys 0m27.810s 00:24:53.169 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:53.169 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:53.169 ************************************ 00:24:53.169 END TEST nvmf_tls 00:24:53.169 ************************************ 00:24:53.169 15:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:53.169 15:42:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:53.169 15:42:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:53.169 15:42:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:53.169 ************************************ 00:24:53.169 START TEST nvmf_fips 00:24:53.169 ************************************ 00:24:53.169 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:53.169 * Looking for test storage... 
00:24:53.169 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:53.169 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:53.169 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lcov --version 00:24:53.169 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:53.169 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:53.169 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:53.169 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:53.169 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:53.169 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:53.169 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:53.169 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:53.169 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:53.169 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:24:53.169 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:24:53.169 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:24:53.169 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:53.169 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:53.169 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:53.169 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:53.169 
15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:53.169 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:53.169 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:53.169 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:53.169 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:53.169 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:53.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.170 --rc genhtml_branch_coverage=1 00:24:53.170 --rc genhtml_function_coverage=1 00:24:53.170 --rc genhtml_legend=1 00:24:53.170 --rc geninfo_all_blocks=1 00:24:53.170 --rc 
geninfo_unexecuted_blocks=1 00:24:53.170 00:24:53.170 ' 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:53.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.170 --rc genhtml_branch_coverage=1 00:24:53.170 --rc genhtml_function_coverage=1 00:24:53.170 --rc genhtml_legend=1 00:24:53.170 --rc geninfo_all_blocks=1 00:24:53.170 --rc geninfo_unexecuted_blocks=1 00:24:53.170 00:24:53.170 ' 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:53.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.170 --rc genhtml_branch_coverage=1 00:24:53.170 --rc genhtml_function_coverage=1 00:24:53.170 --rc genhtml_legend=1 00:24:53.170 --rc geninfo_all_blocks=1 00:24:53.170 --rc geninfo_unexecuted_blocks=1 00:24:53.170 00:24:53.170 ' 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:53.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.170 --rc genhtml_branch_coverage=1 00:24:53.170 --rc genhtml_function_coverage=1 00:24:53.170 --rc genhtml_legend=1 00:24:53.170 --rc geninfo_all_blocks=1 00:24:53.170 --rc geninfo_unexecuted_blocks=1 00:24:53.170 00:24:53.170 ' 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:53.170 15:42:32 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:53.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:53.170 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:53.171 15:42:32 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # type -P openssl 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:24:53.171 Error setting digest 00:24:53.171 40021231717F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:24:53.171 40021231717F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # prepare_net_devs 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@434 -- # local -g is_hw=no 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # remove_spdk_ns 00:24:53.171 15:42:32 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:24:53.171 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:01.345 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:01.345 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:25:01.345 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:01.345 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:01.345 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:01.345 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:01.345 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:01.345 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:25:01.345 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:01.345 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:25:01.345 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:25:01.345 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:25:01.345 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:25:01.345 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:25:01.345 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:25:01.345 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:25:01.346 15:42:39 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:01.346 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:01.346 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:01.346 15:42:39 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:01.346 Found net devices under 0000:31:00.0: cvl_0_0 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:01.346 
15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:01.346 Found net devices under 0000:31:00.1: cvl_0_1 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # is_hw=yes 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:01.346 15:42:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:01.346 15:42:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:01.346 15:42:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:01.346 15:42:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:01.346 15:42:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:01.346 15:42:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:01.346 15:42:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:01.346 15:42:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:01.346 15:42:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:01.346 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:01.346 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms 00:25:01.346 00:25:01.346 --- 10.0.0.2 ping statistics --- 00:25:01.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:01.346 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms 00:25:01.346 15:42:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:01.346 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:01.346 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:25:01.346 00:25:01.346 --- 10.0.0.1 ping statistics --- 00:25:01.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:01.346 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:25:01.346 15:42:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:01.346 15:42:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # return 0 00:25:01.346 15:42:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:25:01.346 15:42:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:01.346 15:42:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:25:01.346 15:42:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:25:01.346 15:42:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:01.346 15:42:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:25:01.346 15:42:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:25:01.346 15:42:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:25:01.346 15:42:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:25:01.346 15:42:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:01.346 15:42:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:01.346 15:42:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # nvmfpid=3196208 00:25:01.346 15:42:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # waitforlisten 3196208 00:25:01.346 15:42:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:01.346 15:42:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 3196208 ']' 00:25:01.346 15:42:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:01.346 15:42:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:01.346 15:42:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:01.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:01.347 15:42:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:01.347 15:42:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:01.347 [2024-10-01 15:42:40.311342] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
00:25:01.347 [2024-10-01 15:42:40.311425] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:01.347 [2024-10-01 15:42:40.353148] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:01.347 [2024-10-01 15:42:40.401688] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:01.347 [2024-10-01 15:42:40.447348] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:01.347 [2024-10-01 15:42:40.447403] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:01.347 [2024-10-01 15:42:40.447414] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:01.347 [2024-10-01 15:42:40.447423] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:01.347 [2024-10-01 15:42:40.447431] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:01.347 [2024-10-01 15:42:40.447459] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:01.920 15:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:01.920 15:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:25:01.920 15:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:25:01.920 15:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:01.920 15:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:01.920 15:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:01.920 15:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:25:01.920 15:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:01.920 15:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:25:01.920 15:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.oUT 00:25:01.920 15:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:01.920 15:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.oUT 00:25:01.920 15:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.oUT 00:25:01.920 15:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.oUT 00:25:01.920 15:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:01.920 [2024-10-01 15:42:41.326771] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:01.920 [2024-10-01 15:42:41.342771] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:01.920 [2024-10-01 15:42:41.343138] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:02.181 malloc0 00:25:02.181 15:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:02.181 15:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=3196323 00:25:02.181 15:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 3196323 /var/tmp/bdevperf.sock 00:25:02.181 15:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:02.181 15:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 3196323 ']' 00:25:02.181 15:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:02.181 15:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:02.181 15:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:02.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:02.181 15:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:02.181 15:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:02.181 [2024-10-01 15:42:41.489960] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
00:25:02.181 [2024-10-01 15:42:41.490035] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3196323 ] 00:25:02.181 [2024-10-01 15:42:41.525740] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:02.181 [2024-10-01 15:42:41.573815] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.181 [2024-10-01 15:42:41.622789] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:25:03.126 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:03.126 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:25:03.126 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.oUT 00:25:03.126 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:03.386 [2024-10-01 15:42:42.630525] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:03.386 TLSTESTn1 00:25:03.386 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:03.386 Running I/O for 10 seconds... 
00:25:13.751 5801.00 IOPS, 22.66 MiB/s 5867.00 IOPS, 22.92 MiB/s 5797.00 IOPS, 22.64 MiB/s 5860.25 IOPS, 22.89 MiB/s 5986.00 IOPS, 23.38 MiB/s 6051.83 IOPS, 23.64 MiB/s 5994.00 IOPS, 23.41 MiB/s 5924.38 IOPS, 23.14 MiB/s 5945.44 IOPS, 23.22 MiB/s 5945.80 IOPS, 23.23 MiB/s 00:25:13.751 Latency(us) 00:25:13.751 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:13.751 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:13.751 Verification LBA range: start 0x0 length 0x2000 00:25:13.751 TLSTESTn1 : 10.01 5949.71 23.24 0.00 0.00 21480.88 5352.11 31238.83 00:25:13.751 =================================================================================================================== 00:25:13.751 Total : 5949.71 23.24 0.00 0.00 21480.88 5352.11 31238.83 00:25:13.751 { 00:25:13.751 "results": [ 00:25:13.751 { 00:25:13.751 "job": "TLSTESTn1", 00:25:13.751 "core_mask": "0x4", 00:25:13.751 "workload": "verify", 00:25:13.751 "status": "finished", 00:25:13.751 "verify_range": { 00:25:13.751 "start": 0, 00:25:13.751 "length": 8192 00:25:13.751 }, 00:25:13.751 "queue_depth": 128, 00:25:13.751 "io_size": 4096, 00:25:13.751 "runtime": 10.014599, 00:25:13.751 "iops": 5949.714012513132, 00:25:13.751 "mibps": 23.24107036137942, 00:25:13.751 "io_failed": 0, 00:25:13.751 "io_timeout": 0, 00:25:13.751 "avg_latency_us": 21480.880257787325, 00:25:13.751 "min_latency_us": 5352.106666666667, 00:25:13.751 "max_latency_us": 31238.826666666668 00:25:13.751 } 00:25:13.751 ], 00:25:13.751 "core_count": 1 00:25:13.751 } 00:25:13.751 15:42:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:25:13.751 15:42:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:25:13.751 15:42:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:25:13.751 15:42:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:25:13.751 15:42:52 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:25:13.751 15:42:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:13.751 15:42:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:25:13.752 15:42:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:25:13.752 15:42:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:25:13.752 15:42:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:13.752 nvmf_trace.0 00:25:13.752 15:42:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:25:13.752 15:42:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3196323 00:25:13.752 15:42:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 3196323 ']' 00:25:13.752 15:42:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 3196323 00:25:13.752 15:42:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:25:13.752 15:42:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:13.752 15:42:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3196323 00:25:13.752 15:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:25:13.752 15:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:25:13.752 15:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
3196323' 00:25:13.752 killing process with pid 3196323 00:25:13.752 15:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 3196323 00:25:13.752 Received shutdown signal, test time was about 10.000000 seconds 00:25:13.752 00:25:13.752 Latency(us) 00:25:13.752 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:13.752 =================================================================================================================== 00:25:13.752 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:13.752 15:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 3196323 00:25:13.752 15:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:25:13.752 15:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # nvmfcleanup 00:25:13.752 15:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:25:13.752 15:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:13.752 15:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:25:13.752 15:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:13.752 15:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:13.752 rmmod nvme_tcp 00:25:13.752 rmmod nvme_fabrics 00:25:13.752 rmmod nvme_keyring 00:25:14.012 15:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:14.012 15:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:25:14.012 15:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:25:14.012 15:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@513 -- # '[' -n 3196208 ']' 00:25:14.012 15:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # killprocess 3196208 00:25:14.012 15:42:53 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 3196208 ']' 00:25:14.012 15:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 3196208 00:25:14.012 15:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:25:14.012 15:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:14.012 15:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3196208 00:25:14.012 15:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:14.012 15:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:14.012 15:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3196208' 00:25:14.012 killing process with pid 3196208 00:25:14.012 15:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 3196208 00:25:14.012 15:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 3196208 00:25:14.012 15:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:25:14.012 15:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:25:14.012 15:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:25:14.012 15:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:25:14.012 15:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-save 00:25:14.013 15:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:25:14.013 15:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-restore 00:25:14.013 15:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # 
[[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:14.013 15:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:14.013 15:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:14.013 15:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:14.013 15:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:16.556 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:16.556 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.oUT 00:25:16.556 00:25:16.556 real 0m23.387s 00:25:16.556 user 0m25.014s 00:25:16.556 sys 0m9.651s 00:25:16.556 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:16.556 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:16.556 ************************************ 00:25:16.556 END TEST nvmf_fips 00:25:16.556 ************************************ 00:25:16.556 15:42:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:25:16.556 15:42:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:16.556 15:42:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:16.556 15:42:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:16.556 ************************************ 00:25:16.556 START TEST nvmf_control_msg_list 00:25:16.556 ************************************ 00:25:16.556 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:25:16.556 * Looking for test storage... 00:25:16.556 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:16.556 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:16.556 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lcov --version 00:25:16.556 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:16.556 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:16.556 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:16.556 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:16.556 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:25:16.557 
15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:16.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:16.557 --rc genhtml_branch_coverage=1 00:25:16.557 --rc genhtml_function_coverage=1 00:25:16.557 --rc genhtml_legend=1 00:25:16.557 --rc geninfo_all_blocks=1 00:25:16.557 --rc geninfo_unexecuted_blocks=1 00:25:16.557 00:25:16.557 ' 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:16.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:16.557 --rc genhtml_branch_coverage=1 00:25:16.557 --rc genhtml_function_coverage=1 00:25:16.557 --rc genhtml_legend=1 00:25:16.557 --rc geninfo_all_blocks=1 00:25:16.557 --rc geninfo_unexecuted_blocks=1 00:25:16.557 00:25:16.557 ' 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:16.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:16.557 --rc genhtml_branch_coverage=1 00:25:16.557 --rc genhtml_function_coverage=1 00:25:16.557 --rc genhtml_legend=1 00:25:16.557 --rc geninfo_all_blocks=1 00:25:16.557 --rc geninfo_unexecuted_blocks=1 00:25:16.557 00:25:16.557 ' 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:16.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:16.557 --rc genhtml_branch_coverage=1 00:25:16.557 --rc genhtml_function_coverage=1 00:25:16.557 --rc genhtml_legend=1 00:25:16.557 --rc geninfo_all_blocks=1 00:25:16.557 --rc geninfo_unexecuted_blocks=1 00:25:16.557 00:25:16.557 ' 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:16.557 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 
-- # nvmftestinit 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # prepare_net_devs 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@434 -- # local -g is_hw=no 00:25:16.557 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # remove_spdk_ns 00:25:16.558 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:16.558 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:16.558 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:16.558 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:25:16.558 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:25:16.558 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:25:16.558 15:42:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:24.698 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:24.698 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:25:24.698 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:24.698 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:24.698 15:43:03 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:24.698 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:24.698 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:24.698 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:24.699 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:24.699 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:24.699 15:43:03 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:24.699 Found net devices under 0000:31:00.0: cvl_0_0 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:24.699 Found net devices 
under 0000:31:00.1: cvl_0_1 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # is_hw=yes 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:24.699 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:24.700 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:24.700 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:24.700 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:25:24.700 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:24.700 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:24.700 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:24.700 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:24.700 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:24.700 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:24.700 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:24.700 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:24.700 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:24.700 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:24.700 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:24.700 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:24.700 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:24.700 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:24.700 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.693 ms 00:25:24.700 00:25:24.700 --- 10.0.0.2 ping statistics --- 00:25:24.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:24.700 rtt min/avg/max/mdev = 0.693/0.693/0.693/0.000 ms 00:25:24.700 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:24.700 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:24.700 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:25:24.700 00:25:24.700 --- 10.0.0.1 ping statistics --- 00:25:24.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:24.700 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:25:24.700 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:24.700 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # return 0 00:25:24.700 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:25:24.700 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:24.700 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:25:24.700 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:25:24.700 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:24.700 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:25:24.700 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:25:24.700 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:25:24.700 15:43:03 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:25:24.700 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:24.700 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:24.700 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # nvmfpid=3202999 00:25:24.700 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # waitforlisten 3202999 00:25:24.700 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:24.700 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 3202999 ']' 00:25:24.700 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:24.700 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:24.700 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:24.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:24.700 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:24.700 15:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:24.700 [2024-10-01 15:43:03.590220] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
00:25:24.700 [2024-10-01 15:43:03.590291] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:24.700 [2024-10-01 15:43:03.631498] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:24.700 [2024-10-01 15:43:03.679673] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:24.700 [2024-10-01 15:43:03.725435] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:24.700 [2024-10-01 15:43:03.725487] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:24.700 [2024-10-01 15:43:03.725495] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:24.700 [2024-10-01 15:43:03.725502] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:24.700 [2024-10-01 15:43:03.725508] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:24.700 [2024-10-01 15:43:03.725531] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:24.961 15:43:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:24.961 15:43:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:25:24.961 15:43:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:25:24.961 15:43:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:24.961 15:43:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:25.223 15:43:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:25.223 15:43:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:25.223 15:43:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:25.223 15:43:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:25:25.223 15:43:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.223 15:43:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:25.223 [2024-10-01 15:43:04.456967] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:25.223 15:43:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.223 15:43:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:25:25.223 15:43:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.223 15:43:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:25.223 15:43:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.223 15:43:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:25.223 15:43:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.223 15:43:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:25.223 Malloc0 00:25:25.223 15:43:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.223 15:43:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:25.223 15:43:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.223 15:43:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:25.223 15:43:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.223 15:43:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:25.223 15:43:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.223 15:43:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:25.223 [2024-10-01 15:43:04.521760] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:25.223 15:43:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.223 15:43:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=3203073 00:25:25.223 15:43:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:25.223 15:43:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=3203074 00:25:25.223 15:43:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:25.223 15:43:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=3203075 00:25:25.223 15:43:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 3203073 00:25:25.224 15:43:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:25.224 [2024-10-01 15:43:04.612658] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:25:25.224 [2024-10-01 15:43:04.612930] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:25.224 [2024-10-01 15:43:04.613285] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:26.610 Initializing NVMe Controllers 00:25:26.610 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:26.610 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:25:26.610 Initialization complete. Launching workers. 00:25:26.610 ======================================================== 00:25:26.610 Latency(us) 00:25:26.611 Device Information : IOPS MiB/s Average min max 00:25:26.611 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40929.47 40805.53 41524.76 00:25:26.611 ======================================================== 00:25:26.611 Total : 25.00 0.10 40929.47 40805.53 41524.76 00:25:26.611 00:25:26.611 Initializing NVMe Controllers 00:25:26.611 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:26.611 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:25:26.611 Initialization complete. Launching workers. 
00:25:26.611 ======================================================== 00:25:26.611 Latency(us) 00:25:26.611 Device Information : IOPS MiB/s Average min max 00:25:26.611 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 1411.00 5.51 708.55 225.71 901.55 00:25:26.611 ======================================================== 00:25:26.611 Total : 1411.00 5.51 708.55 225.71 901.55 00:25:26.611 00:25:26.611 Initializing NVMe Controllers 00:25:26.611 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:26.611 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:25:26.611 Initialization complete. Launching workers. 00:25:26.611 ======================================================== 00:25:26.611 Latency(us) 00:25:26.611 Device Information : IOPS MiB/s Average min max 00:25:26.611 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 1391.00 5.43 718.95 298.51 933.54 00:25:26.611 ======================================================== 00:25:26.611 Total : 1391.00 5.43 718.95 298.51 933.54 00:25:26.611 00:25:26.611 15:43:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 3203074 00:25:26.611 15:43:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 3203075 00:25:26.611 15:43:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:26.611 15:43:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:25:26.611 15:43:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # nvmfcleanup 00:25:26.611 15:43:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:25:26.611 15:43:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:26.611 15:43:05 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:25:26.611 15:43:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:26.611 15:43:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:26.611 rmmod nvme_tcp 00:25:26.611 rmmod nvme_fabrics 00:25:26.611 rmmod nvme_keyring 00:25:26.611 15:43:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:26.611 15:43:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:25:26.611 15:43:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:25:26.611 15:43:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@513 -- # '[' -n 3202999 ']' 00:25:26.611 15:43:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # killprocess 3202999 00:25:26.611 15:43:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 3202999 ']' 00:25:26.611 15:43:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 3202999 00:25:26.611 15:43:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:25:26.611 15:43:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:26.611 15:43:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3202999 00:25:26.611 15:43:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:26.611 15:43:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:26.611 15:43:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- 
# echo 'killing process with pid 3202999' 00:25:26.611 killing process with pid 3202999 00:25:26.611 15:43:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 3202999 00:25:26.611 15:43:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@974 -- # wait 3202999 00:25:26.611 15:43:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:25:26.611 15:43:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:25:26.611 15:43:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:25:26.611 15:43:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:25:26.611 15:43:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-save 00:25:26.611 15:43:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:25:26.611 15:43:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-restore 00:25:26.871 15:43:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:26.871 15:43:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:26.871 15:43:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:26.871 15:43:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:26.871 15:43:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:28.785 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:28.785 00:25:28.785 real 0m12.556s 00:25:28.785 user 0m7.804s 
00:25:28.785 sys 0m6.763s 00:25:28.785 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:28.785 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:28.785 ************************************ 00:25:28.785 END TEST nvmf_control_msg_list 00:25:28.785 ************************************ 00:25:28.785 15:43:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:28.785 15:43:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:28.785 15:43:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:28.785 15:43:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:28.785 ************************************ 00:25:28.785 START TEST nvmf_wait_for_buf 00:25:28.785 ************************************ 00:25:28.785 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:29.047 * Looking for test storage... 
00:25:29.047 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:29.047 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:29.047 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lcov --version 00:25:29.047 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:29.047 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:29.047 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:29.047 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:29.047 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:29.047 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:25:29.047 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:25:29.047 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:25:29.047 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:25:29.047 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:25:29.047 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:25:29.047 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:25:29.047 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:29.047 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:25:29.047 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:25:29.047 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:29.047 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:29.047 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:25:29.047 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:25:29.047 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:29.047 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:25:29.047 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:29.047 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:25:29.047 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:25:29.047 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:29.047 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:25:29.047 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:29.047 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:29.047 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:29.047 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:25:29.047 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:29.047 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # 
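Editor's note: the trace above walks through SPDK's `cmp_versions` helper gating lcov options on `lt 1.15 2`. The following is a minimal standalone sketch of that component-wise dotted-version comparison; the function name `lt_version` is illustrative, not the SPDK original, and edge cases (pre-release suffixes, `-`/`:` separators) are omitted.

```shell
# Sketch of a component-wise "less than" check for dotted versions,
# approximating the cmp_versions flow traced above. Missing components
# are treated as 0, mirroring the ver1_l/ver2_l padding logic.
lt_version() {
    local IFS=.
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i len=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < len; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0   # strictly less at this component
        (( x > y )) && return 1   # strictly greater: not less-than
    done
    return 1                      # equal versions are not less-than
}

lt_version 1.15 2 && echo "older"   # matches the lcov 1.15 < 2 gate above
```

The trace resolves the same way: lcov 1.15 compares less than 2, so the legacy `--rc lcov_*` option set is selected.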
export 'LCOV_OPTS= 00:25:29.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:29.047 --rc genhtml_branch_coverage=1 00:25:29.047 --rc genhtml_function_coverage=1 00:25:29.047 --rc genhtml_legend=1 00:25:29.047 --rc geninfo_all_blocks=1 00:25:29.047 --rc geninfo_unexecuted_blocks=1 00:25:29.048 00:25:29.048 ' 00:25:29.048 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:29.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:29.048 --rc genhtml_branch_coverage=1 00:25:29.048 --rc genhtml_function_coverage=1 00:25:29.048 --rc genhtml_legend=1 00:25:29.048 --rc geninfo_all_blocks=1 00:25:29.048 --rc geninfo_unexecuted_blocks=1 00:25:29.048 00:25:29.048 ' 00:25:29.048 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:29.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:29.048 --rc genhtml_branch_coverage=1 00:25:29.048 --rc genhtml_function_coverage=1 00:25:29.048 --rc genhtml_legend=1 00:25:29.048 --rc geninfo_all_blocks=1 00:25:29.048 --rc geninfo_unexecuted_blocks=1 00:25:29.048 00:25:29.048 ' 00:25:29.048 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:29.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:29.048 --rc genhtml_branch_coverage=1 00:25:29.048 --rc genhtml_function_coverage=1 00:25:29.048 --rc genhtml_legend=1 00:25:29.048 --rc geninfo_all_blocks=1 00:25:29.048 --rc geninfo_unexecuted_blocks=1 00:25:29.048 00:25:29.048 ' 00:25:29.048 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:29.048 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:25:29.048 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:25:29.048 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:29.048 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:29.048 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:29.048 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:29.048 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:29.048 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:29.048 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:29.048 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:29.048 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:29.048 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:29.048 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:29.048 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:29.048 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:29.048 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:29.048 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:29.048 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:29.048 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:29.048 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:29.048 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:29.048 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:29.048 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.048 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.048 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.048 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:25:29.048 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.048 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:25:29.048 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:29.048 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:29.048 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:29.048 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:25:29.048 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:29.048 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:29.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:29.048 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:29.048 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:29.048 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:29.048 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:25:29.048 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:25:29.048 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:29.048 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # prepare_net_devs 00:25:29.048 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:25:29.048 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:25:29.048 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:29.048 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:29.048 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:29.048 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:25:29.048 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # 
gather_supported_nvmf_pci_devs 00:25:29.049 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:25:29.049 15:43:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:37.199 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:37.199 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:37.199 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:37.199 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:37.199 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:37.199 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:37.199 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:37.199 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:25:37.199 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:37.199 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:25:37.199 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:25:37.199 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:25:37.199 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:25:37.199 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:25:37.199 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:37.199 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:37.199 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:37.199 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:37.199 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:37.199 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:37.199 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:37.199 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:37.199 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:37.199 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:37.199 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:37.199 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:37.199 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:25:37.199 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:25:37.199 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:25:37.199 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:25:37.199 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 
00:25:37.199 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:25:37.199 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:37.199 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:37.199 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:37.199 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:37.199 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:37.199 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:37.199 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:37.199 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:37.199 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:37.199 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:37.199 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:37.199 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:37.199 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:37.199 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:37.199 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:37.199 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:37.199 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@390 -- # (( 
0 > 0 )) 00:25:37.199 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:25:37.199 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:25:37.199 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:37.199 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:37.200 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:37.200 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:37.200 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:37.200 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:37.200 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:37.200 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:37.200 Found net devices under 0000:31:00.0: cvl_0_0 00:25:37.200 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:37.200 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:37.200 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:37.200 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:37.200 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:37.200 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf 
-- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:37.200 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:37.200 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:37.200 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:37.200 Found net devices under 0000:31:00.1: cvl_0_1 00:25:37.200 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:37.200 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:25:37.200 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # is_hw=yes 00:25:37.200 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:25:37.200 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:25:37.200 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:25:37.200 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:37.200 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:37.200 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:37.200 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:37.200 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:37.200 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:37.200 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:37.200 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:37.200 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:37.200 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:37.200 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:37.200 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:37.200 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:37.200 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:37.200 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:37.200 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:37.200 15:43:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:37.200 15:43:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:37.200 15:43:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:37.200 15:43:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:37.200 15:43:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:37.200 15:43:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:37.200 15:43:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:37.200 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:37.200 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms 00:25:37.200 00:25:37.200 --- 10.0.0.2 ping statistics --- 00:25:37.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:37.200 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms 00:25:37.200 15:43:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:37.200 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:37.200 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.258 ms 00:25:37.200 00:25:37.200 --- 10.0.0.1 ping statistics --- 00:25:37.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:37.200 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:25:37.200 15:43:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:37.200 15:43:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # return 0 00:25:37.200 15:43:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:25:37.200 15:43:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:37.200 15:43:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:25:37.200 15:43:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:25:37.200 15:43:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:37.200 15:43:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # '[' tcp == 
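Editor's note: the `nvmf_tcp_init` sequence above moves one port of the physical NIC into a private network namespace so target and initiator traffic actually traverses the link, then opens TCP 4420 and verifies reachability with `ping`. A condensed sketch of that wiring follows; interface names (`cvl_0_0`/`cvl_0_1`) and addresses are taken from the log, it requires root and the same hardware, and it is a provisioning fragment rather than something runnable standalone.

```shell
# Sketch of the netns wiring performed by nvmf_tcp_init above:
# target side (cvl_0_0, 10.0.0.2) lives in its own namespace,
# initiator side (cvl_0_1, 10.0.0.1) stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Admit NVMe/TCP traffic on the initiator-facing port.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Reachability check in each direction, as in the log.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

The target app is then launched under `ip netns exec cvl_0_0_ns_spdk`, which is why the log prepends `NVMF_TARGET_NS_CMD` to `NVMF_APP`.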
tcp ']' 00:25:37.200 15:43:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:25:37.200 15:43:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:25:37.200 15:43:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:25:37.200 15:43:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:37.200 15:43:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:37.200 15:43:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # nvmfpid=3207705 00:25:37.200 15:43:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # waitforlisten 3207705 00:25:37.200 15:43:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:37.200 15:43:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 3207705 ']' 00:25:37.200 15:43:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:37.200 15:43:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:37.200 15:43:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:37.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:37.200 15:43:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:37.200 15:43:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:37.200 [2024-10-01 15:43:16.282527] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:25:37.200 [2024-10-01 15:43:16.282598] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:37.200 [2024-10-01 15:43:16.324160] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:37.200 [2024-10-01 15:43:16.374195] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:37.200 [2024-10-01 15:43:16.420828] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:37.200 [2024-10-01 15:43:16.420881] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:37.200 [2024-10-01 15:43:16.420890] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:37.200 [2024-10-01 15:43:16.420904] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:37.200 [2024-10-01 15:43:16.420910] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:37.200 [2024-10-01 15:43:16.420939] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:37.772 15:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:37.772 15:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:25:37.772 15:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:25:37.772 15:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:37.772 15:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:37.772 15:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:37.772 15:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:37.772 15:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:37.772 15:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:25:37.772 15:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.772 15:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:37.772 15:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.772 15:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:25:37.772 15:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 
00:25:37.772 15:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:37.772 15:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.772 15:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:25:37.772 15:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.772 15:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:37.772 15:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.772 15:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:37.772 15:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.772 15:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:38.034 Malloc0 00:25:38.034 15:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.034 15:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:25:38.034 15:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.034 15:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:38.034 [2024-10-01 15:43:17.239035] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:38.034 15:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.034 15:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem 
nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:25:38.034 15:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.034 15:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:38.034 15:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.034 15:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:38.034 15:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.034 15:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:38.034 15:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.034 15:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:38.034 15:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.034 15:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:38.034 [2024-10-01 15:43:17.275345] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:38.034 15:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.034 15:43:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:38.034 [2024-10-01 15:43:17.357998] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery 
subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:39.421 Initializing NVMe Controllers 00:25:39.421 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:39.421 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:25:39.421 Initialization complete. Launching workers. 00:25:39.421 ======================================================== 00:25:39.421 Latency(us) 00:25:39.421 Device Information : IOPS MiB/s Average min max 00:25:39.421 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32294.68 8004.35 63853.79 00:25:39.421 ======================================================== 00:25:39.421 Total : 129.00 16.12 32294.68 8004.35 63853.79 00:25:39.421 00:25:39.421 15:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:25:39.421 15:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:25:39.421 15:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.421 15:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:39.421 15:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.421 15:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:25:39.421 15:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:25:39.421 15:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:39.421 15:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # 
nvmftestfini 00:25:39.421 15:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # nvmfcleanup 00:25:39.421 15:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:25:39.421 15:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:39.421 15:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:25:39.421 15:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:39.421 15:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:39.421 rmmod nvme_tcp 00:25:39.421 rmmod nvme_fabrics 00:25:39.421 rmmod nvme_keyring 00:25:39.421 15:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:39.421 15:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:25:39.421 15:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:25:39.421 15:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@513 -- # '[' -n 3207705 ']' 00:25:39.421 15:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # killprocess 3207705 00:25:39.421 15:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 3207705 ']' 00:25:39.421 15:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # kill -0 3207705 00:25:39.421 15:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # uname 00:25:39.421 15:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:39.421 15:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3207705 00:25:39.690 15:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:39.690 15:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:39.690 15:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3207705' 00:25:39.690 killing process with pid 3207705 00:25:39.690 15:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 3207705 00:25:39.690 15:43:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 3207705 00:25:39.690 15:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:25:39.690 15:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:25:39.690 15:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:25:39.690 15:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:25:39.690 15:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-save 00:25:39.690 15:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:25:39.691 15:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-restore 00:25:39.691 15:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:39.691 15:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:39.691 15:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:39.691 15:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:39.691 15:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:42.232 00:25:42.232 real 0m12.912s 00:25:42.232 user 0m5.164s 00:25:42.232 sys 0m6.308s 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:42.232 ************************************ 00:25:42.232 END TEST nvmf_wait_for_buf 00:25:42.232 ************************************ 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:42.232 ************************************ 00:25:42.232 START TEST nvmf_fuzz 00:25:42.232 ************************************ 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:42.232 * Looking for test storage... 
00:25:42.232 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:42.232 
15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:42.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.232 --rc genhtml_branch_coverage=1 00:25:42.232 --rc genhtml_function_coverage=1 00:25:42.232 --rc genhtml_legend=1 00:25:42.232 --rc geninfo_all_blocks=1 00:25:42.232 --rc 
geninfo_unexecuted_blocks=1 00:25:42.232 00:25:42.232 ' 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:42.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.232 --rc genhtml_branch_coverage=1 00:25:42.232 --rc genhtml_function_coverage=1 00:25:42.232 --rc genhtml_legend=1 00:25:42.232 --rc geninfo_all_blocks=1 00:25:42.232 --rc geninfo_unexecuted_blocks=1 00:25:42.232 00:25:42.232 ' 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:42.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.232 --rc genhtml_branch_coverage=1 00:25:42.232 --rc genhtml_function_coverage=1 00:25:42.232 --rc genhtml_legend=1 00:25:42.232 --rc geninfo_all_blocks=1 00:25:42.232 --rc geninfo_unexecuted_blocks=1 00:25:42.232 00:25:42.232 ' 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:42.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.232 --rc genhtml_branch_coverage=1 00:25:42.232 --rc genhtml_function_coverage=1 00:25:42.232 --rc genhtml_legend=1 00:25:42.232 --rc geninfo_all_blocks=1 00:25:42.232 --rc geninfo_unexecuted_blocks=1 00:25:42.232 00:25:42.232 ' 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:42.232 15:43:21 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.232 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.233 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.233 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:25:42.233 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.233 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:25:42.233 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:42.233 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:42.233 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:42.233 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:42.233 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:42.233 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:42.233 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:42.233 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:42.233 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:42.233 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:42.233 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:25:42.233 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:25:42.233 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:42.233 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@472 -- # prepare_net_devs 00:25:42.233 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@434 -- # local -g is_hw=no 00:25:42.233 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@436 -- # remove_spdk_ns 00:25:42.233 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:42.233 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:42.233 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:42.233 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:25:42.233 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:25:42.233 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:25:42.233 15:43:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:50.369 15:43:28 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:50.369 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:50.369 15:43:28 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:50.369 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:50.369 Found net devices under 0000:31:00.0: cvl_0_0 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:50.369 Found net devices under 0000:31:00.1: cvl_0_1 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@428 -- # (( 2 == 0 )) 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # is_hw=yes 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:50.369 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:50.370 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:50.370 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:50.370 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:50.370 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:50.370 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:50.370 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:50.370 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:50.370 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:50.370 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:50.370 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:50.370 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:50.370 15:43:28 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:50.370 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:50.370 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:50.370 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:50.370 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:50.370 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:50.370 15:43:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:50.370 15:43:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:50.370 15:43:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:50.370 15:43:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:50.370 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:50.370 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms 00:25:50.370 00:25:50.370 --- 10.0.0.2 ping statistics --- 00:25:50.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:50.370 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:25:50.370 15:43:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:50.370 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:50.370 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:25:50.370 00:25:50.370 --- 10.0.0.1 ping statistics --- 00:25:50.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:50.370 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:25:50.370 15:43:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:50.370 15:43:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # return 0 00:25:50.370 15:43:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:25:50.370 15:43:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:50.370 15:43:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:25:50.370 15:43:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:25:50.370 15:43:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:50.370 15:43:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:25:50.370 15:43:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:25:50.370 15:43:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=3212546 00:25:50.370 15:43:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:50.370 15:43:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:50.370 15:43:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 3212546 00:25:50.370 15:43:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@831 -- # '[' 
-z 3212546 ']' 00:25:50.370 15:43:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:50.370 15:43:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:50.370 15:43:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:50.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:50.370 15:43:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:50.370 15:43:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:50.630 15:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:50.630 15:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # return 0 00:25:50.630 15:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:50.630 15:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.630 15:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:50.630 15:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.630 15:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:25:50.630 15:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.630 15:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:50.891 Malloc0 00:25:50.891 15:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.891 15:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:50.891 15:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.891 15:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:50.891 15:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.891 15:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:50.892 15:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.892 15:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:50.892 15:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.892 15:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:50.892 15:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.892 15:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:50.892 15:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.892 15:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:25:50.892 15:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:26:23.146 Fuzzing completed. 
Shutting down the fuzz application 00:26:23.146 00:26:23.146 Dumping successful admin opcodes: 00:26:23.146 8, 9, 10, 24, 00:26:23.146 Dumping successful io opcodes: 00:26:23.146 0, 9, 00:26:23.146 NS: 0x200003aeff00 I/O qp, Total commands completed: 1168878, total successful commands: 6879, random_seed: 2801742272 00:26:23.146 NS: 0x200003aeff00 admin qp, Total commands completed: 150015, total successful commands: 1205, random_seed: 1015812480 00:26:23.146 15:44:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:26:23.146 Fuzzing completed. Shutting down the fuzz application 00:26:23.146 00:26:23.146 Dumping successful admin opcodes: 00:26:23.146 24, 00:26:23.146 Dumping successful io opcodes: 00:26:23.146 00:26:23.146 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 760628813 00:26:23.146 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 760705683 00:26:23.146 15:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:23.146 15:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.146 15:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:23.146 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.146 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:26:23.146 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:26:23.146 15:44:02 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@512 -- # nvmfcleanup 00:26:23.146 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:26:23.146 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:23.146 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:26:23.146 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:23.146 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:23.146 rmmod nvme_tcp 00:26:23.146 rmmod nvme_fabrics 00:26:23.146 rmmod nvme_keyring 00:26:23.146 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:23.146 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:26:23.146 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:26:23.146 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@513 -- # '[' -n 3212546 ']' 00:26:23.146 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@514 -- # killprocess 3212546 00:26:23.146 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@950 -- # '[' -z 3212546 ']' 00:26:23.146 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # kill -0 3212546 00:26:23.146 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # uname 00:26:23.146 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:23.146 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3212546 00:26:23.146 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:23.146 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = 
sudo ']' 00:26:23.146 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3212546' 00:26:23.146 killing process with pid 3212546 00:26:23.146 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@969 -- # kill 3212546 00:26:23.146 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@974 -- # wait 3212546 00:26:23.146 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:26:23.146 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:26:23.146 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:26:23.146 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:26:23.146 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # iptables-save 00:26:23.146 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:26:23.146 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # iptables-restore 00:26:23.146 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:23.146 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:23.146 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:23.146 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:23.146 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:25.056 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:25.056 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:26:25.056 00:26:25.056 real 0m43.182s 00:26:25.056 user 0m57.133s 00:26:25.056 sys 0m15.622s 00:26:25.056 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:25.056 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:25.056 ************************************ 00:26:25.056 END TEST nvmf_fuzz 00:26:25.056 ************************************ 00:26:25.056 15:44:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:26:25.056 15:44:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:25.056 15:44:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:25.056 15:44:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:25.056 ************************************ 00:26:25.056 START TEST nvmf_multiconnection 00:26:25.056 ************************************ 00:26:25.056 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:26:25.318 * Looking for test storage... 
00:26:25.318 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:25.318 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:25.318 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # lcov --version 00:26:25.318 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:25.318 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:25.318 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:25.318 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:25.318 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:25.318 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:26:25.318 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:26:25.318 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:26:25.318 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:26:25.318 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:26:25.318 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:26:25.318 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:26:25.318 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:25.318 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:26:25.318 15:44:04 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:26:25.318 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:25.318 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:25.318 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:26:25.318 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:26:25.318 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:25.318 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:26:25.318 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:26:25.318 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:26:25.318 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:26:25.318 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:25.318 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:26:25.318 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:26:25.318 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:25.318 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:25.318 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:26:25.318 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:26:25.318 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:25.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:25.318 --rc genhtml_branch_coverage=1 00:26:25.318 --rc genhtml_function_coverage=1 00:26:25.318 --rc genhtml_legend=1 00:26:25.318 --rc geninfo_all_blocks=1 00:26:25.318 --rc geninfo_unexecuted_blocks=1 00:26:25.318 00:26:25.318 ' 00:26:25.318 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:25.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:25.318 --rc genhtml_branch_coverage=1 00:26:25.318 --rc genhtml_function_coverage=1 00:26:25.318 --rc genhtml_legend=1 00:26:25.318 --rc geninfo_all_blocks=1 00:26:25.318 --rc geninfo_unexecuted_blocks=1 00:26:25.318 00:26:25.318 ' 00:26:25.318 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:25.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:25.318 --rc genhtml_branch_coverage=1 00:26:25.318 --rc genhtml_function_coverage=1 00:26:25.318 --rc genhtml_legend=1 00:26:25.318 --rc geninfo_all_blocks=1 00:26:25.318 --rc geninfo_unexecuted_blocks=1 00:26:25.318 00:26:25.318 ' 00:26:25.318 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:25.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:25.318 --rc genhtml_branch_coverage=1 00:26:25.318 --rc genhtml_function_coverage=1 00:26:25.318 --rc genhtml_legend=1 00:26:25.318 --rc geninfo_all_blocks=1 00:26:25.318 --rc geninfo_unexecuted_blocks=1 00:26:25.318 00:26:25.318 ' 00:26:25.318 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:25.318 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@7 -- # uname -s 00:26:25.318 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:25.318 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:25.318 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:25.318 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:25.318 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:25.318 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:25.318 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:25.318 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:25.318 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:25.318 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:25.318 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:25.318 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:25.318 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:25.318 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:25.318 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:25.318 15:44:04 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:25.318 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:25.318 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:26:25.318 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:25.318 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:25.318 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:25.319 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.319 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.319 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.319 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:26:25.319 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.319 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:26:25.319 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:25.319 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:25.319 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:25.319 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:25.319 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:25.319 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:25.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:25.319 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:25.319 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:25.319 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:25.319 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:26:25.319 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:25.319 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:26:25.319 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:26:25.319 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:26:25.319 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:25.319 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@472 -- # prepare_net_devs 00:26:25.319 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@434 -- # local -g is_hw=no 00:26:25.319 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@436 -- # remove_spdk_ns 00:26:25.319 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:25.319 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:25.319 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:25.319 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:26:25.319 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:26:25.319 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:26:25.319 15:44:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:33.466 15:44:12 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:26:33.466 15:44:12 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:33.466 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:33.466 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ up == up ]] 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:33.466 Found net devices under 0000:31:00.0: cvl_0_0 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ up == up ]] 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # (( 1 == 0 )) 
00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:33.466 Found net devices under 0000:31:00.1: cvl_0_1 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # is_hw=yes 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:33.466 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:33.467 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:33.467 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:33.467 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:33.467 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:33.467 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:33.467 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:33.467 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:33.467 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:33.467 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:33.467 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:33.467 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:33.467 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:33.467 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:33.467 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:33.467 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:33.467 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:33.467 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m 
comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:33.467 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:33.467 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:33.467 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms 00:26:33.467 00:26:33.467 --- 10.0.0.2 ping statistics --- 00:26:33.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:33.467 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:26:33.467 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:33.467 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:33.467 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:26:33.467 00:26:33.467 --- 10.0.0.1 ping statistics --- 00:26:33.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:33.467 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:26:33.467 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:33.467 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # return 0 00:26:33.467 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:26:33.467 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:33.467 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:26:33.467 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:26:33.467 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:33.467 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:26:33.467 15:44:12 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:26:33.467 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:26:33.467 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:26:33.467 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:33.467 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:33.467 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@505 -- # nvmfpid=3222962 00:26:33.467 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@506 -- # waitforlisten 3222962 00:26:33.467 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:33.467 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@831 -- # '[' -z 3222962 ']' 00:26:33.467 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:33.467 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:33.467 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:33.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:33.467 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:33.467 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:33.467 [2024-10-01 15:44:12.485794] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:26:33.467 [2024-10-01 15:44:12.485864] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:33.467 [2024-10-01 15:44:12.527840] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:33.467 [2024-10-01 15:44:12.578232] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:33.467 [2024-10-01 15:44:12.626957] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:33.467 [2024-10-01 15:44:12.627008] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:33.467 [2024-10-01 15:44:12.627016] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:33.467 [2024-10-01 15:44:12.627023] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:33.467 [2024-10-01 15:44:12.627030] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:33.467 [2024-10-01 15:44:12.627192] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:33.467 [2024-10-01 15:44:12.627338] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:26:33.467 [2024-10-01 15:44:12.627460] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:26:33.467 [2024-10-01 15:44:12.627461] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:34.070 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:34.070 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # return 0 00:26:34.070 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:26:34.070 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:34.070 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.070 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:34.070 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:34.070 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.070 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.070 [2024-10-01 15:44:13.366778] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:34.070 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.070 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:26:34.070 15:44:13 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:34.070 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:34.070 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.070 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.070 Malloc1 00:26:34.070 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.070 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:26:34.070 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.070 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.070 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.070 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:34.070 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.070 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.070 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.070 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:34.070 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.070 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.070 [2024-10-01 15:44:13.440679] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:34.070 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.070 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:34.070 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:26:34.070 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.070 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.070 Malloc2 00:26:34.070 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.070 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:26:34.070 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.070 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.070 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.070 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:26:34.070 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.070 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:26:34.070 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.070 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:34.070 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.070 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.070 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.070 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:34.070 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:26:34.070 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.070 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.332 Malloc3 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.332 Malloc4 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.332 
15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.332 Malloc5 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.332 15:44:13 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.332 Malloc6 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # 
for i in $(seq 1 $NVMF_SUBSYS) 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.332 Malloc7 00:26:34.332 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.333 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:26:34.333 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.333 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.594 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.594 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:26:34.594 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.594 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.594 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.594 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:26:34.594 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.594 15:44:13 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.594 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.594 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:34.595 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:26:34.595 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.595 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.595 Malloc8 00:26:34.595 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.595 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:26:34.595 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.595 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.595 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.595 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:26:34.595 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.595 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.595 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.595 15:44:13 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:26:34.595 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.595 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.595 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.595 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:34.595 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:26:34.595 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.595 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.595 Malloc9 00:26:34.595 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.595 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:26:34.595 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.595 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.595 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.595 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:26:34.595 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.595 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.595 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.595 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:26:34.595 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.595 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.595 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.595 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:34.595 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:26:34.595 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.595 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.595 Malloc10 00:26:34.595 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.595 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:26:34.595 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.595 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.595 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.595 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:26:34.595 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.595 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.595 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.595 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:26:34.595 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.595 15:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.595 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.595 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:34.595 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:26:34.595 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.595 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.595 Malloc11 00:26:34.595 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.595 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:26:34.595 
15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.595 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.595 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.595 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:26:34.595 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.595 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.856 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.856 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:26:34.856 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.856 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.856 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.856 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:26:34.856 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:34.856 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:26:36.241 15:44:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:26:36.241 15:44:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:36.241 15:44:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:36.241 15:44:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:36.241 15:44:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:38.151 15:44:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:38.151 15:44:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:38.151 15:44:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:26:38.151 15:44:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:38.151 15:44:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:38.151 15:44:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:38.151 15:44:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:38.151 15:44:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:26:40.062 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:26:40.062 15:44:19 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:40.062 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:40.062 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:40.062 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:41.975 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:41.975 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:41.975 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:26:41.975 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:41.975 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:41.975 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:41.975 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:41.975 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:26:43.359 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:26:43.359 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:43.359 15:44:22 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:43.359 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:43.359 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:45.904 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:45.905 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:45.905 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:26:45.905 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:45.905 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:45.905 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:45.905 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:45.905 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:26:47.286 15:44:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:26:47.286 15:44:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:47.286 15:44:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:47.286 
15:44:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:47.286 15:44:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:49.199 15:44:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:49.199 15:44:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:49.199 15:44:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:26:49.199 15:44:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:49.199 15:44:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:49.199 15:44:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:49.199 15:44:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:49.199 15:44:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:26:51.106 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:26:51.106 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:51.106 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:51.106 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:51.106 15:44:30 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:53.019 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:53.019 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:53.019 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:26:53.019 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:53.019 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:53.019 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:53.019 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:53.020 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:26:54.934 15:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:26:54.934 15:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:54.934 15:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:54.934 15:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:54.934 15:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:56.846 15:44:35 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:56.846 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:56.846 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:26:56.846 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:56.846 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:56.846 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:56.846 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:56.846 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:26:58.230 15:44:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:26:58.230 15:44:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:58.230 15:44:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:58.230 15:44:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:58.230 15:44:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:00.142 15:44:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:00.142 15:44:39 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:00.143 15:44:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:27:00.143 15:44:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:00.143 15:44:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:00.143 15:44:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:00.143 15:44:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:00.143 15:44:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:27:02.056 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:27:02.056 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:02.056 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:02.056 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:02.056 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:03.967 15:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:03.967 15:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:03.967 15:44:43 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:27:03.967 15:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:03.967 15:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:03.967 15:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:03.967 15:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:03.967 15:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:27:05.878 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:27:05.878 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:05.878 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:05.878 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:05.878 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:07.788 15:44:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:08.049 15:44:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:08.049 15:44:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:27:08.050 15:44:47 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:08.050 15:44:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:08.050 15:44:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:08.050 15:44:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:08.050 15:44:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:27:09.959 15:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:27:09.959 15:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:09.959 15:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:09.959 15:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:09.959 15:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:11.868 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:11.868 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:11.868 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:27:11.868 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:11.868 15:44:51 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:11.868 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:11.869 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:11.869 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:27:13.779 15:44:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:27:13.779 15:44:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:13.779 15:44:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:13.779 15:44:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:13.779 15:44:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:15.693 15:44:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:15.694 15:44:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:15.694 15:44:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:27:15.694 15:44:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:15.694 15:44:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:15.694 
15:44:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:15.694 15:44:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:27:15.694 [global] 00:27:15.694 thread=1 00:27:15.694 invalidate=1 00:27:15.694 rw=read 00:27:15.694 time_based=1 00:27:15.694 runtime=10 00:27:15.694 ioengine=libaio 00:27:15.694 direct=1 00:27:15.694 bs=262144 00:27:15.694 iodepth=64 00:27:15.694 norandommap=1 00:27:15.694 numjobs=1 00:27:15.694 00:27:15.694 [job0] 00:27:15.694 filename=/dev/nvme0n1 00:27:15.694 [job1] 00:27:15.694 filename=/dev/nvme10n1 00:27:15.956 [job2] 00:27:15.956 filename=/dev/nvme1n1 00:27:15.956 [job3] 00:27:15.956 filename=/dev/nvme2n1 00:27:15.956 [job4] 00:27:15.956 filename=/dev/nvme3n1 00:27:15.956 [job5] 00:27:15.956 filename=/dev/nvme4n1 00:27:15.956 [job6] 00:27:15.956 filename=/dev/nvme5n1 00:27:15.956 [job7] 00:27:15.956 filename=/dev/nvme6n1 00:27:15.956 [job8] 00:27:15.956 filename=/dev/nvme7n1 00:27:15.956 [job9] 00:27:15.956 filename=/dev/nvme8n1 00:27:15.956 [job10] 00:27:15.956 filename=/dev/nvme9n1 00:27:15.956 Could not set queue depth (nvme0n1) 00:27:15.956 Could not set queue depth (nvme10n1) 00:27:15.956 Could not set queue depth (nvme1n1) 00:27:15.956 Could not set queue depth (nvme2n1) 00:27:15.956 Could not set queue depth (nvme3n1) 00:27:15.956 Could not set queue depth (nvme4n1) 00:27:15.956 Could not set queue depth (nvme5n1) 00:27:15.956 Could not set queue depth (nvme6n1) 00:27:15.956 Could not set queue depth (nvme7n1) 00:27:15.956 Could not set queue depth (nvme8n1) 00:27:15.956 Could not set queue depth (nvme9n1) 00:27:16.531 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:16.531 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, 
ioengine=libaio, iodepth=64 00:27:16.531 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:16.531 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:16.531 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:16.531 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:16.531 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:16.531 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:16.531 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:16.531 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:16.531 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:16.531 fio-3.35 00:27:16.531 Starting 11 threads 00:27:28.757 00:27:28.757 job0: (groupid=0, jobs=1): err= 0: pid=3231577: Tue Oct 1 15:45:06 2024 00:27:28.757 read: IOPS=207, BW=51.8MiB/s (54.3MB/s)(521MiB/10065msec) 00:27:28.757 slat (usec): min=11, max=423658, avg=3789.74, stdev=16530.43 00:27:28.757 clat (msec): min=22, max=676, avg=304.86, stdev=147.11 00:27:28.757 lat (msec): min=23, max=912, avg=308.65, stdev=148.12 00:27:28.757 clat percentiles (msec): 00:27:28.757 | 1.00th=[ 75], 5.00th=[ 104], 10.00th=[ 126], 20.00th=[ 171], 00:27:28.757 | 30.00th=[ 218], 40.00th=[ 247], 50.00th=[ 271], 60.00th=[ 321], 00:27:28.757 | 70.00th=[ 363], 80.00th=[ 443], 90.00th=[ 523], 95.00th=[ 609], 00:27:28.757 | 99.00th=[ 667], 99.50th=[ 667], 99.90th=[ 676], 99.95th=[ 676], 00:27:28.757 | 99.99th=[ 676] 00:27:28.757 bw ( KiB/s): min= 9216, max=131072, 
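The log above records the same four-step RPC sequence for each of the 11 subsystems (`bdev_malloc_create` → `nvmf_create_subsystem` → `nvmf_subsystem_add_ns` → `nvmf_subsystem_add_listener`), followed by a host-side `nvme connect` loop and a 10-second fio read phase. The loop below is a simplified dry-run reconstruction of that pattern, not the actual `multiconnection.sh` script: it only *prints* the commands the log shows being issued (so it is safe to run anywhere), and the `rpc.py` name, host NQN, and target address are assumptions copied from the log output above.

```shell
# Dry-run sketch of the multiconnection setup recorded in this log.
# Echoes the RPC and nvme-cli command sequence instead of executing it.
sketch_multiconnection_cmds() {
    NVMF_SUBSYS=11
    TARGET_IP=10.0.0.2
    TARGET_PORT=4420
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

    # Target side: one 64 MiB malloc bdev (512-byte blocks) per subsystem,
    # exposed as a namespace behind a TCP listener, as shown in the log.
    for i in $(seq 1 "$NVMF_SUBSYS"); do
        echo "rpc.py bdev_malloc_create 64 512 -b Malloc$i"
        echo "rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
        echo "rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
        echo "rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a $TARGET_IP -s $TARGET_PORT"
    done

    # Host side: connect each subsystem; the real test then polls
    # 'lsblk -l -o NAME,SERIAL' until serial SPDK$i appears (waitforserial).
    for i in $(seq 1 "$NVMF_SUBSYS"); do
        echo "nvme connect --hostnqn=$HOSTNQN -t tcp -n nqn.2016-06.io.spdk:cnode$i -a $TARGET_IP -s $TARGET_PORT"
    done

    # I/O phase per the fio header above: 256 KiB reads, iodepth 64, 10 s.
    echo "fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10"
}

sketch_multiconnection_cmds
```

Note the namespace and serial index track the cnode index (Malloc6/SPDK6 on cnode6, and so on), which is what lets `waitforserial` verify each connection independently by grepping for its serial in `lsblk` output.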
per=6.09%, avg=51737.60, stdev=27028.49, samples=20 00:27:28.757 iops : min= 36, max= 512, avg=202.10, stdev=105.58, samples=20 00:27:28.757 lat (msec) : 50=0.10%, 100=3.69%, 250=39.28%, 500=45.18%, 750=11.75% 00:27:28.757 cpu : usr=0.04%, sys=0.78%, ctx=356, majf=0, minf=4097 00:27:28.757 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.5%, >=64=97.0% 00:27:28.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:28.757 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:28.757 issued rwts: total=2085,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:28.757 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:28.757 job1: (groupid=0, jobs=1): err= 0: pid=3231593: Tue Oct 1 15:45:06 2024 00:27:28.757 read: IOPS=242, BW=60.7MiB/s (63.6MB/s)(615MiB/10128msec) 00:27:28.757 slat (usec): min=13, max=261111, avg=2780.57, stdev=13036.02 00:27:28.757 clat (msec): min=8, max=783, avg=260.38, stdev=159.75 00:27:28.757 lat (msec): min=8, max=783, avg=263.16, stdev=161.51 00:27:28.757 clat percentiles (msec): 00:27:28.757 | 1.00th=[ 22], 5.00th=[ 40], 10.00th=[ 57], 20.00th=[ 107], 00:27:28.757 | 30.00th=[ 169], 40.00th=[ 209], 50.00th=[ 243], 60.00th=[ 279], 00:27:28.757 | 70.00th=[ 326], 80.00th=[ 405], 90.00th=[ 485], 95.00th=[ 550], 00:27:28.757 | 99.00th=[ 684], 99.50th=[ 735], 99.90th=[ 785], 99.95th=[ 785], 00:27:28.757 | 99.99th=[ 785] 00:27:28.757 bw ( KiB/s): min=24576, max=110592, per=7.22%, avg=61337.60, stdev=27855.12, samples=20 00:27:28.757 iops : min= 96, max= 432, avg=239.60, stdev=108.81, samples=20 00:27:28.757 lat (msec) : 10=0.08%, 20=0.57%, 50=7.73%, 100=10.98%, 250=33.35% 00:27:28.757 lat (msec) : 500=38.39%, 750=8.58%, 1000=0.33% 00:27:28.758 cpu : usr=0.08%, sys=0.99%, ctx=531, majf=0, minf=4097 00:27:28.758 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4% 00:27:28.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:28.758 complete 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:28.758 issued rwts: total=2459,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:28.758 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:28.758 job2: (groupid=0, jobs=1): err= 0: pid=3231615: Tue Oct 1 15:45:06 2024 00:27:28.758 read: IOPS=474, BW=119MiB/s (124MB/s)(1196MiB/10070msec) 00:27:28.758 slat (usec): min=10, max=144687, avg=2059.45, stdev=8805.37 00:27:28.758 clat (msec): min=28, max=671, avg=132.57, stdev=129.91 00:27:28.758 lat (msec): min=30, max=671, avg=134.63, stdev=131.81 00:27:28.758 clat percentiles (msec): 00:27:28.758 | 1.00th=[ 34], 5.00th=[ 37], 10.00th=[ 41], 20.00th=[ 45], 00:27:28.758 | 30.00th=[ 49], 40.00th=[ 61], 50.00th=[ 74], 60.00th=[ 95], 00:27:28.758 | 70.00th=[ 128], 80.00th=[ 205], 90.00th=[ 334], 95.00th=[ 430], 00:27:28.758 | 99.00th=[ 575], 99.50th=[ 617], 99.90th=[ 676], 99.95th=[ 676], 00:27:28.758 | 99.99th=[ 676] 00:27:28.758 bw ( KiB/s): min=21504, max=371200, per=14.23%, avg=120780.80, stdev=105348.43, samples=20 00:27:28.758 iops : min= 84, max= 1450, avg=471.80, stdev=411.52, samples=20 00:27:28.758 lat (msec) : 50=33.19%, 100=29.05%, 250=21.27%, 500=14.22%, 750=2.28% 00:27:28.758 cpu : usr=0.15%, sys=1.78%, ctx=809, majf=0, minf=4097 00:27:28.758 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:27:28.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:28.758 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:28.758 issued rwts: total=4782,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:28.758 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:28.758 job3: (groupid=0, jobs=1): err= 0: pid=3231626: Tue Oct 1 15:45:06 2024 00:27:28.758 read: IOPS=136, BW=34.1MiB/s (35.8MB/s)(347MiB/10169msec) 00:27:28.758 slat (usec): min=12, max=455018, avg=6550.51, stdev=25077.13 00:27:28.758 clat (msec): min=27, max=948, avg=462.19, stdev=193.93 00:27:28.758 lat 
(msec): min=27, max=1121, avg=468.75, stdev=195.99 00:27:28.758 clat percentiles (msec): 00:27:28.758 | 1.00th=[ 42], 5.00th=[ 218], 10.00th=[ 241], 20.00th=[ 271], 00:27:28.758 | 30.00th=[ 317], 40.00th=[ 405], 50.00th=[ 443], 60.00th=[ 518], 00:27:28.758 | 70.00th=[ 575], 80.00th=[ 634], 90.00th=[ 709], 95.00th=[ 835], 00:27:28.758 | 99.00th=[ 911], 99.50th=[ 919], 99.90th=[ 953], 99.95th=[ 953], 00:27:28.758 | 99.99th=[ 953] 00:27:28.758 bw ( KiB/s): min=11776, max=66048, per=3.99%, avg=33873.80, stdev=15050.14, samples=20 00:27:28.758 iops : min= 46, max= 258, avg=132.30, stdev=58.77, samples=20 00:27:28.758 lat (msec) : 50=1.23%, 100=1.30%, 250=11.97%, 500=43.48%, 750=34.53% 00:27:28.758 lat (msec) : 1000=7.50% 00:27:28.758 cpu : usr=0.02%, sys=0.59%, ctx=224, majf=0, minf=4097 00:27:28.758 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.3%, >=64=95.5% 00:27:28.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:28.758 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:28.758 issued rwts: total=1387,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:28.758 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:28.758 job4: (groupid=0, jobs=1): err= 0: pid=3231633: Tue Oct 1 15:45:06 2024 00:27:28.758 read: IOPS=233, BW=58.5MiB/s (61.3MB/s)(588MiB/10050msec) 00:27:28.758 slat (usec): min=12, max=238163, avg=2700.12, stdev=15022.87 00:27:28.758 clat (msec): min=2, max=927, avg=270.45, stdev=246.76 00:27:28.758 lat (msec): min=2, max=945, avg=273.15, stdev=249.73 00:27:28.758 clat percentiles (msec): 00:27:28.758 | 1.00th=[ 7], 5.00th=[ 9], 10.00th=[ 33], 20.00th=[ 56], 00:27:28.758 | 30.00th=[ 67], 40.00th=[ 85], 50.00th=[ 176], 60.00th=[ 300], 00:27:28.758 | 70.00th=[ 414], 80.00th=[ 523], 90.00th=[ 642], 95.00th=[ 735], 00:27:28.758 | 99.00th=[ 844], 99.50th=[ 911], 99.90th=[ 927], 99.95th=[ 927], 00:27:28.758 | 99.99th=[ 927] 00:27:28.758 bw ( KiB/s): min= 9216, max=263168, per=6.90%, 
avg=58572.80, stdev=62438.46, samples=20 00:27:28.758 iops : min= 36, max= 1028, avg=228.80, stdev=243.90, samples=20 00:27:28.758 lat (msec) : 4=0.26%, 10=5.53%, 20=2.25%, 50=8.80%, 100=28.03% 00:27:28.758 lat (msec) : 250=11.48%, 500=22.08%, 750=17.44%, 1000=4.13% 00:27:28.758 cpu : usr=0.07%, sys=0.96%, ctx=717, majf=0, minf=4097 00:27:28.758 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:27:28.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:28.758 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:28.758 issued rwts: total=2351,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:28.758 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:28.758 job5: (groupid=0, jobs=1): err= 0: pid=3231658: Tue Oct 1 15:45:06 2024 00:27:28.758 read: IOPS=203, BW=50.9MiB/s (53.4MB/s)(516MiB/10125msec) 00:27:28.758 slat (usec): min=9, max=249569, avg=4071.66, stdev=18371.69 00:27:28.758 clat (msec): min=16, max=975, avg=309.71, stdev=217.53 00:27:28.758 lat (msec): min=18, max=975, avg=313.78, stdev=220.78 00:27:28.758 clat percentiles (msec): 00:27:28.758 | 1.00th=[ 41], 5.00th=[ 65], 10.00th=[ 72], 20.00th=[ 97], 00:27:28.758 | 30.00th=[ 133], 40.00th=[ 186], 50.00th=[ 215], 60.00th=[ 363], 00:27:28.758 | 70.00th=[ 435], 80.00th=[ 523], 90.00th=[ 625], 95.00th=[ 701], 00:27:28.758 | 99.00th=[ 827], 99.50th=[ 852], 99.90th=[ 877], 99.95th=[ 978], 00:27:28.758 | 99.99th=[ 978] 00:27:28.758 bw ( KiB/s): min= 8192, max=169984, per=6.02%, avg=51148.80, stdev=39482.54, samples=20 00:27:28.758 iops : min= 32, max= 664, avg=199.80, stdev=154.23, samples=20 00:27:28.758 lat (msec) : 20=0.10%, 50=1.21%, 100=19.79%, 250=31.91%, 500=24.59% 00:27:28.758 lat (msec) : 750=19.35%, 1000=3.06% 00:27:28.758 cpu : usr=0.09%, sys=0.70%, ctx=371, majf=0, minf=4097 00:27:28.758 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9% 00:27:28.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:27:28.758 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:28.758 issued rwts: total=2062,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:28.758 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:28.758 job6: (groupid=0, jobs=1): err= 0: pid=3231669: Tue Oct 1 15:45:06 2024 00:27:28.758 read: IOPS=222, BW=55.7MiB/s (58.4MB/s)(564MiB/10125msec) 00:27:28.758 slat (usec): min=9, max=425044, avg=3603.07, stdev=21002.87 00:27:28.758 clat (msec): min=2, max=930, avg=283.37, stdev=243.83 00:27:28.758 lat (msec): min=2, max=1080, avg=286.97, stdev=246.26 00:27:28.758 clat percentiles (msec): 00:27:28.758 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 22], 20.00th=[ 62], 00:27:28.758 | 30.00th=[ 86], 40.00th=[ 157], 50.00th=[ 251], 60.00th=[ 309], 00:27:28.758 | 70.00th=[ 388], 80.00th=[ 451], 90.00th=[ 651], 95.00th=[ 827], 00:27:28.758 | 99.00th=[ 919], 99.50th=[ 927], 99.90th=[ 927], 99.95th=[ 927], 00:27:28.758 | 99.99th=[ 927] 00:27:28.758 bw ( KiB/s): min= 8704, max=214016, per=6.61%, avg=56115.20, stdev=51239.97, samples=20 00:27:28.758 iops : min= 34, max= 836, avg=219.20, stdev=200.16, samples=20 00:27:28.758 lat (msec) : 4=6.12%, 10=0.44%, 20=1.33%, 50=10.37%, 100=19.55% 00:27:28.758 lat (msec) : 250=12.28%, 500=33.69%, 750=8.64%, 1000=7.58% 00:27:28.758 cpu : usr=0.09%, sys=0.89%, ctx=731, majf=0, minf=4097 00:27:28.758 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:27:28.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:28.758 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:28.758 issued rwts: total=2256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:28.758 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:28.758 job7: (groupid=0, jobs=1): err= 0: pid=3231679: Tue Oct 1 15:45:06 2024 00:27:28.758 read: IOPS=172, BW=43.2MiB/s (45.3MB/s)(438MiB/10133msec) 00:27:28.758 slat (usec): min=6, max=490253, 
avg=5388.00, stdev=25464.35 00:27:28.758 clat (msec): min=13, max=954, avg=364.33, stdev=286.64 00:27:28.758 lat (msec): min=13, max=1011, avg=369.71, stdev=290.36 00:27:28.758 clat percentiles (msec): 00:27:28.759 | 1.00th=[ 20], 5.00th=[ 29], 10.00th=[ 31], 20.00th=[ 32], 00:27:28.759 | 30.00th=[ 101], 40.00th=[ 249], 50.00th=[ 317], 60.00th=[ 443], 00:27:28.759 | 70.00th=[ 567], 80.00th=[ 667], 90.00th=[ 785], 95.00th=[ 835], 00:27:28.759 | 99.00th=[ 902], 99.50th=[ 919], 99.90th=[ 953], 99.95th=[ 953], 00:27:28.759 | 99.99th=[ 953] 00:27:28.759 bw ( KiB/s): min=13312, max=212480, per=5.09%, avg=43212.80, stdev=44014.26, samples=20 00:27:28.759 iops : min= 52, max= 830, avg=168.80, stdev=171.93, samples=20 00:27:28.759 lat (msec) : 20=1.43%, 50=27.74%, 100=0.91%, 250=10.10%, 500=24.32% 00:27:28.759 lat (msec) : 750=22.89%, 1000=12.61% 00:27:28.759 cpu : usr=0.01%, sys=0.62%, ctx=290, majf=0, minf=4097 00:27:28.759 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.8%, >=64=96.4% 00:27:28.759 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:28.759 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:28.759 issued rwts: total=1752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:28.759 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:28.759 job8: (groupid=0, jobs=1): err= 0: pid=3231708: Tue Oct 1 15:45:06 2024 00:27:28.759 read: IOPS=1037, BW=259MiB/s (272MB/s)(2600MiB/10021msec) 00:27:28.759 slat (usec): min=8, max=56959, avg=942.81, stdev=3307.48 00:27:28.759 clat (msec): min=8, max=327, avg=60.66, stdev=43.51 00:27:28.759 lat (msec): min=8, max=327, avg=61.60, stdev=44.12 00:27:28.759 clat percentiles (msec): 00:27:28.759 | 1.00th=[ 25], 5.00th=[ 35], 10.00th=[ 38], 20.00th=[ 40], 00:27:28.759 | 30.00th=[ 42], 40.00th=[ 44], 50.00th=[ 46], 60.00th=[ 47], 00:27:28.759 | 70.00th=[ 49], 80.00th=[ 55], 90.00th=[ 125], 95.00th=[ 155], 00:27:28.759 | 99.00th=[ 255], 99.50th=[ 268], 99.90th=[ 
296], 99.95th=[ 309], 00:27:28.759 | 99.99th=[ 317] 00:27:28.759 bw ( KiB/s): min=57856, max=415232, per=31.16%, avg=264601.60, stdev=127284.50, samples=20 00:27:28.759 iops : min= 226, max= 1622, avg=1033.60, stdev=497.21, samples=20 00:27:28.759 lat (msec) : 10=0.04%, 20=0.33%, 50=73.77%, 100=12.14%, 250=12.56% 00:27:28.759 lat (msec) : 500=1.17% 00:27:28.759 cpu : usr=0.32%, sys=3.25%, ctx=1354, majf=0, minf=3534 00:27:28.759 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:27:28.759 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:28.759 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:28.759 issued rwts: total=10399,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:28.759 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:28.759 job9: (groupid=0, jobs=1): err= 0: pid=3231724: Tue Oct 1 15:45:06 2024 00:27:28.759 read: IOPS=223, BW=56.0MiB/s (58.7MB/s)(567MiB/10129msec) 00:27:28.759 slat (usec): min=12, max=402777, avg=2959.80, stdev=17998.39 00:27:28.759 clat (msec): min=23, max=891, avg=282.47, stdev=227.24 00:27:28.759 lat (msec): min=23, max=1018, avg=285.43, stdev=229.23 00:27:28.759 clat percentiles (msec): 00:27:28.759 | 1.00th=[ 32], 5.00th=[ 54], 10.00th=[ 77], 20.00th=[ 113], 00:27:28.759 | 30.00th=[ 126], 40.00th=[ 140], 50.00th=[ 169], 60.00th=[ 241], 00:27:28.759 | 70.00th=[ 338], 80.00th=[ 518], 90.00th=[ 667], 95.00th=[ 743], 00:27:28.759 | 99.00th=[ 835], 99.50th=[ 860], 99.90th=[ 885], 99.95th=[ 894], 00:27:28.759 | 99.99th=[ 894] 00:27:28.759 bw ( KiB/s): min= 3584, max=139264, per=6.65%, avg=56422.40, stdev=37969.70, samples=20 00:27:28.759 iops : min= 14, max= 544, avg=220.40, stdev=148.32, samples=20 00:27:28.759 lat (msec) : 50=4.54%, 100=10.67%, 250=46.56%, 500=16.71%, 750=16.67% 00:27:28.759 lat (msec) : 1000=4.85% 00:27:28.759 cpu : usr=0.02%, sys=0.88%, ctx=453, majf=0, minf=4097 00:27:28.759 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 
32=1.4%, >=64=97.2% 00:27:28.759 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:28.759 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:28.759 issued rwts: total=2268,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:28.759 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:28.759 job10: (groupid=0, jobs=1): err= 0: pid=3231731: Tue Oct 1 15:45:06 2024 00:27:28.759 read: IOPS=189, BW=47.5MiB/s (49.8MB/s)(481MiB/10141msec) 00:27:28.759 slat (usec): min=11, max=482159, avg=3446.36, stdev=20875.25 00:27:28.759 clat (msec): min=13, max=1188, avg=333.20, stdev=248.16 00:27:28.759 lat (msec): min=13, max=1188, avg=336.64, stdev=251.25 00:27:28.759 clat percentiles (msec): 00:27:28.759 | 1.00th=[ 26], 5.00th=[ 39], 10.00th=[ 46], 20.00th=[ 66], 00:27:28.759 | 30.00th=[ 102], 40.00th=[ 234], 50.00th=[ 309], 60.00th=[ 418], 00:27:28.759 | 70.00th=[ 468], 80.00th=[ 550], 90.00th=[ 676], 95.00th=[ 785], 00:27:28.759 | 99.00th=[ 927], 99.50th=[ 936], 99.90th=[ 1150], 99.95th=[ 1183], 00:27:28.759 | 99.99th=[ 1183] 00:27:28.759 bw ( KiB/s): min=15872, max=139264, per=5.61%, avg=47641.60, stdev=33385.54, samples=20 00:27:28.759 iops : min= 62, max= 544, avg=186.10, stdev=130.41, samples=20 00:27:28.759 lat (msec) : 20=0.42%, 50=11.06%, 100=18.29%, 250=12.05%, 500=33.97% 00:27:28.759 lat (msec) : 750=16.78%, 1000=7.22%, 2000=0.21% 00:27:28.759 cpu : usr=0.03%, sys=0.80%, ctx=475, majf=0, minf=4097 00:27:28.759 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.7%, >=64=96.7% 00:27:28.759 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:28.759 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:28.759 issued rwts: total=1925,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:28.759 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:28.759 00:27:28.759 Run status group 0 (all jobs): 00:27:28.759 READ: bw=829MiB/s (869MB/s), 34.1MiB/s-259MiB/s 
(35.8MB/s-272MB/s), io=8432MiB (8841MB), run=10021-10169msec 00:27:28.759 00:27:28.759 Disk stats (read/write): 00:27:28.759 nvme0n1: ios=3976/0, merge=0/0, ticks=1217962/0, in_queue=1217962, util=96.35% 00:27:28.759 nvme10n1: ios=4868/0, merge=0/0, ticks=1254988/0, in_queue=1254988, util=96.77% 00:27:28.759 nvme1n1: ios=9373/0, merge=0/0, ticks=1197873/0, in_queue=1197873, util=97.00% 00:27:28.759 nvme2n1: ios=2689/0, merge=0/0, ticks=1240971/0, in_queue=1240971, util=97.29% 00:27:28.759 nvme3n1: ios=4402/0, merge=0/0, ticks=1230468/0, in_queue=1230468, util=97.35% 00:27:28.759 nvme4n1: ios=4064/0, merge=0/0, ticks=1237926/0, in_queue=1237926, util=97.90% 00:27:28.759 nvme5n1: ios=4387/0, merge=0/0, ticks=1238315/0, in_queue=1238315, util=98.07% 00:27:28.759 nvme6n1: ios=3434/0, merge=0/0, ticks=1227044/0, in_queue=1227044, util=98.33% 00:27:28.759 nvme7n1: ios=20203/0, merge=0/0, ticks=1229313/0, in_queue=1229313, util=98.83% 00:27:28.759 nvme8n1: ios=4461/0, merge=0/0, ticks=1233619/0, in_queue=1233619, util=99.05% 00:27:28.759 nvme9n1: ios=3744/0, merge=0/0, ticks=1228998/0, in_queue=1228998, util=99.29% 00:27:28.759 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:27:28.759 [global] 00:27:28.759 thread=1 00:27:28.759 invalidate=1 00:27:28.759 rw=randwrite 00:27:28.759 time_based=1 00:27:28.759 runtime=10 00:27:28.759 ioengine=libaio 00:27:28.759 direct=1 00:27:28.759 bs=262144 00:27:28.759 iodepth=64 00:27:28.759 norandommap=1 00:27:28.759 numjobs=1 00:27:28.759 00:27:28.759 [job0] 00:27:28.759 filename=/dev/nvme0n1 00:27:28.759 [job1] 00:27:28.759 filename=/dev/nvme10n1 00:27:28.759 [job2] 00:27:28.759 filename=/dev/nvme1n1 00:27:28.759 [job3] 00:27:28.759 filename=/dev/nvme2n1 00:27:28.759 [job4] 00:27:28.759 filename=/dev/nvme3n1 00:27:28.759 [job5] 00:27:28.759 filename=/dev/nvme4n1 00:27:28.759 
[job6] 00:27:28.759 filename=/dev/nvme5n1 00:27:28.759 [job7] 00:27:28.759 filename=/dev/nvme6n1 00:27:28.759 [job8] 00:27:28.759 filename=/dev/nvme7n1 00:27:28.759 [job9] 00:27:28.759 filename=/dev/nvme8n1 00:27:28.759 [job10] 00:27:28.759 filename=/dev/nvme9n1 00:27:28.759 Could not set queue depth (nvme0n1) 00:27:28.759 Could not set queue depth (nvme10n1) 00:27:28.759 Could not set queue depth (nvme1n1) 00:27:28.759 Could not set queue depth (nvme2n1) 00:27:28.759 Could not set queue depth (nvme3n1) 00:27:28.759 Could not set queue depth (nvme4n1) 00:27:28.759 Could not set queue depth (nvme5n1) 00:27:28.759 Could not set queue depth (nvme6n1) 00:27:28.759 Could not set queue depth (nvme7n1) 00:27:28.759 Could not set queue depth (nvme8n1) 00:27:28.759 Could not set queue depth (nvme9n1) 00:27:28.759 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:28.759 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:28.759 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:28.760 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:28.760 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:28.760 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:28.760 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:28.760 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:28.760 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:28.760 job9: (g=0): rw=randwrite, 
bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:28.760 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:28.760 fio-3.35 00:27:28.760 Starting 11 threads 00:27:38.768 00:27:38.768 job0: (groupid=0, jobs=1): err= 0: pid=3232830: Tue Oct 1 15:45:17 2024 00:27:38.768 write: IOPS=327, BW=81.8MiB/s (85.8MB/s)(829MiB/10133msec); 0 zone resets 00:27:38.768 slat (usec): min=19, max=155271, avg=2767.62, stdev=6717.98 00:27:38.768 clat (msec): min=2, max=425, avg=192.63, stdev=78.82 00:27:38.768 lat (msec): min=2, max=455, avg=195.40, stdev=79.79 00:27:38.768 clat percentiles (msec): 00:27:38.768 | 1.00th=[ 15], 5.00th=[ 57], 10.00th=[ 69], 20.00th=[ 142], 00:27:38.768 | 30.00th=[ 165], 40.00th=[ 184], 50.00th=[ 194], 60.00th=[ 201], 00:27:38.768 | 70.00th=[ 224], 80.00th=[ 255], 90.00th=[ 288], 95.00th=[ 326], 00:27:38.768 | 99.00th=[ 397], 99.50th=[ 414], 99.90th=[ 426], 99.95th=[ 426], 00:27:38.768 | 99.99th=[ 426] 00:27:38.768 bw ( KiB/s): min=49152, max=166400, per=6.39%, avg=83302.40, stdev=31606.84, samples=20 00:27:38.768 iops : min= 192, max= 650, avg=325.40, stdev=123.46, samples=20 00:27:38.768 lat (msec) : 4=0.06%, 10=0.39%, 20=0.96%, 50=1.84%, 100=11.21% 00:27:38.768 lat (msec) : 250=64.58%, 500=20.95% 00:27:38.768 cpu : usr=0.70%, sys=1.19%, ctx=1097, majf=0, minf=1 00:27:38.768 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:27:38.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:38.768 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:38.768 issued rwts: total=0,3317,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:38.768 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:38.768 job1: (groupid=0, jobs=1): err= 0: pid=3232854: Tue Oct 1 15:45:17 2024 00:27:38.768 write: IOPS=441, BW=110MiB/s (116MB/s)(1114MiB/10084msec); 0 zone resets 00:27:38.768 
slat (usec): min=26, max=129421, avg=2021.53, stdev=4894.75 00:27:38.768 clat (msec): min=2, max=365, avg=142.80, stdev=73.92 00:27:38.768 lat (msec): min=2, max=365, avg=144.82, stdev=74.96 00:27:38.768 clat percentiles (msec): 00:27:38.768 | 1.00th=[ 7], 5.00th=[ 30], 10.00th=[ 70], 20.00th=[ 97], 00:27:38.768 | 30.00th=[ 107], 40.00th=[ 110], 50.00th=[ 114], 60.00th=[ 144], 00:27:38.768 | 70.00th=[ 171], 80.00th=[ 213], 90.00th=[ 257], 95.00th=[ 284], 00:27:38.768 | 99.00th=[ 330], 99.50th=[ 347], 99.90th=[ 368], 99.95th=[ 368], 00:27:38.768 | 99.99th=[ 368] 00:27:38.768 bw ( KiB/s): min=55296, max=265728, per=8.63%, avg=112435.20, stdev=54520.67, samples=20 00:27:38.768 iops : min= 216, max= 1038, avg=439.20, stdev=212.97, samples=20 00:27:38.768 lat (msec) : 4=0.18%, 10=1.55%, 20=1.86%, 50=3.88%, 100=14.03% 00:27:38.768 lat (msec) : 250=66.20%, 500=12.30% 00:27:38.768 cpu : usr=1.11%, sys=1.49%, ctx=1619, majf=0, minf=1 00:27:38.768 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:27:38.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:38.768 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:38.768 issued rwts: total=0,4455,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:38.768 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:38.768 job2: (groupid=0, jobs=1): err= 0: pid=3232874: Tue Oct 1 15:45:17 2024 00:27:38.768 write: IOPS=401, BW=100MiB/s (105MB/s)(1018MiB/10136msec); 0 zone resets 00:27:38.768 slat (usec): min=27, max=59073, avg=2382.84, stdev=4834.28 00:27:38.768 clat (msec): min=13, max=414, avg=156.82, stdev=70.97 00:27:38.768 lat (msec): min=13, max=414, avg=159.20, stdev=71.87 00:27:38.768 clat percentiles (msec): 00:27:38.768 | 1.00th=[ 24], 5.00th=[ 81], 10.00th=[ 82], 20.00th=[ 87], 00:27:38.768 | 30.00th=[ 99], 40.00th=[ 134], 50.00th=[ 150], 60.00th=[ 180], 00:27:38.768 | 70.00th=[ 194], 80.00th=[ 203], 90.00th=[ 251], 95.00th=[ 292], 00:27:38.768 
| 99.00th=[ 363], 99.50th=[ 372], 99.90th=[ 397], 99.95th=[ 397], 00:27:38.768 | 99.99th=[ 414] 00:27:38.768 bw ( KiB/s): min=57856, max=192512, per=7.87%, avg=102630.40, stdev=40646.03, samples=20 00:27:38.768 iops : min= 226, max= 752, avg=400.90, stdev=158.77, samples=20 00:27:38.768 lat (msec) : 20=0.47%, 50=1.96%, 100=27.99%, 250=59.46%, 500=10.12% 00:27:38.768 cpu : usr=0.90%, sys=1.14%, ctx=1136, majf=0, minf=1 00:27:38.768 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:27:38.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:38.768 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:38.768 issued rwts: total=0,4073,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:38.768 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:38.768 job3: (groupid=0, jobs=1): err= 0: pid=3232886: Tue Oct 1 15:45:17 2024 00:27:38.768 write: IOPS=349, BW=87.3MiB/s (91.5MB/s)(883MiB/10122msec); 0 zone resets 00:27:38.768 slat (usec): min=25, max=541946, avg=2520.66, stdev=10997.00 00:27:38.768 clat (msec): min=2, max=702, avg=180.73, stdev=101.29 00:27:38.768 lat (msec): min=2, max=702, avg=183.25, stdev=102.27 00:27:38.768 clat percentiles (msec): 00:27:38.768 | 1.00th=[ 14], 5.00th=[ 33], 10.00th=[ 69], 20.00th=[ 90], 00:27:38.768 | 30.00th=[ 113], 40.00th=[ 161], 50.00th=[ 197], 60.00th=[ 207], 00:27:38.768 | 70.00th=[ 218], 80.00th=[ 247], 90.00th=[ 284], 95.00th=[ 309], 00:27:38.768 | 99.00th=[ 617], 99.50th=[ 651], 99.90th=[ 693], 99.95th=[ 693], 00:27:38.768 | 99.99th=[ 701] 00:27:38.768 bw ( KiB/s): min=35840, max=189440, per=6.82%, avg=88832.00, stdev=38638.91, samples=20 00:27:38.768 iops : min= 140, max= 740, avg=347.00, stdev=150.93, samples=20 00:27:38.768 lat (msec) : 4=0.06%, 10=0.65%, 20=1.05%, 50=6.14%, 100=17.01% 00:27:38.768 lat (msec) : 250=56.13%, 500=17.18%, 750=1.78% 00:27:38.768 cpu : usr=0.82%, sys=1.02%, ctx=1226, majf=0, minf=1 00:27:38.768 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:27:38.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:38.768 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:38.768 issued rwts: total=0,3533,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:38.768 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:38.768 job4: (groupid=0, jobs=1): err= 0: pid=3232893: Tue Oct 1 15:45:17 2024 00:27:38.768 write: IOPS=423, BW=106MiB/s (111MB/s)(1074MiB/10134msec); 0 zone resets 00:27:38.768 slat (usec): min=19, max=46816, avg=2297.89, stdev=4736.62 00:27:38.768 clat (msec): min=11, max=407, avg=148.59, stdev=77.54 00:27:38.768 lat (msec): min=11, max=407, avg=150.89, stdev=78.62 00:27:38.768 clat percentiles (msec): 00:27:38.768 | 1.00th=[ 53], 5.00th=[ 61], 10.00th=[ 65], 20.00th=[ 69], 00:27:38.768 | 30.00th=[ 73], 40.00th=[ 110], 50.00th=[ 157], 60.00th=[ 180], 00:27:38.768 | 70.00th=[ 194], 80.00th=[ 203], 90.00th=[ 245], 95.00th=[ 288], 00:27:38.768 | 99.00th=[ 376], 99.50th=[ 393], 99.90th=[ 401], 99.95th=[ 401], 00:27:38.768 | 99.99th=[ 409] 00:27:38.768 bw ( KiB/s): min=47104, max=237568, per=8.31%, avg=108339.20, stdev=56596.49, samples=20 00:27:38.768 iops : min= 184, max= 928, avg=423.20, stdev=221.08, samples=20 00:27:38.768 lat (msec) : 20=0.19%, 50=0.63%, 100=38.24%, 250=51.63%, 500=9.31% 00:27:38.768 cpu : usr=1.11%, sys=1.32%, ctx=1154, majf=0, minf=1 00:27:38.768 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:27:38.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:38.768 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:38.768 issued rwts: total=0,4296,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:38.768 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:38.769 job5: (groupid=0, jobs=1): err= 0: pid=3232903: Tue Oct 1 15:45:17 2024 00:27:38.769 write: IOPS=590, BW=148MiB/s 
(155MB/s)(1489MiB/10091msec); 0 zone resets 00:27:38.769 slat (usec): min=8, max=18538, avg=1545.24, stdev=3057.46 00:27:38.769 clat (msec): min=2, max=257, avg=106.84, stdev=39.13 00:27:38.769 lat (msec): min=2, max=257, avg=108.38, stdev=39.62 00:27:38.769 clat percentiles (msec): 00:27:38.769 | 1.00th=[ 11], 5.00th=[ 51], 10.00th=[ 66], 20.00th=[ 71], 00:27:38.769 | 30.00th=[ 82], 40.00th=[ 89], 50.00th=[ 114], 60.00th=[ 125], 00:27:38.769 | 70.00th=[ 133], 80.00th=[ 140], 90.00th=[ 148], 95.00th=[ 167], 00:27:38.769 | 99.00th=[ 207], 99.50th=[ 218], 99.90th=[ 241], 99.95th=[ 251], 00:27:38.769 | 99.99th=[ 257] 00:27:38.769 bw ( KiB/s): min=94208, max=260608, per=11.58%, avg=150908.35, stdev=45294.45, samples=20 00:27:38.769 iops : min= 368, max= 1018, avg=589.45, stdev=176.87, samples=20 00:27:38.769 lat (msec) : 4=0.17%, 10=0.71%, 20=0.89%, 50=3.17%, 100=40.83% 00:27:38.769 lat (msec) : 250=54.19%, 500=0.05% 00:27:38.769 cpu : usr=1.28%, sys=1.71%, ctx=1920, majf=0, minf=2 00:27:38.769 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:27:38.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:38.769 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:38.769 issued rwts: total=0,5957,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:38.769 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:38.769 job6: (groupid=0, jobs=1): err= 0: pid=3232904: Tue Oct 1 15:45:17 2024 00:27:38.769 write: IOPS=463, BW=116MiB/s (122MB/s)(1173MiB/10116msec); 0 zone resets 00:27:38.769 slat (usec): min=24, max=43153, avg=1676.92, stdev=4227.35 00:27:38.769 clat (msec): min=11, max=366, avg=136.26, stdev=80.06 00:27:38.769 lat (msec): min=11, max=369, avg=137.93, stdev=81.06 00:27:38.769 clat percentiles (msec): 00:27:38.769 | 1.00th=[ 32], 5.00th=[ 43], 10.00th=[ 46], 20.00th=[ 57], 00:27:38.769 | 30.00th=[ 68], 40.00th=[ 83], 50.00th=[ 138], 60.00th=[ 153], 00:27:38.769 | 70.00th=[ 176], 
80.00th=[ 213], 90.00th=[ 253], 95.00th=[ 279], 00:27:38.769 | 99.00th=[ 347], 99.50th=[ 355], 99.90th=[ 363], 99.95th=[ 368], 00:27:38.769 | 99.99th=[ 368] 00:27:38.769 bw ( KiB/s): min=53248, max=315392, per=9.09%, avg=118525.35, stdev=72548.22, samples=20 00:27:38.769 iops : min= 208, max= 1232, avg=462.95, stdev=283.33, samples=20 00:27:38.769 lat (msec) : 20=0.40%, 50=15.79%, 100=25.92%, 250=47.51%, 500=10.38% 00:27:38.769 cpu : usr=0.98%, sys=1.61%, ctx=1894, majf=0, minf=1 00:27:38.769 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:27:38.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:38.769 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:38.769 issued rwts: total=0,4692,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:38.769 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:38.769 job7: (groupid=0, jobs=1): err= 0: pid=3232905: Tue Oct 1 15:45:17 2024 00:27:38.769 write: IOPS=551, BW=138MiB/s (144MB/s)(1390MiB/10091msec); 0 zone resets 00:27:38.769 slat (usec): min=22, max=18565, avg=1715.28, stdev=3197.78 00:27:38.769 clat (msec): min=20, max=226, avg=114.35, stdev=30.95 00:27:38.769 lat (msec): min=20, max=226, avg=116.07, stdev=31.30 00:27:38.769 clat percentiles (msec): 00:27:38.769 | 1.00th=[ 74], 5.00th=[ 81], 10.00th=[ 82], 20.00th=[ 85], 00:27:38.769 | 30.00th=[ 88], 40.00th=[ 96], 50.00th=[ 115], 60.00th=[ 125], 00:27:38.769 | 70.00th=[ 133], 80.00th=[ 140], 90.00th=[ 146], 95.00th=[ 169], 00:27:38.769 | 99.00th=[ 211], 99.50th=[ 220], 99.90th=[ 226], 99.95th=[ 226], 00:27:38.769 | 99.99th=[ 226] 00:27:38.769 bw ( KiB/s): min=94208, max=193024, per=10.80%, avg=140748.80, stdev=31767.85, samples=20 00:27:38.769 iops : min= 368, max= 754, avg=549.80, stdev=124.09, samples=20 00:27:38.769 lat (msec) : 50=0.22%, 100=41.74%, 250=58.05% 00:27:38.769 cpu : usr=1.42%, sys=1.69%, ctx=1557, majf=0, minf=1 00:27:38.769 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 
8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:27:38.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:38.769 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:38.769 issued rwts: total=0,5561,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:38.769 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:38.769 job8: (groupid=0, jobs=1): err= 0: pid=3232906: Tue Oct 1 15:45:17 2024 00:27:38.769 write: IOPS=450, BW=113MiB/s (118MB/s)(1135MiB/10077msec); 0 zone resets 00:27:38.769 slat (usec): min=15, max=155996, avg=2197.26, stdev=5382.28 00:27:38.769 clat (msec): min=19, max=394, avg=139.82, stdev=86.85 00:27:38.769 lat (msec): min=19, max=394, avg=142.01, stdev=88.07 00:27:38.769 clat percentiles (msec): 00:27:38.769 | 1.00th=[ 34], 5.00th=[ 36], 10.00th=[ 39], 20.00th=[ 48], 00:27:38.769 | 30.00th=[ 72], 40.00th=[ 88], 50.00th=[ 122], 60.00th=[ 167], 00:27:38.769 | 70.00th=[ 201], 80.00th=[ 222], 90.00th=[ 262], 95.00th=[ 279], 00:27:38.769 | 99.00th=[ 351], 99.50th=[ 380], 99.90th=[ 393], 99.95th=[ 397], 00:27:38.769 | 99.99th=[ 397] 00:27:38.769 bw ( KiB/s): min=47616, max=345600, per=8.79%, avg=114611.20, stdev=84522.74, samples=20 00:27:38.769 iops : min= 186, max= 1350, avg=447.70, stdev=330.17, samples=20 00:27:38.769 lat (msec) : 20=0.09%, 50=20.93%, 100=24.96%, 250=40.20%, 500=13.83% 00:27:38.769 cpu : usr=1.00%, sys=1.42%, ctx=1118, majf=0, minf=1 00:27:38.769 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:27:38.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:38.769 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:38.769 issued rwts: total=0,4540,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:38.769 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:38.769 job9: (groupid=0, jobs=1): err= 0: pid=3232907: Tue Oct 1 15:45:17 2024 00:27:38.769 write: IOPS=592, BW=148MiB/s (155MB/s)(1495MiB/10091msec); 0 
zone resets
00:27:38.769 slat (usec): min=21, max=17948, avg=1462.61, stdev=2932.79
00:27:38.769 clat (msec): min=4, max=260, avg=106.53, stdev=37.04
00:27:38.769 lat (msec): min=4, max=260, avg=108.00, stdev=37.43
00:27:38.769 clat percentiles (msec):
00:27:38.769 | 1.00th=[ 48], 5.00th=[ 65], 10.00th=[ 68], 20.00th=[ 80],
00:27:38.769 | 30.00th=[ 85], 40.00th=[ 87], 50.00th=[ 94], 60.00th=[ 112],
00:27:38.769 | 70.00th=[ 125], 80.00th=[ 134], 90.00th=[ 146], 95.00th=[ 194],
00:27:38.769 | 99.00th=[ 215], 99.50th=[ 230], 99.90th=[ 251], 99.95th=[ 255],
00:27:38.769 | 99.99th=[ 262]
00:27:38.769 bw ( KiB/s): min=79872, max=230400, per=11.62%, avg=151424.00, stdev=36771.02, samples=20
00:27:38.769 iops : min= 312, max= 900, avg=591.50, stdev=143.64, samples=20
00:27:38.769 lat (msec) : 10=0.12%, 20=0.13%, 50=1.04%, 100=51.62%, 250=46.99%
00:27:38.769 lat (msec) : 500=0.10%
00:27:38.769 cpu : usr=1.38%, sys=1.76%, ctx=2052, majf=0, minf=1
00:27:38.769 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9%
00:27:38.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:38.769 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:27:38.769 issued rwts: total=0,5978,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:38.769 latency : target=0, window=0, percentile=100.00%, depth=64
00:27:38.769 job10: (groupid=0, jobs=1): err= 0: pid=3232908: Tue Oct 1 15:45:17 2024
00:27:38.769 write: IOPS=516, BW=129MiB/s (135MB/s)(1300MiB/10071msec); 0 zone resets
00:27:38.769 slat (usec): min=11, max=100228, avg=1737.05, stdev=4071.49
00:27:38.769 clat (msec): min=3, max=362, avg=122.21, stdev=70.07
00:27:38.769 lat (msec): min=4, max=367, avg=123.94, stdev=70.94
00:27:38.769 clat percentiles (msec):
00:27:38.769 | 1.00th=[ 24], 5.00th=[ 39], 10.00th=[ 44], 20.00th=[ 50],
00:27:38.769 | 30.00th=[ 79], 40.00th=[ 107], 50.00th=[ 112], 60.00th=[ 117],
00:27:38.769 | 70.00th=[ 144], 80.00th=[ 161], 90.00th=[ 232], 95.00th=[ 275],
00:27:38.769 | 99.00th=[ 317], 99.50th=[ 334], 99.90th=[ 355], 99.95th=[ 359],
00:27:38.769 | 99.99th=[ 363]
00:27:38.769 bw ( KiB/s): min=49152, max=362496, per=10.09%, avg=131481.60, stdev=78142.21, samples=20
00:27:38.769 iops : min= 192, max= 1416, avg=513.60, stdev=305.24, samples=20
00:27:38.769 lat (msec) : 4=0.02%, 10=0.27%, 20=0.50%, 50=19.47%, 100=12.85%
00:27:38.769 lat (msec) : 250=59.86%, 500=7.04%
00:27:38.769 cpu : usr=1.12%, sys=1.45%, ctx=1590, majf=0, minf=1
00:27:38.769 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8%
00:27:38.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:38.769 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:27:38.769 issued rwts: total=0,5199,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:38.769 latency : target=0, window=0, percentile=100.00%, depth=64
00:27:38.769
00:27:38.769 Run status group 0 (all jobs):
00:27:38.769 WRITE: bw=1273MiB/s (1335MB/s), 81.8MiB/s-148MiB/s (85.8MB/s-155MB/s), io=12.6GiB (13.5GB), run=10071-10136msec
00:27:38.769
00:27:38.769 Disk stats (read/write):
00:27:38.769 nvme0n1: ios=48/6565, merge=0/0, ticks=3369/1213859, in_queue=1217228, util=100.00%
00:27:38.769 nvme10n1: ios=44/8561, merge=0/0, ticks=93/1203893, in_queue=1203986, util=97.02%
00:27:38.769 nvme1n1: ios=43/8075, merge=0/0, ticks=1303/1221789, in_queue=1223092, util=100.00%
00:27:38.769 nvme2n1: ios=45/7016, merge=0/0, ticks=5448/1159606, in_queue=1165054, util=100.00%
00:27:38.769 nvme3n1: ios=45/8519, merge=0/0, ticks=1228/1220117, in_queue=1221345, util=100.00%
00:27:38.769 nvme4n1: ios=0/11904, merge=0/0, ticks=0/1232560, in_queue=1232560, util=97.76%
00:27:38.769 nvme5n1: ios=0/9345, merge=0/0, ticks=0/1235860, in_queue=1235860, util=97.97%
00:27:38.769 nvme6n1: ios=40/11111, merge=0/0, ticks=1498/1230269, in_queue=1231767, util=100.00%
00:27:38.769 nvme7n1: ios=0/8833, merge=0/0, ticks=0/1188684, in_queue=1188684, util=98.62%
00:27:38.769 nvme8n1:
ios=0/11945, merge=0/0, ticks=0/1235122, in_queue=1235122, util=98.91%
00:27:38.769 nvme9n1: ios=0/10123, merge=0/0, ticks=0/1191055, in_queue=1191055, util=99.06%
00:27:38.769 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync
00:27:38.769 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11
00:27:38.769 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:27:38.769 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:27:38.769 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:27:38.769 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1
00:27:38.769 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0
00:27:38.769 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:27:38.770 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1
00:27:38.770 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:27:38.770 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1
00:27:38.770 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0
00:27:38.770 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:27:38.770 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:38.770 15:45:17
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:27:38.770 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:38.770 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:27:38.770 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2
00:27:38.770 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s)
00:27:38.770 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2
00:27:38.770 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0
00:27:38.770 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:27:38.770 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2
00:27:38.770 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2
00:27:38.770 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:27:38.770 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0
00:27:38.770 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:27:38.770 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:38.770 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:27:38.770 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:38.770 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:27:38.770 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3
00:27:39.031 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s)
00:27:39.031 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3
00:27:39.031 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0
00:27:39.031 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:27:39.031 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3
00:27:39.031 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:27:39.031 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3
00:27:39.031 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0
00:27:39.031 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
00:27:39.031 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:39.031 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:27:39.031 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:39.031 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:27:39.031 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection --
target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4
00:27:39.603 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s)
00:27:39.603 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4
00:27:39.603 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0
00:27:39.603 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:27:39.604 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4
00:27:39.604 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:27:39.604 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4
00:27:39.604 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0
00:27:39.604 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4
00:27:39.604 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:39.604 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:27:39.604 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:39.604 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:27:39.604 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5
00:27:39.864 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s)
00:27:39.864 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection
-- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5
00:27:39.864 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0
00:27:39.864 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:27:39.864 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5
00:27:39.864 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:27:39.864 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5
00:27:39.864 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0
00:27:39.864 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5
00:27:39.864 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:39.864 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:27:39.864 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:39.864 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:27:39.864 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6
00:27:39.864 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s)
00:27:39.864 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6
00:27:39.865 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0
00:27:39.865 15:45:19
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:27:39.865 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6
00:27:39.865 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:27:39.865 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6
00:27:39.865 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0
00:27:39.865 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6
00:27:39.865 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:39.865 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:27:39.865 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:39.865 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:27:39.865 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7
00:27:40.125 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s)
00:27:40.125 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7
00:27:40.125 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0
00:27:40.125 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:27:40.125 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep
-q -w SPDK7
00:27:40.125 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:27:40.125 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7
00:27:40.125 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0
00:27:40.125 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7
00:27:40.125 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:40.125 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:27:40.125 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:40.125 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:27:40.125 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8
00:27:40.386 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s)
00:27:40.386 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8
00:27:40.386 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0
00:27:40.386 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:27:40.386 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8
00:27:40.386 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8
00:27:40.386 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection --
common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:27:40.386 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0
00:27:40.386 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8
00:27:40.386 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:40.386 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:27:40.386 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:40.386 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:27:40.386 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9
00:27:40.647 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s)
00:27:40.647 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9
00:27:40.647 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0
00:27:40.647 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:27:40.647 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9
00:27:40.647 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:27:40.647 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9
00:27:40.647 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0
00:27:40.647 15:45:19
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9
00:27:40.647 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:40.647 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:27:40.647 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:40.647 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:27:40.647 15:45:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10
00:27:40.908 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s)
00:27:40.908 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10
00:27:40.908 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0
00:27:40.908 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:27:40.908 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10
00:27:40.908 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:27:40.908 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10
00:27:40.908 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0
00:27:40.908 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10
00:27:40.908 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection --
common/autotest_common.sh@561 -- # xtrace_disable
00:27:40.908 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:27:40.908 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:40.908 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:27:40.908 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11
00:27:40.908 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s)
00:27:40.908 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11
00:27:40.908 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0
00:27:40.908 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:27:40.908 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11
00:27:40.908 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:27:40.908 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11
00:27:40.908 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0
00:27:40.908 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11
00:27:40.908 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:40.908 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:27:40.908 15:45:20
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:40.908 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state
00:27:40.908 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:27:40.908 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini
00:27:40.908 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # nvmfcleanup
00:27:40.908 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync
00:27:40.908 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:27:40.908 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e
00:27:40.908 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20}
00:27:40.908 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:27:40.908 rmmod nvme_tcp
00:27:40.908 rmmod nvme_fabrics
00:27:40.908 rmmod nvme_keyring
00:27:41.169 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:27:41.169 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e
00:27:41.169 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0
00:27:41.169 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@513 -- # '[' -n 3222962 ']'
00:27:41.169 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@514 -- # killprocess 3222962
00:27:41.169 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@950 -- # '[' -z 3222962 ']'
00:27:41.169 15:45:20
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # kill -0 3222962
00:27:41.169 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # uname
00:27:41.169 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:27:41.169 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3222962
00:27:41.170 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:27:41.170 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:27:41.170 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3222962'
00:27:41.170 killing process with pid 3222962
00:27:41.170 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@969 -- # kill 3222962
00:27:41.170 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@974 -- # wait 3222962
00:27:41.431 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:27:41.431 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
00:27:41.431 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # nvmf_tcp_fini
00:27:41.431 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr
00:27:41.431 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # iptables-save
00:27:41.431 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF
00:27:41.431 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # iptables-restore
00:27:41.431 15:45:20
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:27:41.432 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns
00:27:41.432 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:41.432 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:41.432 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:43.346 15:45:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:27:43.346
00:27:43.346 real 1m18.298s
00:27:43.346 user 4m55.965s
00:27:43.346 sys 0m16.761s
00:27:43.346 15:45:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1126 -- # xtrace_disable
00:27:43.346 15:45:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:27:43.346 ************************************
00:27:43.346 END TEST nvmf_multiconnection
00:27:43.346 ************************************
00:27:43.607 15:45:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp
00:27:43.607 15:45:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:27:43.607 15:45:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:27:43.607 15:45:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:27:43.607 ************************************
00:27:43.607 START TEST nvmf_initiator_timeout
00:27:43.607 ************************************
00:27:43.607 15:45:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout --
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp
00:27:43.607 * Looking for test storage...
00:27:43.607 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:27:43.607 15:45:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:27:43.607 15:45:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # lcov --version
00:27:43.607 15:45:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:27:43.607 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:27:43.607 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:27:43.607 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l
00:27:43.607 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l
00:27:43.607 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-:
00:27:43.607 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1
00:27:43.607 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-:
00:27:43.607 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2
00:27:43.607 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<'
00:27:43.607 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2
00:27:43.607 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1
00:27:43.607 15:45:23
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:27:43.607 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in
00:27:43.607 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1
00:27:43.607 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 ))
00:27:43.607 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:27:43.607 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1
00:27:43.607 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1
00:27:43.607 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:27:43.607 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1
00:27:43.607 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1
00:27:43.607 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2
00:27:43.869 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2
00:27:43.869 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:27:43.869 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2
00:27:43.869 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2
00:27:43.869 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:27:43.869 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:27:43.869 15:45:23
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:27:43.869 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:43.869 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:43.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:43.869 --rc genhtml_branch_coverage=1 00:27:43.869 --rc genhtml_function_coverage=1 00:27:43.869 --rc genhtml_legend=1 00:27:43.869 --rc geninfo_all_blocks=1 00:27:43.869 --rc geninfo_unexecuted_blocks=1 00:27:43.869 00:27:43.869 ' 00:27:43.869 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:43.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:43.869 --rc genhtml_branch_coverage=1 00:27:43.869 --rc genhtml_function_coverage=1 00:27:43.869 --rc genhtml_legend=1 00:27:43.869 --rc geninfo_all_blocks=1 00:27:43.869 --rc geninfo_unexecuted_blocks=1 00:27:43.869 00:27:43.869 ' 00:27:43.869 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:43.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:43.869 --rc genhtml_branch_coverage=1 00:27:43.869 --rc genhtml_function_coverage=1 00:27:43.869 --rc genhtml_legend=1 00:27:43.869 --rc geninfo_all_blocks=1 00:27:43.869 --rc geninfo_unexecuted_blocks=1 00:27:43.869 00:27:43.869 ' 00:27:43.869 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:43.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:43.869 --rc genhtml_branch_coverage=1 00:27:43.869 --rc genhtml_function_coverage=1 00:27:43.869 --rc genhtml_legend=1 00:27:43.869 --rc geninfo_all_blocks=1 00:27:43.869 --rc geninfo_unexecuted_blocks=1 00:27:43.869 
00:27:43.869 ' 00:27:43.869 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:43.869 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:27:43.869 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:43.869 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:43.869 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:43.869 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:43.869 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:43.869 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:43.869 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:43.869 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:43.869 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:43.869 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:43.869 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:43.869 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:43.869 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:27:43.869 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:43.869 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:43.869 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:43.869 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:43.869 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:27:43.869 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:43.869 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:43.869 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:43.869 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.869 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.869 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.869 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:27:43.869 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.869 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:27:43.869 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:43.869 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:43.870 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:43.870 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:43.870 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:43.870 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:43.870 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:43.870 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:43.870 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:43.870 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:43.870 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:43.870 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:43.870 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:27:43.870 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:27:43.870 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:43.870 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@472 -- # prepare_net_devs 00:27:43.870 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@434 -- # local -g is_hw=no 00:27:43.870 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@436 -- # remove_spdk_ns 00:27:43.870 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:43.870 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:43.870 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:43.870 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:27:43.870 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:27:43.870 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:27:43.870 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:52.015 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:52.015 15:45:30 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:27:52.015 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:52.015 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:52.015 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:52.015 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:52.015 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:27:52.016 
15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:52.016 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:52.016 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:27:52.016 15:45:30 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ up == up ]] 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:52.016 Found net devices under 0000:31:00.0: cvl_0_0 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ up == up ]] 00:27:52.016 15:45:30 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:52.016 Found net devices under 0000:31:00.1: cvl_0_1 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # is_hw=yes 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- 
# NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:52.016 15:45:30 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:52.016 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:52.016 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.645 ms 00:27:52.016 00:27:52.016 --- 10.0.0.2 ping statistics --- 00:27:52.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:52.016 rtt min/avg/max/mdev = 0.645/0.645/0.645/0.000 ms 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:52.016 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:52.016 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:27:52.016 00:27:52.016 --- 10.0.0.1 ping statistics --- 00:27:52.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:52.016 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # return 0 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:27:52.016 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:52.017 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:27:52.017 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:27:52.017 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:52.017 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:27:52.017 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:27:52.017 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:27:52.017 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:27:52.017 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:52.017 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:52.017 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@505 -- # nvmfpid=3239687 00:27:52.017 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@506 -- # waitforlisten 3239687 00:27:52.017 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:52.017 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # '[' -z 3239687 ']' 00:27:52.017 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:52.017 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:52.017 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:52.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:52.017 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:52.017 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:52.017 [2024-10-01 15:45:30.849938] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:27:52.017 [2024-10-01 15:45:30.850009] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:52.017 [2024-10-01 15:45:30.891791] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:52.017 [2024-10-01 15:45:30.939010] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:52.017 [2024-10-01 15:45:30.986590] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:52.017 [2024-10-01 15:45:30.986644] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:52.017 [2024-10-01 15:45:30.986652] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:52.017 [2024-10-01 15:45:30.986660] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:52.017 [2024-10-01 15:45:30.986666] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:52.017 [2024-10-01 15:45:30.986818] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:52.017 [2024-10-01 15:45:30.986964] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:27:52.017 [2024-10-01 15:45:30.987052] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:52.017 [2024-10-01 15:45:30.987052] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:27:52.276 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:52.276 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # return 0 00:27:52.276 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:27:52.276 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:52.276 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:52.276 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:52.276 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:27:52.276 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:52.276 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.276 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:52.536 Malloc0 00:27:52.536 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.536 15:45:31 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:27:52.536 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.536 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:52.536 Delay0 00:27:52.536 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.536 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:52.536 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.536 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:52.536 [2024-10-01 15:45:31.770075] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:52.536 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.536 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:27:52.536 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.536 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:52.536 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.536 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:52.536 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.536 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:52.536 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.536 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:52.536 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.536 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:52.536 [2024-10-01 15:45:31.810507] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:52.536 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.537 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:54.447 15:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:27:54.447 15:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:27:54.447 15:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:54.447 15:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:54.447 15:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:27:56.357 15:45:35 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:56.357 15:45:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:56.357 15:45:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:27:56.357 15:45:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:56.357 15:45:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:56.357 15:45:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:27:56.357 15:45:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=3240722 00:27:56.357 15:45:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:27:56.357 15:45:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:27:56.357 [global] 00:27:56.357 thread=1 00:27:56.357 invalidate=1 00:27:56.357 rw=write 00:27:56.357 time_based=1 00:27:56.357 runtime=60 00:27:56.357 ioengine=libaio 00:27:56.357 direct=1 00:27:56.357 bs=4096 00:27:56.357 iodepth=1 00:27:56.357 norandommap=0 00:27:56.357 numjobs=1 00:27:56.357 00:27:56.357 verify_dump=1 00:27:56.357 verify_backlog=512 00:27:56.357 verify_state_save=0 00:27:56.357 do_verify=1 00:27:56.357 verify=crc32c-intel 00:27:56.357 [job0] 00:27:56.357 filename=/dev/nvme0n1 00:27:56.357 Could not set queue depth (nvme0n1) 00:27:56.357 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:56.357 fio-3.35 00:27:56.357 Starting 1 thread 00:27:59.660 15:45:38 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:27:59.660 15:45:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.660 15:45:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:59.660 true 00:27:59.660 15:45:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.660 15:45:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:27:59.660 15:45:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.660 15:45:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:59.660 true 00:27:59.660 15:45:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.660 15:45:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:27:59.660 15:45:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.660 15:45:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:59.660 true 00:27:59.660 15:45:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.660 15:45:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:27:59.660 15:45:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.660 15:45:38 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:59.660 true 00:27:59.660 15:45:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.660 15:45:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:28:02.200 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:28:02.200 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.200 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:02.200 true 00:28:02.200 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.200 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:28:02.200 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.200 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:02.200 true 00:28:02.200 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.200 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:28:02.200 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.200 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:02.200 true 00:28:02.200 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:28:02.200 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:28:02.200 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.200 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:02.200 true 00:28:02.200 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.200 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:28:02.200 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 3240722 00:28:58.635 00:28:58.635 job0: (groupid=0, jobs=1): err= 0: pid=3240887: Tue Oct 1 15:46:35 2024 00:28:58.635 read: IOPS=8, BW=32.1KiB/s (32.9kB/s)(1928KiB/60006msec) 00:28:58.635 slat (usec): min=8, max=6652, avg=40.02, stdev=301.84 00:28:58.635 clat (usec): min=746, max=41838k, avg=123790.32, stdev=1904027.54 00:28:58.635 lat (usec): min=773, max=41838k, avg=123830.34, stdev=1904026.93 00:28:58.635 clat percentiles (usec): 00:28:58.635 | 1.00th=[ 865], 5.00th=[ 1037], 10.00th=[ 1123], 00:28:58.635 | 20.00th=[ 41157], 30.00th=[ 41157], 40.00th=[ 41157], 00:28:58.635 | 50.00th=[ 41681], 60.00th=[ 42206], 70.00th=[ 42206], 00:28:58.635 | 80.00th=[ 42206], 90.00th=[ 42206], 95.00th=[ 42206], 00:28:58.635 | 99.00th=[ 42730], 99.50th=[ 43254], 99.90th=[17112761], 00:28:58.635 | 99.95th=[17112761], 99.99th=[17112761] 00:28:58.635 write: IOPS=8, BW=34.1KiB/s (34.9kB/s)(2048KiB/60006msec); 0 zone resets 00:28:58.635 slat (nsec): min=9491, max=60692, avg=28156.63, stdev=9748.53 00:28:58.635 clat (usec): min=259, max=820, avg=579.76, stdev=99.49 00:28:58.635 lat (usec): min=270, max=853, avg=607.92, stdev=103.97 00:28:58.635 clat percentiles (usec): 00:28:58.635 | 1.00th=[ 343], 5.00th=[ 396], 
10.00th=[ 441], 20.00th=[ 494], 00:28:58.635 | 30.00th=[ 537], 40.00th=[ 570], 50.00th=[ 586], 60.00th=[ 603], 00:28:58.635 | 70.00th=[ 644], 80.00th=[ 676], 90.00th=[ 709], 95.00th=[ 725], 00:28:58.635 | 99.00th=[ 758], 99.50th=[ 791], 99.90th=[ 824], 99.95th=[ 824], 00:28:58.635 | 99.99th=[ 824] 00:28:58.635 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:28:58.635 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:28:58.635 lat (usec) : 500=10.87%, 750=39.84%, 1000=2.52% 00:28:58.635 lat (msec) : 2=3.72%, 50=42.96%, >=2000=0.10% 00:28:58.635 cpu : usr=0.03%, sys=0.04%, ctx=995, majf=0, minf=1 00:28:58.635 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:58.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:58.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:58.635 issued rwts: total=482,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:58.635 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:58.635 00:28:58.635 Run status group 0 (all jobs): 00:28:58.635 READ: bw=32.1KiB/s (32.9kB/s), 32.1KiB/s-32.1KiB/s (32.9kB/s-32.9kB/s), io=1928KiB (1974kB), run=60006-60006msec 00:28:58.635 WRITE: bw=34.1KiB/s (34.9kB/s), 34.1KiB/s-34.1KiB/s (34.9kB/s-34.9kB/s), io=2048KiB (2097kB), run=60006-60006msec 00:28:58.635 00:28:58.635 Disk stats (read/write): 00:28:58.635 nvme0n1: ios=578/512, merge=0/0, ticks=17840/284, in_queue=18124, util=99.57% 00:28:58.635 15:46:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:58.635 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:58.635 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:28:58.635 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@1219 -- # local i=0 00:28:58.635 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:58.635 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:58.635 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:58.635 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:58.635 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:28:58.635 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:28:58.635 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:28:58.635 nvmf hotplug test: fio successful as expected 00:28:58.635 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:58.635 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.635 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:58.635 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.635 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:28:58.635 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:28:58.635 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # 
nvmftestfini 00:28:58.635 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # nvmfcleanup 00:28:58.635 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:28:58.635 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:58.635 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:28:58.635 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:58.635 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:58.635 rmmod nvme_tcp 00:28:58.635 rmmod nvme_fabrics 00:28:58.635 rmmod nvme_keyring 00:28:58.635 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:58.635 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:28:58.635 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:28:58.635 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@513 -- # '[' -n 3239687 ']' 00:28:58.635 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@514 -- # killprocess 3239687 00:28:58.636 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # '[' -z 3239687 ']' 00:28:58.636 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # kill -0 3239687 00:28:58.636 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # uname 00:28:58.636 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:58.636 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
3239687 00:28:58.636 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:58.636 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:58.636 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3239687' 00:28:58.636 killing process with pid 3239687 00:28:58.636 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@969 -- # kill 3239687 00:28:58.636 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@974 -- # wait 3239687 00:28:58.636 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:28:58.636 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:28:58.636 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:28:58.636 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:28:58.636 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # iptables-save 00:28:58.636 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:28:58.636 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # iptables-restore 00:28:58.636 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:58.636 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:58.636 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:58.636 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout 
-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:58.636 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:59.207 15:46:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:59.207 00:28:59.207 real 1m15.602s 00:28:59.207 user 4m36.662s 00:28:59.207 sys 0m7.693s 00:28:59.207 15:46:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:59.207 15:46:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:59.207 ************************************ 00:28:59.207 END TEST nvmf_initiator_timeout 00:28:59.207 ************************************ 00:28:59.207 15:46:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:28:59.207 15:46:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:28:59.207 15:46:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:28:59.207 15:46:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:28:59.207 15:46:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:07.344 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:07.344 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:29:07.344 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:07.344 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:07.344 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:07.344 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:07.344 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:29:07.344 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:29:07.344 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:07.344 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:29:07.344 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:29:07.344 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:29:07.344 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:29:07.344 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:29:07.344 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:29:07.344 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:07.344 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:07.344 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:07.344 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:07.344 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:07.344 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:07.344 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:07.344 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:07.345 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:07.345 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:07.345 15:46:45 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:07.345 Found net devices under 0000:31:00.0: cvl_0_0 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ up 
== up ]] 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:07.345 Found net devices under 0000:31:00.1: cvl_0_1 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:07.345 ************************************ 00:29:07.345 START TEST nvmf_perf_adq 00:29:07.345 ************************************ 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:29:07.345 * Looking for test storage... 
00:29:07.345 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # lcov --version 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:29:07.345 15:46:45 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:07.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.345 --rc 
genhtml_branch_coverage=1 00:29:07.345 --rc genhtml_function_coverage=1 00:29:07.345 --rc genhtml_legend=1 00:29:07.345 --rc geninfo_all_blocks=1 00:29:07.345 --rc geninfo_unexecuted_blocks=1 00:29:07.345 00:29:07.345 ' 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:07.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.345 --rc genhtml_branch_coverage=1 00:29:07.345 --rc genhtml_function_coverage=1 00:29:07.345 --rc genhtml_legend=1 00:29:07.345 --rc geninfo_all_blocks=1 00:29:07.345 --rc geninfo_unexecuted_blocks=1 00:29:07.345 00:29:07.345 ' 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:07.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.345 --rc genhtml_branch_coverage=1 00:29:07.345 --rc genhtml_function_coverage=1 00:29:07.345 --rc genhtml_legend=1 00:29:07.345 --rc geninfo_all_blocks=1 00:29:07.345 --rc geninfo_unexecuted_blocks=1 00:29:07.345 00:29:07.345 ' 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:07.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.345 --rc genhtml_branch_coverage=1 00:29:07.345 --rc genhtml_function_coverage=1 00:29:07.345 --rc genhtml_legend=1 00:29:07.345 --rc geninfo_all_blocks=1 00:29:07.345 --rc geninfo_unexecuted_blocks=1 00:29:07.345 00:29:07.345 ' 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:07.345 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:29:07.346 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:07.346 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:07.346 15:46:45 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:07.346 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:07.346 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:07.346 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:07.346 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:07.346 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:07.346 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:07.346 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:07.346 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:07.346 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:07.346 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:07.346 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:07.346 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:07.346 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:07.346 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:07.346 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:29:07.346 15:46:45 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:07.346 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:07.346 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:07.346 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.346 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.346 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.346 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:29:07.346 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.346 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:29:07.346 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:07.346 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:07.346 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:07.346 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:07.346 15:46:45 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:07.346 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:07.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:07.346 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:07.346 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:07.346 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:07.346 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:29:07.346 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:29:07.346 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:13.928 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:13.928 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:29:13.928 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:13.928 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:13.928 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:13.928 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:13.928 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:13.929 15:46:53 
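(Annotator note.) The trace above records a real, non-fatal script error: `nvmf/common.sh` line 33 evaluates `'[' '' -eq 1 ']'`, and `-eq` against an empty string produces `[: : integer expression expected`; the test then evaluates false and the run continues. A minimal reproduction and the usual guard (defaulting the possibly-empty toggle to 0) look like this; `flag` is a stand-in name, not the variable used by `common.sh`:

```shell
# Reproduce the "integer expression expected" complaint from the trace,
# then guard it by defaulting the possibly-empty variable to 0.
flag=""                                        # unset/empty toggle variable
[ "$flag" -eq 1 ] 2>/dev/null && echo "on"     # errors; test is simply false
[ "${flag:-0}" -eq 1 ] && echo "on" || echo "flag unset or 0"
```

With the `${flag:-0}` guard the comparison is always numeric, so the stderr noise in the trace disappears without changing the branch taken.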
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:13.929 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:13.929 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == 
unknown ]] 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:13.929 Found net devices under 0000:31:00.0: cvl_0_0 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:13.929 15:46:53 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:13.929 Found net devices under 0000:31:00.1: cvl_0_1 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:29:13.929 15:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:29:15.844 15:46:54 
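(Annotator note.) The device discovery traced above (`nvmf/common.sh@406`-`@425`) globs `/sys/bus/pci/devices/$pci/net/*` for each supported NIC and strips the directory prefix with `${pci_net_devs[@]##*/}` to get interface names like `cvl_0_0`. The same glob-and-strip, simulated against a throwaway directory tree (the temp layout is only a stand-in for sysfs; PCI addresses and device names mirror the log):

```shell
# Simulate the sysfs walk from the trace: one net device per PCI function.
d=$(mktemp -d)
mkdir -p "$d/0000:31:00.0/net/cvl_0_0" "$d/0000:31:00.1/net/cvl_0_1"
for pci in 0000:31:00.0 0000:31:00.1; do
  for net in "$d/$pci/net/"*; do
    # ${net##*/} strips the leading path, like pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "Found net devices under $pci: ${net##*/}"
  done
done
rm -rf "$d"
```

On the real machine the glob runs under the ice driver's sysfs entries, which is why the trace prints `Found net devices under 0000:31:00.0: cvl_0_0` and the `.1` twin immediately after each PCI match.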
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:29:17.765 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:29:23.052 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:29:23.052 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:29:23.052 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:23.052 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # prepare_net_devs 00:29:23.052 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@434 -- # local -g is_hw=no 00:29:23.052 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # remove_spdk_ns 00:29:23.052 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:23.052 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:23.052 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:23.052 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:29:23.052 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:29:23.052 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:29:23.052 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:23.052 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:23.052 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:29:23.052 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
local -a pci_devs 00:29:23.052 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:23.052 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:23.052 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:23.052 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:23.052 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:29:23.052 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:23.052 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:29:23.052 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:29:23.052 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:29:23.052 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:23.053 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:23.053 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:23.053 15:47:02 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:23.053 Found net devices under 0000:31:00.0: cvl_0_0 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:23.053 Found net devices under 0000:31:00.1: cvl_0_1 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # is_hw=yes 
00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:23.053 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:23.053 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.570 ms 00:29:23.053 00:29:23.053 --- 10.0.0.2 ping statistics --- 00:29:23.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:23.053 rtt min/avg/max/mdev = 0.570/0.570/0.570/0.000 ms 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:23.053 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:23.053 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:29:23.053 00:29:23.053 --- 10.0.0.1 ping statistics --- 00:29:23.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:23.053 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # return 0 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:29:23.053 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:29:23.314 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:29:23.314 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:29:23.314 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:23.314 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:23.314 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # nvmfpid=3262016 00:29:23.314 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # waitforlisten 3262016 00:29:23.314 
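(Annotator note.) `nvmf_tcp_init`, traced above, moves the target-side port (`cvl_0_0`) into a network namespace so initiator and target can exchange real TCP traffic on a single host, then opens port 4420 and ping-checks both directions. A dry-run sketch that only prints the command sequence (interface names and IPs copied from the trace; run the printed commands as root to reproduce the topology):

```shell
# Print (do not execute) the namespace plumbing from nvmf_tcp_init.
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0; INI_IF=cvl_0_1
TGT_IP=10.0.0.2; INI_IP=10.0.0.1
cat <<EOF
ip netns add $NS
ip link set $TGT_IF netns $NS
ip addr add $INI_IP/24 dev $INI_IF
ip netns exec $NS ip addr add $TGT_IP/24 dev $TGT_IF
ip link set $INI_IF up
ip netns exec $NS ip link set $TGT_IF up
iptables -I INPUT 1 -i $INI_IF -p tcp --dport 4420 -j ACCEPT
ping -c 1 $TGT_IP
EOF
```

This is why the target app is launched below as `ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt ...`: every subsequent target-side command has to run inside that namespace to see `cvl_0_0`.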
15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:29:23.314 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 3262016 ']'
00:29:23.314 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:23.314 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100
00:29:23.314 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:23.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:23.314 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable
00:29:23.314 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:29:23.314 [2024-10-01 15:47:02.617580] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization...
00:29:23.314 [2024-10-01 15:47:02.617643] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:23.314 [2024-10-01 15:47:02.659737] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:29:23.314 [2024-10-01 15:47:02.709828] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:29:23.314 [2024-10-01 15:47:02.757737] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:23.314 [2024-10-01 15:47:02.757792] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:23.314 [2024-10-01 15:47:02.757801] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:23.314 [2024-10-01 15:47:02.757807] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:23.314 [2024-10-01 15:47:02.757814] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:23.314 [2024-10-01 15:47:02.757974] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:29:23.314 [2024-10-01 15:47:02.758022] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:29:23.314 [2024-10-01 15:47:02.758174] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:29:23.314 [2024-10-01 15:47:02.758176] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:29:24.258 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:29:24.258 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0
00:29:24.258 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt
00:29:24.258 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable
00:29:24.258 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:29:24.258 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:24.258 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0
00:29:24.258 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name
00:29:24.258 15:47:03
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:29:24.258 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.258 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:24.258 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.258 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:29:24.258 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:29:24.258 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.258 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:24.258 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.258 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:29:24.258 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.258 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:24.258 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.258 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:29:24.258 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.258 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:24.258 [2024-10-01 15:47:03.646071] tcp.c: 738:nvmf_tcp_create: 
*NOTICE*: *** TCP Transport Init *** 00:29:24.258 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.258 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:24.258 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.258 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:24.258 Malloc1 00:29:24.258 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.258 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:24.258 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.258 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:24.258 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.258 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:24.258 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.258 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:24.258 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.258 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:24.258 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.258 15:47:03 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:29:24.258 [2024-10-01 15:47:03.711804] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:24.519 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:24.519 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=3262353
00:29:24.519 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2
00:29:24.519 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:29:26.434 15:47:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats
00:29:26.434 15:47:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:26.434 15:47:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:29:26.434 15:47:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:26.434 15:47:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{
00:29:26.434 "tick_rate": 2400000000,
00:29:26.434 "poll_groups": [
00:29:26.434 {
00:29:26.434 "name": "nvmf_tgt_poll_group_000",
00:29:26.434 "admin_qpairs": 1,
00:29:26.434 "io_qpairs": 1,
00:29:26.434 "current_admin_qpairs": 1,
00:29:26.434 "current_io_qpairs": 1,
00:29:26.434 "pending_bdev_io": 0,
00:29:26.434 "completed_nvme_io": 17567,
00:29:26.434 "transports": [
00:29:26.434 {
00:29:26.434 "trtype": "TCP"
00:29:26.434 }
00:29:26.434 ]
00:29:26.434 },
00:29:26.434 {
00:29:26.434 "name": "nvmf_tgt_poll_group_001",
00:29:26.434 "admin_qpairs": 0,
00:29:26.434 "io_qpairs": 1,
00:29:26.434 "current_admin_qpairs": 0,
00:29:26.434 "current_io_qpairs": 1,
00:29:26.434 "pending_bdev_io": 0,
00:29:26.434 "completed_nvme_io": 17488,
00:29:26.434 "transports": [
00:29:26.434 {
00:29:26.434 "trtype": "TCP"
00:29:26.434 }
00:29:26.434 ]
00:29:26.434 },
00:29:26.434 {
00:29:26.434 "name": "nvmf_tgt_poll_group_002",
00:29:26.434 "admin_qpairs": 0,
00:29:26.434 "io_qpairs": 1,
00:29:26.434 "current_admin_qpairs": 0,
00:29:26.434 "current_io_qpairs": 1,
00:29:26.434 "pending_bdev_io": 0,
00:29:26.434 "completed_nvme_io": 16872,
00:29:26.434 "transports": [
00:29:26.434 {
00:29:26.434 "trtype": "TCP"
00:29:26.434 }
00:29:26.434 ]
00:29:26.434 },
00:29:26.434 {
00:29:26.434 "name": "nvmf_tgt_poll_group_003",
00:29:26.434 "admin_qpairs": 0,
00:29:26.434 "io_qpairs": 1,
00:29:26.434 "current_admin_qpairs": 0,
00:29:26.434 "current_io_qpairs": 1,
00:29:26.434 "pending_bdev_io": 0,
00:29:26.434 "completed_nvme_io": 16888,
00:29:26.434 "transports": [
00:29:26.434 {
00:29:26.434 "trtype": "TCP"
00:29:26.434 }
00:29:26.434 ]
00:29:26.434 }
00:29:26.434 ]
00:29:26.434 }'
00:29:26.434 15:47:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length'
00:29:26.434 15:47:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l
00:29:26.434 15:47:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4
00:29:26.434 15:47:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]]
00:29:26.434 15:47:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 3262353
00:29:34.567 Initializing NVMe Controllers
00:29:34.567 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:34.567 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:29:34.567 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with
lcore 5
00:29:34.567 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:29:34.567 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:29:34.567 Initialization complete. Launching workers.
00:29:34.567 ========================================================
00:29:34.567 Latency(us)
00:29:34.567 Device Information : IOPS MiB/s Average min max
00:29:34.567 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 12527.50 48.94 5109.91 1364.30 9909.63
00:29:34.567 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 13426.70 52.45 4776.19 1371.45 44172.62
00:29:34.567 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13089.20 51.13 4889.38 1260.88 13988.85
00:29:34.567 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13275.40 51.86 4820.25 1299.71 14055.50
00:29:34.567 ========================================================
00:29:34.567 Total : 52318.79 204.37 4895.60 1260.88 44172.62
00:29:34.567
00:29:34.567 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini
00:29:34.567 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # nvmfcleanup
00:29:34.567 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync
00:29:34.567 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:34.567 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e
00:29:34.567 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:34.567 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:29:34.567 rmmod nvme_tcp
00:29:34.567 rmmod nvme_fabrics
00:29:34.567 rmmod nvme_keyring
00:29:34.567 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r
nvme-fabrics 00:29:34.567 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:29:34.567 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:29:34.567 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@513 -- # '[' -n 3262016 ']' 00:29:34.567 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # killprocess 3262016 00:29:34.567 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 3262016 ']' 00:29:34.567 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 3262016 00:29:34.567 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:29:34.567 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:34.567 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3262016 00:29:34.828 15:47:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:34.828 15:47:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:34.828 15:47:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3262016' 00:29:34.828 killing process with pid 3262016 00:29:34.828 15:47:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 3262016 00:29:34.828 15:47:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 3262016 00:29:34.828 15:47:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:29:34.828 15:47:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:29:34.828 15:47:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- 
# nvmf_tcp_fini 00:29:34.828 15:47:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:29:34.828 15:47:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # iptables-save 00:29:34.828 15:47:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:29:34.828 15:47:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # iptables-restore 00:29:34.828 15:47:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:34.828 15:47:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:34.828 15:47:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:34.828 15:47:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:34.828 15:47:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:37.371 15:47:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:37.371 15:47:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:29:37.371 15:47:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:29:37.371 15:47:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:29:38.756 15:47:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:29:40.675 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # prepare_net_devs 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@434 -- # local -g is_hw=no 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # remove_spdk_ns 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:45.965 
15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:45.965 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:45.965 15:47:25 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:45.965 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 
0000:31:00.0: cvl_0_0' 00:29:45.965 Found net devices under 0000:31:00.0: cvl_0_0 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:45.965 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:45.966 Found net devices under 0000:31:00.1: cvl_0_1 00:29:45.966 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:45.966 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:29:45.966 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # is_hw=yes 00:29:45.966 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:29:45.966 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:29:45.966 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:29:45.966 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:45.966 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:45.966 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:45.966 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:45.966 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:45.966 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:45.966 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:45.966 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:45.966 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:45.966 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:45.966 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:45.966 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:45.966 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:45.966 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:45.966 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:45.966 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:45.966 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0
00:29:45.966 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:29:45.966 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:29:45.966 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:29:45.966 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:29:45.966 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:29:45.966 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:29:45.966 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:29:45.966 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms
00:29:45.966
00:29:45.966 --- 10.0.0.2 ping statistics ---
00:29:45.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:45.966 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms
00:29:45.966 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:29:45.966 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:29:45.966 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms
00:29:45.966
00:29:45.966 --- 10.0.0.1 ping statistics ---
00:29:45.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:45.966 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms
00:29:45.966 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:29:45.966 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # return 0
00:29:45.966 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:29:45.966 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:29:45.966 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:29:45.966 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:29:45.966 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:29:45.966 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:29:45.966 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:29:45.966 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver
00:29:45.966 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
00:29:45.966 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
00:29:46.227 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1
00:29:46.227 net.core.busy_poll = 1
00:29:46.227 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq --
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:29:46.227 net.core.busy_read = 1 00:29:46.227 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:29:46.227 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:29:46.227 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:29:46.227 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:29:46.227 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:29:46.488 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:29:46.488 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:29:46.488 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:46.488 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:46.488 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # nvmfpid=3266853 00:29:46.488 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # waitforlisten 3266853 00:29:46.488 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:29:46.488 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 3266853 ']' 00:29:46.488 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:46.488 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:46.488 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:46.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:46.488 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:46.488 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:46.488 [2024-10-01 15:47:25.763216] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:29:46.489 [2024-10-01 15:47:25.763300] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:46.489 [2024-10-01 15:47:25.806230] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:46.489 [2024-10-01 15:47:25.854032] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:46.489 [2024-10-01 15:47:25.901870] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:46.489 [2024-10-01 15:47:25.901930] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
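The `adq_configure_driver` steps traced above reduce to the following sequence. This is a sketch only: it assumes an ADQ-capable Intel E810 NIC named `cvl_0_0` inside the `cvl_0_0_ns_spdk` network namespace, with the target listening on 10.0.0.2:4420, exactly as in this run; on other hardware the interface name, queue counts, and addresses would differ, and the commands would need to run inside the namespace (`ip netns exec cvl_0_0_ns_spdk …`) as the trace shows.

```shell
# Sketch of the ADQ driver setup from the trace above (assumes an E810 NIC
# named cvl_0_0 and an NVMe/TCP listener on 10.0.0.2:4420 -- not runnable
# without that hardware).

# Enable hardware traffic-class offload and disable packet-inspect optimization.
ethtool --offload cvl_0_0 hw-tc-offload on
ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off

# Turn on socket busy polling so reads spin briefly instead of sleeping.
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1

# Two traffic classes: TC0 = 2 queues starting at 0, TC1 = 2 queues starting
# at 2, offloaded to the NIC in channel mode.
tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
tc qdisc add dev cvl_0_0 ingress

# Steer NVMe/TCP traffic (dst 10.0.0.2:4420) into TC1 entirely in hardware.
tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
```

The `num_tc 2 map 0 1 queues 2@0 2@2` arguments split the device into two traffic classes (queues 0-1 and 2-3), and the `flower … skip_sw hw_tc 1` filter pins the NVMe/TCP flow into the second class in hardware, which is why the poll-group statistics gathered later in this run show I/O qpairs concentrated on only some poll groups.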
00:29:46.489 [2024-10-01 15:47:25.901939] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:46.489 [2024-10-01 15:47:25.901946] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:46.489 [2024-10-01 15:47:25.901952] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:46.489 [2024-10-01 15:47:25.902144] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:46.489 [2024-10-01 15:47:25.902307] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:29:46.489 [2024-10-01 15:47:25.902427] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:29:46.489 [2024-10-01 15:47:25.902428] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:47.432 15:47:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:47.432 15:47:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:29:47.432 15:47:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:29:47.432 15:47:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:47.432 15:47:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:47.432 15:47:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:47.432 15:47:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:29:47.432 15:47:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:29:47.432 15:47:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:29:47.432 15:47:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.432 15:47:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:47.432 15:47:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.432 15:47:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:29:47.432 15:47:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:29:47.432 15:47:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.432 15:47:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:47.432 15:47:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.432 15:47:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:29:47.432 15:47:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.432 15:47:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:47.432 15:47:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.432 15:47:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:29:47.432 15:47:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.432 15:47:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:47.433 [2024-10-01 15:47:26.787348] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:47.433 15:47:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.433 
15:47:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:47.433 15:47:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.433 15:47:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:47.433 Malloc1 00:29:47.433 15:47:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.433 15:47:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:47.433 15:47:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.433 15:47:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:47.433 15:47:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.433 15:47:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:47.433 15:47:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.433 15:47:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:47.433 15:47:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.433 15:47:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:47.433 15:47:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.433 15:47:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:47.433 [2024-10-01 15:47:26.853074] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:29:47.433 15:47:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.433 15:47:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=3267064 00:29:47.433 15:47:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:29:47.433 15:47:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:49.979 15:47:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:29:49.979 15:47:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.979 15:47:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:49.979 15:47:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:49.979 15:47:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:29:49.979 "tick_rate": 2400000000, 00:29:49.979 "poll_groups": [ 00:29:49.979 { 00:29:49.979 "name": "nvmf_tgt_poll_group_000", 00:29:49.979 "admin_qpairs": 1, 00:29:49.979 "io_qpairs": 1, 00:29:49.979 "current_admin_qpairs": 1, 00:29:49.979 "current_io_qpairs": 1, 00:29:49.979 "pending_bdev_io": 0, 00:29:49.979 "completed_nvme_io": 25583, 00:29:49.979 "transports": [ 00:29:49.979 { 00:29:49.979 "trtype": "TCP" 00:29:49.979 } 00:29:49.979 ] 00:29:49.979 }, 00:29:49.979 { 00:29:49.979 "name": "nvmf_tgt_poll_group_001", 00:29:49.979 "admin_qpairs": 0, 00:29:49.979 "io_qpairs": 3, 00:29:49.979 "current_admin_qpairs": 0, 00:29:49.979 "current_io_qpairs": 3, 00:29:49.979 "pending_bdev_io": 0, 00:29:49.979 "completed_nvme_io": 31131, 00:29:49.979 "transports": [ 
00:29:49.979 { 00:29:49.979 "trtype": "TCP" 00:29:49.979 } 00:29:49.979 ] 00:29:49.979 }, 00:29:49.979 { 00:29:49.979 "name": "nvmf_tgt_poll_group_002", 00:29:49.979 "admin_qpairs": 0, 00:29:49.979 "io_qpairs": 0, 00:29:49.979 "current_admin_qpairs": 0, 00:29:49.979 "current_io_qpairs": 0, 00:29:49.979 "pending_bdev_io": 0, 00:29:49.979 "completed_nvme_io": 0, 00:29:49.979 "transports": [ 00:29:49.979 { 00:29:49.979 "trtype": "TCP" 00:29:49.979 } 00:29:49.979 ] 00:29:49.979 }, 00:29:49.979 { 00:29:49.979 "name": "nvmf_tgt_poll_group_003", 00:29:49.979 "admin_qpairs": 0, 00:29:49.979 "io_qpairs": 0, 00:29:49.979 "current_admin_qpairs": 0, 00:29:49.979 "current_io_qpairs": 0, 00:29:49.979 "pending_bdev_io": 0, 00:29:49.979 "completed_nvme_io": 0, 00:29:49.979 "transports": [ 00:29:49.979 { 00:29:49.979 "trtype": "TCP" 00:29:49.979 } 00:29:49.979 ] 00:29:49.979 } 00:29:49.979 ] 00:29:49.979 }' 00:29:49.979 15:47:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:29:49.979 15:47:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:29:49.979 15:47:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:29:49.979 15:47:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:29:49.979 15:47:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 3267064 00:29:58.116 Initializing NVMe Controllers 00:29:58.116 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:58.116 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:29:58.116 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:29:58.116 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:29:58.116 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) 
NSID 1 with lcore 7 00:29:58.116 Initialization complete. Launching workers. 00:29:58.116 ======================================================== 00:29:58.116 Latency(us) 00:29:58.116 Device Information : IOPS MiB/s Average min max 00:29:58.116 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 17107.71 66.83 3741.38 896.39 45193.92 00:29:58.116 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9510.05 37.15 6729.25 1148.56 56778.43 00:29:58.116 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6096.37 23.81 10522.55 1209.16 59779.52 00:29:58.116 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5530.97 21.61 11581.98 1309.34 56825.71 00:29:58.116 ======================================================== 00:29:58.116 Total : 38245.09 149.39 6699.18 896.39 59779.52 00:29:58.116 00:29:58.116 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:29:58.116 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # nvmfcleanup 00:29:58.116 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:29:58.116 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:58.116 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:29:58.116 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:58.116 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:58.117 rmmod nvme_tcp 00:29:58.117 rmmod nvme_fabrics 00:29:58.117 rmmod nvme_keyring 00:29:58.117 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:58.117 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:29:58.117 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@129 -- # return 0 00:29:58.117 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@513 -- # '[' -n 3266853 ']' 00:29:58.117 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # killprocess 3266853 00:29:58.117 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 3266853 ']' 00:29:58.117 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 3266853 00:29:58.117 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:29:58.117 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:58.117 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3266853 00:29:58.117 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:58.117 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:58.117 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3266853' 00:29:58.117 killing process with pid 3266853 00:29:58.117 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 3266853 00:29:58.117 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 3266853 00:29:58.117 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:29:58.117 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:29:58.117 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:29:58.117 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:29:58.117 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@787 -- # iptables-save 00:29:58.117 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:29:58.117 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # iptables-restore 00:29:58.117 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:58.117 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:58.117 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:58.117 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:58.117 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:01.421 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:01.421 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:30:01.421 00:30:01.421 real 0m54.647s 00:30:01.421 user 2m49.862s 00:30:01.421 sys 0m11.810s 00:30:01.421 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:01.421 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:01.421 ************************************ 00:30:01.421 END TEST nvmf_perf_adq 00:30:01.421 ************************************ 00:30:01.421 15:47:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:30:01.421 15:47:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:01.421 15:47:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:30:01.421 15:47:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:30:01.421 ************************************ 00:30:01.421 START TEST nvmf_shutdown 00:30:01.421 ************************************ 00:30:01.421 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:30:01.421 * Looking for test storage... 00:30:01.421 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:01.421 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:01.421 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:30:01.421 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:01.421 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:01.421 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:01.421 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:01.421 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # 
ver1_l=2 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:01.422 15:47:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:01.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.422 --rc genhtml_branch_coverage=1 00:30:01.422 --rc genhtml_function_coverage=1 00:30:01.422 --rc genhtml_legend=1 00:30:01.422 --rc geninfo_all_blocks=1 00:30:01.422 --rc geninfo_unexecuted_blocks=1 00:30:01.422 00:30:01.422 ' 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:01.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.422 --rc genhtml_branch_coverage=1 00:30:01.422 --rc genhtml_function_coverage=1 00:30:01.422 --rc genhtml_legend=1 00:30:01.422 --rc geninfo_all_blocks=1 00:30:01.422 --rc geninfo_unexecuted_blocks=1 00:30:01.422 00:30:01.422 ' 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:01.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.422 --rc genhtml_branch_coverage=1 00:30:01.422 --rc genhtml_function_coverage=1 00:30:01.422 --rc genhtml_legend=1 00:30:01.422 --rc geninfo_all_blocks=1 00:30:01.422 --rc geninfo_unexecuted_blocks=1 00:30:01.422 00:30:01.422 ' 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:01.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.422 --rc genhtml_branch_coverage=1 00:30:01.422 --rc genhtml_function_coverage=1 00:30:01.422 --rc genhtml_legend=1 00:30:01.422 --rc geninfo_all_blocks=1 00:30:01.422 --rc geninfo_unexecuted_blocks=1 00:30:01.422 00:30:01.422 ' 00:30:01.422 15:47:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:01.422 15:47:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:01.422 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:01.422 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:01.423 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:01.423 ************************************ 00:30:01.423 START TEST nvmf_shutdown_tc1 00:30:01.423 ************************************ 00:30:01.423 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:30:01.423 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:30:01.423 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:30:01.423 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:30:01.423 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:01.423 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@472 -- # prepare_net_devs 00:30:01.423 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:30:01.423 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:30:01.423 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:01.423 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:30:01.423 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:01.423 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:30:01.423 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:30:01.423 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:30:01.423 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:09.569 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:09.569 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:30:09.569 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:09.569 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:09.569 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:09.569 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:09.569 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:09.569 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:30:09.569 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:09.569 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:30:09.569 15:47:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:30:09.569 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:30:09.569 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:30:09.569 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:30:09.569 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:30:09.569 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:09.569 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:09.569 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:09.569 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:09.569 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:09.569 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:09.569 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:09.569 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:09.569 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:09.569 15:47:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:09.570 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:09.570 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:09.570 15:47:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:09.570 Found net devices under 0000:31:00.0: cvl_0_0 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:09.570 Found net devices under 0000:31:00.1: cvl_0_1 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # is_hw=yes 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:09.570 15:47:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:09.570 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:09.570 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.664 ms 00:30:09.570 00:30:09.570 --- 10.0.0.2 ping statistics --- 00:30:09.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:09.570 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:09.570 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:09.570 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:30:09.570 00:30:09.570 --- 10.0.0.1 ping statistics --- 00:30:09.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:09.570 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # return 0 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:30:09.570 15:47:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@505 -- # nvmfpid=3273588 00:30:09.570 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@506 -- # waitforlisten 3273588 00:30:09.571 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:09.571 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 3273588 ']' 00:30:09.571 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:09.571 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:09.571 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:09.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:09.571 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:09.571 15:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:09.571 [2024-10-01 15:47:48.631094] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:30:09.571 [2024-10-01 15:47:48.631170] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:09.571 [2024-10-01 15:47:48.672809] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:09.571 [2024-10-01 15:47:48.722131] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:09.571 [2024-10-01 15:47:48.770109] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:09.571 [2024-10-01 15:47:48.770164] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:09.571 [2024-10-01 15:47:48.770172] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:09.571 [2024-10-01 15:47:48.770179] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:09.571 [2024-10-01 15:47:48.770185] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:09.571 [2024-10-01 15:47:48.770343] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:30:09.571 [2024-10-01 15:47:48.770499] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:30:09.571 [2024-10-01 15:47:48.770655] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:30:09.571 [2024-10-01 15:47:48.770658] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:30:10.143 15:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:10.143 15:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:30:10.143 15:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:30:10.143 15:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:10.143 15:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:10.143 15:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:10.143 15:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:10.143 15:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:10.143 15:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:10.143 [2024-10-01 15:47:49.508331] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:10.143 15:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:10.143 15:47:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:30:10.143 15:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:30:10.143 15:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:10.143 15:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:10.143 15:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:10.143 15:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:10.143 15:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:10.143 15:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:10.143 15:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:10.143 15:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:10.143 15:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:10.143 15:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:10.143 15:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:10.143 15:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:10.143 15:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:30:10.143 15:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:10.144 15:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:10.144 15:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:10.144 15:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:10.144 15:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:10.144 15:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:10.144 15:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:10.144 15:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:10.144 15:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:10.144 15:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:10.144 15:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:30:10.144 15:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:10.144 15:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:10.404 Malloc1 00:30:10.404 [2024-10-01 15:47:49.625825] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:10.404 Malloc2 00:30:10.404 Malloc3 00:30:10.404 Malloc4 00:30:10.404 Malloc5 00:30:10.404 Malloc6 00:30:10.665 Malloc7 00:30:10.665 Malloc8 00:30:10.665 Malloc9 
00:30:10.665 Malloc10 00:30:10.665 15:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:10.665 15:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:30:10.665 15:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:10.665 15:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:10.665 15:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3273842 00:30:10.665 15:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3273842 /var/tmp/bdevperf.sock 00:30:10.665 15:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 3273842 ']' 00:30:10.665 15:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:10.665 15:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:10.665 15:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:10.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:30:10.665 15:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:30:10.665 15:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:10.665 15:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:10.665 15:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:10.665 15:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # config=() 00:30:10.665 15:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # local subsystem config 00:30:10.665 15:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:10.665 15:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:10.665 { 00:30:10.665 "params": { 00:30:10.665 "name": "Nvme$subsystem", 00:30:10.665 "trtype": "$TEST_TRANSPORT", 00:30:10.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:10.665 "adrfam": "ipv4", 00:30:10.665 "trsvcid": "$NVMF_PORT", 00:30:10.665 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:10.665 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:10.665 "hdgst": ${hdgst:-false}, 00:30:10.665 "ddgst": ${ddgst:-false} 00:30:10.665 }, 00:30:10.665 "method": "bdev_nvme_attach_controller" 00:30:10.665 } 00:30:10.665 EOF 00:30:10.665 )") 00:30:10.665 15:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:30:10.665 15:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:10.665 15:47:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:10.665 { 00:30:10.665 "params": { 00:30:10.665 "name": "Nvme$subsystem", 00:30:10.665 "trtype": "$TEST_TRANSPORT", 00:30:10.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:10.665 "adrfam": "ipv4", 00:30:10.665 "trsvcid": "$NVMF_PORT", 00:30:10.665 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:10.665 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:10.665 "hdgst": ${hdgst:-false}, 00:30:10.665 "ddgst": ${ddgst:-false} 00:30:10.665 }, 00:30:10.665 "method": "bdev_nvme_attach_controller" 00:30:10.665 } 00:30:10.665 EOF 00:30:10.665 )") 00:30:10.665 15:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:30:10.665 15:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:10.665 15:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:10.665 { 00:30:10.665 "params": { 00:30:10.665 "name": "Nvme$subsystem", 00:30:10.665 "trtype": "$TEST_TRANSPORT", 00:30:10.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:10.665 "adrfam": "ipv4", 00:30:10.665 "trsvcid": "$NVMF_PORT", 00:30:10.665 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:10.665 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:10.665 "hdgst": ${hdgst:-false}, 00:30:10.665 "ddgst": ${ddgst:-false} 00:30:10.665 }, 00:30:10.665 "method": "bdev_nvme_attach_controller" 00:30:10.665 } 00:30:10.665 EOF 00:30:10.665 )") 00:30:10.665 15:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:30:10.927 15:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:10.927 15:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:10.927 { 
00:30:10.927 "params": { 00:30:10.927 "name": "Nvme$subsystem", 00:30:10.927 "trtype": "$TEST_TRANSPORT", 00:30:10.927 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:10.927 "adrfam": "ipv4", 00:30:10.927 "trsvcid": "$NVMF_PORT", 00:30:10.927 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:10.927 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:10.927 "hdgst": ${hdgst:-false}, 00:30:10.927 "ddgst": ${ddgst:-false} 00:30:10.927 }, 00:30:10.927 "method": "bdev_nvme_attach_controller" 00:30:10.927 } 00:30:10.927 EOF 00:30:10.927 )") 00:30:10.927 15:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:30:10.927 15:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:10.927 15:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:10.927 { 00:30:10.927 "params": { 00:30:10.927 "name": "Nvme$subsystem", 00:30:10.927 "trtype": "$TEST_TRANSPORT", 00:30:10.927 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:10.927 "adrfam": "ipv4", 00:30:10.927 "trsvcid": "$NVMF_PORT", 00:30:10.927 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:10.927 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:10.927 "hdgst": ${hdgst:-false}, 00:30:10.927 "ddgst": ${ddgst:-false} 00:30:10.927 }, 00:30:10.927 "method": "bdev_nvme_attach_controller" 00:30:10.927 } 00:30:10.927 EOF 00:30:10.927 )") 00:30:10.927 15:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:30:10.927 15:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:10.927 15:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:10.927 { 00:30:10.927 "params": { 00:30:10.927 "name": "Nvme$subsystem", 00:30:10.927 "trtype": "$TEST_TRANSPORT", 00:30:10.927 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:30:10.927 "adrfam": "ipv4", 00:30:10.927 "trsvcid": "$NVMF_PORT", 00:30:10.927 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:10.927 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:10.927 "hdgst": ${hdgst:-false}, 00:30:10.927 "ddgst": ${ddgst:-false} 00:30:10.927 }, 00:30:10.927 "method": "bdev_nvme_attach_controller" 00:30:10.927 } 00:30:10.927 EOF 00:30:10.927 )") 00:30:10.927 [2024-10-01 15:47:50.143384] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:30:10.927 15:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:30:10.927 [2024-10-01 15:47:50.143456] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:30:10.927 15:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:10.927 15:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:10.927 { 00:30:10.927 "params": { 00:30:10.927 "name": "Nvme$subsystem", 00:30:10.927 "trtype": "$TEST_TRANSPORT", 00:30:10.927 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:10.927 "adrfam": "ipv4", 00:30:10.927 "trsvcid": "$NVMF_PORT", 00:30:10.927 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:10.927 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:10.927 "hdgst": ${hdgst:-false}, 00:30:10.927 "ddgst": ${ddgst:-false} 00:30:10.927 }, 00:30:10.927 "method": "bdev_nvme_attach_controller" 00:30:10.927 } 00:30:10.927 EOF 00:30:10.927 )") 00:30:10.927 15:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:30:10.927 15:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 
00:30:10.927 15:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:10.927 { 00:30:10.927 "params": { 00:30:10.927 "name": "Nvme$subsystem", 00:30:10.927 "trtype": "$TEST_TRANSPORT", 00:30:10.927 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:10.927 "adrfam": "ipv4", 00:30:10.927 "trsvcid": "$NVMF_PORT", 00:30:10.927 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:10.927 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:10.927 "hdgst": ${hdgst:-false}, 00:30:10.927 "ddgst": ${ddgst:-false} 00:30:10.927 }, 00:30:10.927 "method": "bdev_nvme_attach_controller" 00:30:10.927 } 00:30:10.927 EOF 00:30:10.927 )") 00:30:10.927 15:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:30:10.927 15:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:10.928 15:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:10.928 { 00:30:10.928 "params": { 00:30:10.928 "name": "Nvme$subsystem", 00:30:10.928 "trtype": "$TEST_TRANSPORT", 00:30:10.928 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:10.928 "adrfam": "ipv4", 00:30:10.928 "trsvcid": "$NVMF_PORT", 00:30:10.928 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:10.928 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:10.928 "hdgst": ${hdgst:-false}, 00:30:10.928 "ddgst": ${ddgst:-false} 00:30:10.928 }, 00:30:10.928 "method": "bdev_nvme_attach_controller" 00:30:10.928 } 00:30:10.928 EOF 00:30:10.928 )") 00:30:10.928 15:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:30:10.928 15:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:10.928 15:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat 
<<-EOF 00:30:10.928 { 00:30:10.928 "params": { 00:30:10.928 "name": "Nvme$subsystem", 00:30:10.928 "trtype": "$TEST_TRANSPORT", 00:30:10.928 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:10.928 "adrfam": "ipv4", 00:30:10.928 "trsvcid": "$NVMF_PORT", 00:30:10.928 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:10.928 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:10.928 "hdgst": ${hdgst:-false}, 00:30:10.928 "ddgst": ${ddgst:-false} 00:30:10.928 }, 00:30:10.928 "method": "bdev_nvme_attach_controller" 00:30:10.928 } 00:30:10.928 EOF 00:30:10.928 )") 00:30:10.928 15:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:30:10.928 [2024-10-01 15:47:50.180610] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:10.928 15:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # jq . 00:30:10.928 15:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@581 -- # IFS=, 00:30:10.928 15:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:30:10.928 "params": { 00:30:10.928 "name": "Nvme1", 00:30:10.928 "trtype": "tcp", 00:30:10.928 "traddr": "10.0.0.2", 00:30:10.928 "adrfam": "ipv4", 00:30:10.928 "trsvcid": "4420", 00:30:10.928 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:10.928 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:10.928 "hdgst": false, 00:30:10.928 "ddgst": false 00:30:10.928 }, 00:30:10.928 "method": "bdev_nvme_attach_controller" 00:30:10.928 },{ 00:30:10.928 "params": { 00:30:10.928 "name": "Nvme2", 00:30:10.928 "trtype": "tcp", 00:30:10.928 "traddr": "10.0.0.2", 00:30:10.928 "adrfam": "ipv4", 00:30:10.928 "trsvcid": "4420", 00:30:10.928 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:10.928 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:10.928 "hdgst": false, 00:30:10.928 "ddgst": false 
00:30:10.928 }, 00:30:10.928 "method": "bdev_nvme_attach_controller" 00:30:10.928 },{ 00:30:10.928 "params": { 00:30:10.928 "name": "Nvme3", 00:30:10.928 "trtype": "tcp", 00:30:10.928 "traddr": "10.0.0.2", 00:30:10.928 "adrfam": "ipv4", 00:30:10.928 "trsvcid": "4420", 00:30:10.928 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:10.928 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:10.928 "hdgst": false, 00:30:10.928 "ddgst": false 00:30:10.928 }, 00:30:10.928 "method": "bdev_nvme_attach_controller" 00:30:10.928 },{ 00:30:10.928 "params": { 00:30:10.928 "name": "Nvme4", 00:30:10.928 "trtype": "tcp", 00:30:10.928 "traddr": "10.0.0.2", 00:30:10.928 "adrfam": "ipv4", 00:30:10.928 "trsvcid": "4420", 00:30:10.928 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:10.928 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:10.928 "hdgst": false, 00:30:10.928 "ddgst": false 00:30:10.928 }, 00:30:10.928 "method": "bdev_nvme_attach_controller" 00:30:10.928 },{ 00:30:10.928 "params": { 00:30:10.928 "name": "Nvme5", 00:30:10.928 "trtype": "tcp", 00:30:10.928 "traddr": "10.0.0.2", 00:30:10.928 "adrfam": "ipv4", 00:30:10.928 "trsvcid": "4420", 00:30:10.928 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:10.928 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:10.928 "hdgst": false, 00:30:10.928 "ddgst": false 00:30:10.928 }, 00:30:10.928 "method": "bdev_nvme_attach_controller" 00:30:10.928 },{ 00:30:10.928 "params": { 00:30:10.928 "name": "Nvme6", 00:30:10.928 "trtype": "tcp", 00:30:10.928 "traddr": "10.0.0.2", 00:30:10.928 "adrfam": "ipv4", 00:30:10.928 "trsvcid": "4420", 00:30:10.928 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:10.928 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:10.928 "hdgst": false, 00:30:10.928 "ddgst": false 00:30:10.928 }, 00:30:10.928 "method": "bdev_nvme_attach_controller" 00:30:10.928 },{ 00:30:10.928 "params": { 00:30:10.928 "name": "Nvme7", 00:30:10.928 "trtype": "tcp", 00:30:10.928 "traddr": "10.0.0.2", 00:30:10.928 "adrfam": "ipv4", 00:30:10.928 "trsvcid": "4420", 
00:30:10.928 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:10.928 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:10.928 "hdgst": false, 00:30:10.928 "ddgst": false 00:30:10.928 }, 00:30:10.928 "method": "bdev_nvme_attach_controller" 00:30:10.928 },{ 00:30:10.928 "params": { 00:30:10.928 "name": "Nvme8", 00:30:10.928 "trtype": "tcp", 00:30:10.928 "traddr": "10.0.0.2", 00:30:10.928 "adrfam": "ipv4", 00:30:10.928 "trsvcid": "4420", 00:30:10.928 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:10.928 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:10.928 "hdgst": false, 00:30:10.928 "ddgst": false 00:30:10.928 }, 00:30:10.928 "method": "bdev_nvme_attach_controller" 00:30:10.928 },{ 00:30:10.928 "params": { 00:30:10.928 "name": "Nvme9", 00:30:10.928 "trtype": "tcp", 00:30:10.928 "traddr": "10.0.0.2", 00:30:10.928 "adrfam": "ipv4", 00:30:10.928 "trsvcid": "4420", 00:30:10.928 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:10.928 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:10.928 "hdgst": false, 00:30:10.928 "ddgst": false 00:30:10.928 }, 00:30:10.928 "method": "bdev_nvme_attach_controller" 00:30:10.928 },{ 00:30:10.928 "params": { 00:30:10.928 "name": "Nvme10", 00:30:10.928 "trtype": "tcp", 00:30:10.928 "traddr": "10.0.0.2", 00:30:10.928 "adrfam": "ipv4", 00:30:10.928 "trsvcid": "4420", 00:30:10.928 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:10.928 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:10.928 "hdgst": false, 00:30:10.928 "ddgst": false 00:30:10.928 }, 00:30:10.928 "method": "bdev_nvme_attach_controller" 00:30:10.928 }' 00:30:10.928 [2024-10-01 15:47:50.231221] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:10.928 [2024-10-01 15:47:50.279043] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:12.312 15:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:12.312 15:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@864 -- # return 0 00:30:12.312 15:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:12.312 15:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.312 15:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:12.312 15:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.312 15:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3273842 00:30:12.312 15:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:30:12.312 15:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:30:13.254 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3273842 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:30:13.254 15:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3273588 00:30:13.254 15:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:30:13.254 15:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:13.254 15:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # config=() 00:30:13.254 15:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # local subsystem config 
00:30:13.254 15:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:13.254 15:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:13.254 { 00:30:13.254 "params": { 00:30:13.254 "name": "Nvme$subsystem", 00:30:13.254 "trtype": "$TEST_TRANSPORT", 00:30:13.254 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:13.254 "adrfam": "ipv4", 00:30:13.254 "trsvcid": "$NVMF_PORT", 00:30:13.254 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:13.254 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:13.254 "hdgst": ${hdgst:-false}, 00:30:13.254 "ddgst": ${ddgst:-false} 00:30:13.254 }, 00:30:13.254 "method": "bdev_nvme_attach_controller" 00:30:13.254 } 00:30:13.254 EOF 00:30:13.254 )") 00:30:13.254 15:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:30:13.254 15:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:13.254 15:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:13.254 { 00:30:13.254 "params": { 00:30:13.254 "name": "Nvme$subsystem", 00:30:13.254 "trtype": "$TEST_TRANSPORT", 00:30:13.254 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:13.254 "adrfam": "ipv4", 00:30:13.254 "trsvcid": "$NVMF_PORT", 00:30:13.254 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:13.254 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:13.254 "hdgst": ${hdgst:-false}, 00:30:13.254 "ddgst": ${ddgst:-false} 00:30:13.254 }, 00:30:13.254 "method": "bdev_nvme_attach_controller" 00:30:13.254 } 00:30:13.254 EOF 00:30:13.254 )") 00:30:13.254 15:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:30:13.254 15:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in 
"${@:-1}" 00:30:13.254 15:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:13.254 { 00:30:13.254 "params": { 00:30:13.254 "name": "Nvme$subsystem", 00:30:13.254 "trtype": "$TEST_TRANSPORT", 00:30:13.254 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:13.254 "adrfam": "ipv4", 00:30:13.254 "trsvcid": "$NVMF_PORT", 00:30:13.254 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:13.254 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:13.254 "hdgst": ${hdgst:-false}, 00:30:13.254 "ddgst": ${ddgst:-false} 00:30:13.254 }, 00:30:13.254 "method": "bdev_nvme_attach_controller" 00:30:13.254 } 00:30:13.254 EOF 00:30:13.254 )") 00:30:13.254 15:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:30:13.254 15:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:13.254 15:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:13.254 { 00:30:13.254 "params": { 00:30:13.254 "name": "Nvme$subsystem", 00:30:13.254 "trtype": "$TEST_TRANSPORT", 00:30:13.254 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:13.254 "adrfam": "ipv4", 00:30:13.254 "trsvcid": "$NVMF_PORT", 00:30:13.254 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:13.254 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:13.254 "hdgst": ${hdgst:-false}, 00:30:13.254 "ddgst": ${ddgst:-false} 00:30:13.254 }, 00:30:13.254 "method": "bdev_nvme_attach_controller" 00:30:13.254 } 00:30:13.254 EOF 00:30:13.254 )") 00:30:13.254 15:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:30:13.254 15:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:13.254 15:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # 
config+=("$(cat <<-EOF 00:30:13.254 { 00:30:13.254 "params": { 00:30:13.254 "name": "Nvme$subsystem", 00:30:13.254 "trtype": "$TEST_TRANSPORT", 00:30:13.254 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:13.254 "adrfam": "ipv4", 00:30:13.254 "trsvcid": "$NVMF_PORT", 00:30:13.254 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:13.254 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:13.254 "hdgst": ${hdgst:-false}, 00:30:13.254 "ddgst": ${ddgst:-false} 00:30:13.254 }, 00:30:13.254 "method": "bdev_nvme_attach_controller" 00:30:13.254 } 00:30:13.254 EOF 00:30:13.254 )") 00:30:13.254 15:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:30:13.254 15:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:13.254 15:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:13.254 { 00:30:13.254 "params": { 00:30:13.254 "name": "Nvme$subsystem", 00:30:13.254 "trtype": "$TEST_TRANSPORT", 00:30:13.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:13.255 "adrfam": "ipv4", 00:30:13.255 "trsvcid": "$NVMF_PORT", 00:30:13.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:13.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:13.255 "hdgst": ${hdgst:-false}, 00:30:13.255 "ddgst": ${ddgst:-false} 00:30:13.255 }, 00:30:13.255 "method": "bdev_nvme_attach_controller" 00:30:13.255 } 00:30:13.255 EOF 00:30:13.255 )") 00:30:13.255 15:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:30:13.255 [2024-10-01 15:47:52.628709] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
00:30:13.255 [2024-10-01 15:47:52.628764] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3274494 ] 00:30:13.255 15:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:13.255 15:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:13.255 { 00:30:13.255 "params": { 00:30:13.255 "name": "Nvme$subsystem", 00:30:13.255 "trtype": "$TEST_TRANSPORT", 00:30:13.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:13.255 "adrfam": "ipv4", 00:30:13.255 "trsvcid": "$NVMF_PORT", 00:30:13.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:13.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:13.255 "hdgst": ${hdgst:-false}, 00:30:13.255 "ddgst": ${ddgst:-false} 00:30:13.255 }, 00:30:13.255 "method": "bdev_nvme_attach_controller" 00:30:13.255 } 00:30:13.255 EOF 00:30:13.255 )") 00:30:13.255 15:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:30:13.255 15:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:13.255 15:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:13.255 { 00:30:13.255 "params": { 00:30:13.255 "name": "Nvme$subsystem", 00:30:13.255 "trtype": "$TEST_TRANSPORT", 00:30:13.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:13.255 "adrfam": "ipv4", 00:30:13.255 "trsvcid": "$NVMF_PORT", 00:30:13.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:13.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:13.255 "hdgst": ${hdgst:-false}, 00:30:13.255 "ddgst": ${ddgst:-false} 00:30:13.255 }, 00:30:13.255 "method": 
"bdev_nvme_attach_controller" 00:30:13.255 } 00:30:13.255 EOF 00:30:13.255 )") 00:30:13.255 15:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:30:13.255 15:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:13.255 15:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:13.255 { 00:30:13.255 "params": { 00:30:13.255 "name": "Nvme$subsystem", 00:30:13.255 "trtype": "$TEST_TRANSPORT", 00:30:13.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:13.255 "adrfam": "ipv4", 00:30:13.255 "trsvcid": "$NVMF_PORT", 00:30:13.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:13.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:13.255 "hdgst": ${hdgst:-false}, 00:30:13.255 "ddgst": ${ddgst:-false} 00:30:13.255 }, 00:30:13.255 "method": "bdev_nvme_attach_controller" 00:30:13.255 } 00:30:13.255 EOF 00:30:13.255 )") 00:30:13.255 15:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:30:13.255 15:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:13.255 15:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:13.255 { 00:30:13.255 "params": { 00:30:13.255 "name": "Nvme$subsystem", 00:30:13.255 "trtype": "$TEST_TRANSPORT", 00:30:13.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:13.255 "adrfam": "ipv4", 00:30:13.255 "trsvcid": "$NVMF_PORT", 00:30:13.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:13.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:13.255 "hdgst": ${hdgst:-false}, 00:30:13.255 "ddgst": ${ddgst:-false} 00:30:13.255 }, 00:30:13.255 "method": "bdev_nvme_attach_controller" 00:30:13.255 } 00:30:13.255 EOF 00:30:13.255 )") 00:30:13.255 15:47:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:30:13.255 [2024-10-01 15:47:52.661468] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:13.255 15:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # jq . 00:30:13.255 15:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@581 -- # IFS=, 00:30:13.255 15:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:30:13.255 "params": { 00:30:13.255 "name": "Nvme1", 00:30:13.255 "trtype": "tcp", 00:30:13.255 "traddr": "10.0.0.2", 00:30:13.255 "adrfam": "ipv4", 00:30:13.255 "trsvcid": "4420", 00:30:13.255 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:13.255 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:13.255 "hdgst": false, 00:30:13.255 "ddgst": false 00:30:13.255 }, 00:30:13.255 "method": "bdev_nvme_attach_controller" 00:30:13.255 },{ 00:30:13.255 "params": { 00:30:13.255 "name": "Nvme2", 00:30:13.255 "trtype": "tcp", 00:30:13.255 "traddr": "10.0.0.2", 00:30:13.255 "adrfam": "ipv4", 00:30:13.255 "trsvcid": "4420", 00:30:13.255 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:13.255 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:13.255 "hdgst": false, 00:30:13.255 "ddgst": false 00:30:13.255 }, 00:30:13.255 "method": "bdev_nvme_attach_controller" 00:30:13.255 },{ 00:30:13.255 "params": { 00:30:13.255 "name": "Nvme3", 00:30:13.255 "trtype": "tcp", 00:30:13.255 "traddr": "10.0.0.2", 00:30:13.255 "adrfam": "ipv4", 00:30:13.255 "trsvcid": "4420", 00:30:13.255 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:13.255 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:13.255 "hdgst": false, 00:30:13.255 "ddgst": false 00:30:13.255 }, 00:30:13.255 "method": "bdev_nvme_attach_controller" 00:30:13.255 },{ 00:30:13.255 "params": { 00:30:13.255 "name": "Nvme4", 00:30:13.255 
"trtype": "tcp", 00:30:13.255 "traddr": "10.0.0.2", 00:30:13.255 "adrfam": "ipv4", 00:30:13.255 "trsvcid": "4420", 00:30:13.255 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:13.255 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:13.255 "hdgst": false, 00:30:13.255 "ddgst": false 00:30:13.255 }, 00:30:13.255 "method": "bdev_nvme_attach_controller" 00:30:13.255 },{ 00:30:13.255 "params": { 00:30:13.255 "name": "Nvme5", 00:30:13.255 "trtype": "tcp", 00:30:13.255 "traddr": "10.0.0.2", 00:30:13.255 "adrfam": "ipv4", 00:30:13.255 "trsvcid": "4420", 00:30:13.255 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:13.255 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:13.255 "hdgst": false, 00:30:13.255 "ddgst": false 00:30:13.255 }, 00:30:13.255 "method": "bdev_nvme_attach_controller" 00:30:13.255 },{ 00:30:13.255 "params": { 00:30:13.255 "name": "Nvme6", 00:30:13.255 "trtype": "tcp", 00:30:13.255 "traddr": "10.0.0.2", 00:30:13.255 "adrfam": "ipv4", 00:30:13.255 "trsvcid": "4420", 00:30:13.255 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:13.255 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:13.255 "hdgst": false, 00:30:13.255 "ddgst": false 00:30:13.255 }, 00:30:13.255 "method": "bdev_nvme_attach_controller" 00:30:13.255 },{ 00:30:13.255 "params": { 00:30:13.255 "name": "Nvme7", 00:30:13.255 "trtype": "tcp", 00:30:13.255 "traddr": "10.0.0.2", 00:30:13.255 "adrfam": "ipv4", 00:30:13.255 "trsvcid": "4420", 00:30:13.255 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:13.255 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:13.255 "hdgst": false, 00:30:13.255 "ddgst": false 00:30:13.255 }, 00:30:13.255 "method": "bdev_nvme_attach_controller" 00:30:13.255 },{ 00:30:13.255 "params": { 00:30:13.255 "name": "Nvme8", 00:30:13.255 "trtype": "tcp", 00:30:13.255 "traddr": "10.0.0.2", 00:30:13.255 "adrfam": "ipv4", 00:30:13.255 "trsvcid": "4420", 00:30:13.255 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:13.255 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:13.255 "hdgst": false, 00:30:13.255 "ddgst": 
false 00:30:13.255 }, 00:30:13.255 "method": "bdev_nvme_attach_controller" 00:30:13.255 },{ 00:30:13.255 "params": { 00:30:13.255 "name": "Nvme9", 00:30:13.255 "trtype": "tcp", 00:30:13.255 "traddr": "10.0.0.2", 00:30:13.255 "adrfam": "ipv4", 00:30:13.255 "trsvcid": "4420", 00:30:13.255 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:13.255 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:13.255 "hdgst": false, 00:30:13.255 "ddgst": false 00:30:13.255 }, 00:30:13.255 "method": "bdev_nvme_attach_controller" 00:30:13.255 },{ 00:30:13.255 "params": { 00:30:13.255 "name": "Nvme10", 00:30:13.255 "trtype": "tcp", 00:30:13.256 "traddr": "10.0.0.2", 00:30:13.256 "adrfam": "ipv4", 00:30:13.256 "trsvcid": "4420", 00:30:13.256 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:13.256 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:13.256 "hdgst": false, 00:30:13.256 "ddgst": false 00:30:13.256 }, 00:30:13.256 "method": "bdev_nvme_attach_controller" 00:30:13.256 }' 00:30:13.515 [2024-10-01 15:47:52.711182] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:13.515 [2024-10-01 15:47:52.742316] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:14.897 Running I/O for 1 seconds... 
00:30:15.839 1864.00 IOPS, 116.50 MiB/s 00:30:15.839 Latency(us) 00:30:15.839 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:15.839 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:15.839 Verification LBA range: start 0x0 length 0x400 00:30:15.839 Nvme1n1 : 1.13 227.12 14.20 0.00 0.00 278935.04 18350.08 251658.24 00:30:15.839 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:15.839 Verification LBA range: start 0x0 length 0x400 00:30:15.839 Nvme2n1 : 1.12 228.42 14.28 0.00 0.00 272593.28 16493.23 248162.99 00:30:15.839 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:15.839 Verification LBA range: start 0x0 length 0x400 00:30:15.839 Nvme3n1 : 1.11 231.07 14.44 0.00 0.00 264124.80 19770.03 242920.11 00:30:15.839 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:15.839 Verification LBA range: start 0x0 length 0x400 00:30:15.839 Nvme4n1 : 1.13 282.79 17.67 0.00 0.00 212186.03 13107.20 260396.37 00:30:15.839 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:15.839 Verification LBA range: start 0x0 length 0x400 00:30:15.839 Nvme5n1 : 1.10 232.06 14.50 0.00 0.00 253665.07 19333.12 237677.23 00:30:15.839 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:15.839 Verification LBA range: start 0x0 length 0x400 00:30:15.839 Nvme6n1 : 1.11 231.30 14.46 0.00 0.00 250231.04 15073.28 267386.88 00:30:15.839 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:15.839 Verification LBA range: start 0x0 length 0x400 00:30:15.839 Nvme7n1 : 1.12 234.28 14.64 0.00 0.00 241581.56 3877.55 228939.09 00:30:15.839 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:15.839 Verification LBA range: start 0x0 length 0x400 00:30:15.839 Nvme8n1 : 1.16 279.50 17.47 0.00 0.00 200525.03 6772.05 263891.63 00:30:15.839 Job: Nvme9n1 (Core Mask 
0x1, workload: verify, depth: 64, IO size: 65536) 00:30:15.839 Verification LBA range: start 0x0 length 0x400 00:30:15.839 Nvme9n1 : 1.16 225.48 14.09 0.00 0.00 243073.37 1829.55 265639.25 00:30:15.839 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:15.839 Verification LBA range: start 0x0 length 0x400 00:30:15.839 Nvme10n1 : 1.18 271.80 16.99 0.00 0.00 199288.06 10321.92 249910.61 00:30:15.839 =================================================================================================================== 00:30:15.839 Total : 2443.83 152.74 0.00 0.00 238955.43 1829.55 267386.88 00:30:15.839 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:30:15.839 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:30:15.839 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:15.839 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:15.839 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:30:15.839 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # nvmfcleanup 00:30:15.839 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:30:15.839 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:15.839 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:30:15.839 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in 
{1..20} 00:30:15.839 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:15.839 rmmod nvme_tcp 00:30:16.099 rmmod nvme_fabrics 00:30:16.099 rmmod nvme_keyring 00:30:16.099 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:16.099 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:30:16.099 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:30:16.099 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@513 -- # '[' -n 3273588 ']' 00:30:16.099 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@514 -- # killprocess 3273588 00:30:16.099 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 3273588 ']' 00:30:16.099 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 3273588 00:30:16.099 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:30:16.099 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:16.099 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3273588 00:30:16.099 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:16.099 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:16.099 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3273588' 
00:30:16.099 killing process with pid 3273588 00:30:16.099 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 3273588 00:30:16.099 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 3273588 00:30:16.360 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:30:16.360 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:30:16.360 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:30:16.360 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:30:16.360 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@787 -- # iptables-save 00:30:16.360 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:30:16.360 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@787 -- # iptables-restore 00:30:16.360 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:16.360 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:16.360 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:16.360 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:16.360 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:18.273 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 
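The teardown above runs SPDK's killprocess helper: check the PID is still alive with `kill -0`, look up its command name (so the harness never signals `sudo` by mistake), then `kill` and `wait` for it, exactly as the `kill -0` / `ps --no-headers -o comm=` / `kill` / `wait` trace shows. A minimal plain-shell reconstruction of that logic (the function body is an assumption modeled on the trace, not SPDK's exact source; `ps -o comm=` is Linux-specific, which is why the trace checks `uname` first):

```shell
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1       # is the process alive at all?
    local name
    name=$(ps --no-headers -o comm= "$pid")      # e.g. "reactor_1" in the log
    [ "$name" = sudo ] && return 1               # refuse to signal sudo itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true              # reap; assumes pid is a child of this shell
}
```

In the harness the target PID (3273588 here) is a child of the test shell, so `wait` both blocks until exit and reaps the zombie.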
-- # ip -4 addr flush cvl_0_1 00:30:18.273 00:30:18.273 real 0m16.994s 00:30:18.273 user 0m33.347s 00:30:18.273 sys 0m7.133s 00:30:18.273 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:18.273 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:18.273 ************************************ 00:30:18.273 END TEST nvmf_shutdown_tc1 00:30:18.273 ************************************ 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:18.536 ************************************ 00:30:18.536 START TEST nvmf_shutdown_tc2 00:30:18.536 ************************************ 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@472 -- # prepare_net_devs 00:30:18.536 15:47:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:18.536 15:47:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:18.536 15:47:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:18.536 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:30:18.536 15:47:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:18.536 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@406 -- # for pci in 
"${pci_devs[@]}" 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:18.536 Found net devices under 0000:31:00.0: cvl_0_0 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # (( 1 == 
0 )) 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:18.536 Found net devices under 0000:31:00.1: cvl_0_1 00:30:18.536 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:18.537 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:30:18.537 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # is_hw=yes 00:30:18.537 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:30:18.537 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:30:18.537 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:30:18.537 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:18.537 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:18.537 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:18.537 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:18.537 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:18.537 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:18.537 15:47:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:18.537 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:18.537 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:18.537 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:18.537 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:18.537 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:18.537 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:18.537 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:18.537 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:18.537 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:18.537 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:18.799 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:18.799 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:18.799 15:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk 
ip link set lo up 00:30:18.799 15:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:18.799 15:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:18.799 15:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:18.799 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:18.799 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:30:18.799 00:30:18.799 --- 10.0.0.2 ping statistics --- 00:30:18.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:18.799 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:30:18.799 15:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:18.799 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:18.799 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:30:18.799 00:30:18.799 --- 10.0.0.1 ping statistics --- 00:30:18.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:18.799 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:30:18.799 15:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:18.799 15:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # return 0 00:30:18.799 15:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:30:18.799 15:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:18.799 15:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:30:18.799 15:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:30:18.799 15:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:18.799 15:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:30:18.799 15:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:30:18.799 15:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:30:18.799 15:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:30:18.799 15:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:18.799 15:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:18.799 
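The nvmf_tcp_init sequence traced above moves the target NIC into a dedicated network namespace so initiator and target traffic genuinely crosses the link, opens TCP port 4420 with an iptables rule tagged by an `SPDK_NVMF` comment (which is what lets the later `iptr` cleanup do `iptables-save | grep -v SPDK_NVMF | iptables-restore`), and verifies reachability with a ping in each direction. A condensed sketch of those commands (requires root and the `cvl_0_0`/`cvl_0_1` devices from this machine; illustrative only, not a runnable fragment on an arbitrary host):

```shell
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                  # target NIC into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Comment-tagged rule: cleanup later strips every rule matching SPDK_NVMF
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                               # root ns -> target ns
ip netns exec "$NS" ping -c 1 10.0.0.1           # target ns -> root ns
```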
15:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@505 -- # nvmfpid=3275609 00:30:18.799 15:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@506 -- # waitforlisten 3275609 00:30:18.799 15:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:18.799 15:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3275609 ']' 00:30:18.799 15:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:18.799 15:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:18.799 15:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:18.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:18.799 15:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:18.799 15:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:18.799 [2024-10-01 15:47:58.239289] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
00:30:18.799 [2024-10-01 15:47:58.239338] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:19.060 [2024-10-01 15:47:58.272841] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:19.060 [2024-10-01 15:47:58.318087] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:19.060 [2024-10-01 15:47:58.348067] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:19.060 [2024-10-01 15:47:58.348098] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:19.060 [2024-10-01 15:47:58.348103] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:19.060 [2024-10-01 15:47:58.348108] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:19.060 [2024-10-01 15:47:58.348112] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:19.060 [2024-10-01 15:47:58.348218] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:30:19.060 [2024-10-01 15:47:58.348452] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:30:19.060 [2024-10-01 15:47:58.348465] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:30:19.060 [2024-10-01 15:47:58.348469] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:30:19.630 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:19.630 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:30:19.630 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:30:19.630 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:19.630 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:19.891 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:19.891 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:19.891 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.891 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:19.891 [2024-10-01 15:47:59.099311] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:19.891 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.891 15:47:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:30:19.891 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:30:19.891 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:19.891 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:19.892 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:19.892 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:19.892 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:19.892 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:19.892 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:19.892 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:19.892 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:19.892 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:19.892 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:19.892 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:19.892 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:30:19.892 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:19.892 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:19.892 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:19.892 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:19.892 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:19.892 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:19.892 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:19.892 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:19.892 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:19.892 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:19.892 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:30:19.892 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.892 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:19.892 Malloc1 00:30:19.892 [2024-10-01 15:47:59.197974] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:19.892 Malloc2 00:30:19.892 Malloc3 00:30:19.892 Malloc4 00:30:19.892 Malloc5 00:30:20.153 Malloc6 00:30:20.153 Malloc7 00:30:20.153 Malloc8 00:30:20.153 Malloc9 
00:30:20.153 Malloc10 00:30:20.153 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.153 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:30:20.153 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:20.153 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:20.153 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3275922 00:30:20.153 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3275922 /var/tmp/bdevperf.sock 00:30:20.153 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3275922 ']' 00:30:20.153 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:20.153 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:20.153 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:20.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
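The `waitforlisten` call above blocks until the freshly started bdevperf process has opened its UNIX-domain RPC socket (`/var/tmp/bdevperf.sock`), giving up after a retry budget. A hedged stand-in for that helper, polling for the socket while confirming the process is still alive (the message and `max_retries=100` mirror the trace; the loop body is an assumption, not SPDK's exact implementation):

```shell
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # process died before listening
        [ -S "$rpc_addr" ] && return 0           # RPC socket is up
        sleep 0.1
    done
    return 1                                     # retry budget exhausted
}
```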
00:30:20.153 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:20.153 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:20.153 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:20.153 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:20.153 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # config=() 00:30:20.153 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # local subsystem config 00:30:20.153 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:20.153 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:20.153 { 00:30:20.153 "params": { 00:30:20.153 "name": "Nvme$subsystem", 00:30:20.153 "trtype": "$TEST_TRANSPORT", 00:30:20.153 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:20.153 "adrfam": "ipv4", 00:30:20.153 "trsvcid": "$NVMF_PORT", 00:30:20.153 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:20.153 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:20.153 "hdgst": ${hdgst:-false}, 00:30:20.153 "ddgst": ${ddgst:-false} 00:30:20.153 }, 00:30:20.153 "method": "bdev_nvme_attach_controller" 00:30:20.153 } 00:30:20.154 EOF 00:30:20.154 )") 00:30:20.154 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:30:20.154 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 
00:30:20.154 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:20.154 { 00:30:20.154 "params": { 00:30:20.154 "name": "Nvme$subsystem", 00:30:20.154 "trtype": "$TEST_TRANSPORT", 00:30:20.154 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:20.154 "adrfam": "ipv4", 00:30:20.154 "trsvcid": "$NVMF_PORT", 00:30:20.154 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:20.154 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:20.154 "hdgst": ${hdgst:-false}, 00:30:20.154 "ddgst": ${ddgst:-false} 00:30:20.154 }, 00:30:20.154 "method": "bdev_nvme_attach_controller" 00:30:20.154 } 00:30:20.154 EOF 00:30:20.154 )") 00:30:20.415 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:30:20.415 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:20.415 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:20.415 { 00:30:20.415 "params": { 00:30:20.415 "name": "Nvme$subsystem", 00:30:20.415 "trtype": "$TEST_TRANSPORT", 00:30:20.415 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:20.415 "adrfam": "ipv4", 00:30:20.415 "trsvcid": "$NVMF_PORT", 00:30:20.415 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:20.415 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:20.415 "hdgst": ${hdgst:-false}, 00:30:20.415 "ddgst": ${ddgst:-false} 00:30:20.415 }, 00:30:20.415 "method": "bdev_nvme_attach_controller" 00:30:20.415 } 00:30:20.415 EOF 00:30:20.415 )") 00:30:20.415 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:30:20.415 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:20.415 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat 
<<-EOF 00:30:20.415 { 00:30:20.415 "params": { 00:30:20.415 "name": "Nvme$subsystem", 00:30:20.415 "trtype": "$TEST_TRANSPORT", 00:30:20.415 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:20.415 "adrfam": "ipv4", 00:30:20.415 "trsvcid": "$NVMF_PORT", 00:30:20.415 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:20.415 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:20.415 "hdgst": ${hdgst:-false}, 00:30:20.415 "ddgst": ${ddgst:-false} 00:30:20.415 }, 00:30:20.415 "method": "bdev_nvme_attach_controller" 00:30:20.415 } 00:30:20.415 EOF 00:30:20.415 )") 00:30:20.415 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:30:20.415 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:20.415 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:20.415 { 00:30:20.415 "params": { 00:30:20.415 "name": "Nvme$subsystem", 00:30:20.415 "trtype": "$TEST_TRANSPORT", 00:30:20.415 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:20.415 "adrfam": "ipv4", 00:30:20.415 "trsvcid": "$NVMF_PORT", 00:30:20.415 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:20.415 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:20.415 "hdgst": ${hdgst:-false}, 00:30:20.415 "ddgst": ${ddgst:-false} 00:30:20.415 }, 00:30:20.415 "method": "bdev_nvme_attach_controller" 00:30:20.415 } 00:30:20.415 EOF 00:30:20.415 )") 00:30:20.415 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:30:20.415 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:20.415 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:20.415 { 00:30:20.415 "params": { 00:30:20.415 "name": "Nvme$subsystem", 00:30:20.415 "trtype": "$TEST_TRANSPORT", 
00:30:20.415 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:20.415 "adrfam": "ipv4", 00:30:20.415 "trsvcid": "$NVMF_PORT", 00:30:20.415 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:20.415 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:20.415 "hdgst": ${hdgst:-false}, 00:30:20.415 "ddgst": ${ddgst:-false} 00:30:20.415 }, 00:30:20.415 "method": "bdev_nvme_attach_controller" 00:30:20.415 } 00:30:20.415 EOF 00:30:20.415 )") 00:30:20.415 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:30:20.415 [2024-10-01 15:47:59.644373] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:30:20.415 [2024-10-01 15:47:59.644427] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3275922 ] 00:30:20.415 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:20.415 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:20.415 { 00:30:20.415 "params": { 00:30:20.415 "name": "Nvme$subsystem", 00:30:20.415 "trtype": "$TEST_TRANSPORT", 00:30:20.415 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:20.415 "adrfam": "ipv4", 00:30:20.415 "trsvcid": "$NVMF_PORT", 00:30:20.415 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:20.415 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:20.415 "hdgst": ${hdgst:-false}, 00:30:20.415 "ddgst": ${ddgst:-false} 00:30:20.415 }, 00:30:20.415 "method": "bdev_nvme_attach_controller" 00:30:20.415 } 00:30:20.415 EOF 00:30:20.415 )") 00:30:20.415 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:30:20.415 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:20.415 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:20.415 { 00:30:20.415 "params": { 00:30:20.415 "name": "Nvme$subsystem", 00:30:20.415 "trtype": "$TEST_TRANSPORT", 00:30:20.415 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:20.415 "adrfam": "ipv4", 00:30:20.415 "trsvcid": "$NVMF_PORT", 00:30:20.415 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:20.415 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:20.415 "hdgst": ${hdgst:-false}, 00:30:20.415 "ddgst": ${ddgst:-false} 00:30:20.415 }, 00:30:20.415 "method": "bdev_nvme_attach_controller" 00:30:20.415 } 00:30:20.415 EOF 00:30:20.415 )") 00:30:20.415 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:30:20.415 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:20.415 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:20.415 { 00:30:20.415 "params": { 00:30:20.415 "name": "Nvme$subsystem", 00:30:20.415 "trtype": "$TEST_TRANSPORT", 00:30:20.415 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:20.415 "adrfam": "ipv4", 00:30:20.415 "trsvcid": "$NVMF_PORT", 00:30:20.415 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:20.415 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:20.415 "hdgst": ${hdgst:-false}, 00:30:20.415 "ddgst": ${ddgst:-false} 00:30:20.415 }, 00:30:20.415 "method": "bdev_nvme_attach_controller" 00:30:20.415 } 00:30:20.415 EOF 00:30:20.415 )") 00:30:20.415 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:30:20.415 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:20.415 15:47:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:20.415 { 00:30:20.415 "params": { 00:30:20.415 "name": "Nvme$subsystem", 00:30:20.415 "trtype": "$TEST_TRANSPORT", 00:30:20.415 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:20.415 "adrfam": "ipv4", 00:30:20.416 "trsvcid": "$NVMF_PORT", 00:30:20.416 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:20.416 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:20.416 "hdgst": ${hdgst:-false}, 00:30:20.416 "ddgst": ${ddgst:-false} 00:30:20.416 }, 00:30:20.416 "method": "bdev_nvme_attach_controller" 00:30:20.416 } 00:30:20.416 EOF 00:30:20.416 )") 00:30:20.416 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:30:20.416 [2024-10-01 15:47:59.675508] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:20.416 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # jq . 
00:30:20.416 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@581 -- # IFS=, 00:30:20.416 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:30:20.416 "params": { 00:30:20.416 "name": "Nvme1", 00:30:20.416 "trtype": "tcp", 00:30:20.416 "traddr": "10.0.0.2", 00:30:20.416 "adrfam": "ipv4", 00:30:20.416 "trsvcid": "4420", 00:30:20.416 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:20.416 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:20.416 "hdgst": false, 00:30:20.416 "ddgst": false 00:30:20.416 }, 00:30:20.416 "method": "bdev_nvme_attach_controller" 00:30:20.416 },{ 00:30:20.416 "params": { 00:30:20.416 "name": "Nvme2", 00:30:20.416 "trtype": "tcp", 00:30:20.416 "traddr": "10.0.0.2", 00:30:20.416 "adrfam": "ipv4", 00:30:20.416 "trsvcid": "4420", 00:30:20.416 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:20.416 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:20.416 "hdgst": false, 00:30:20.416 "ddgst": false 00:30:20.416 }, 00:30:20.416 "method": "bdev_nvme_attach_controller" 00:30:20.416 },{ 00:30:20.416 "params": { 00:30:20.416 "name": "Nvme3", 00:30:20.416 "trtype": "tcp", 00:30:20.416 "traddr": "10.0.0.2", 00:30:20.416 "adrfam": "ipv4", 00:30:20.416 "trsvcid": "4420", 00:30:20.416 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:20.416 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:20.416 "hdgst": false, 00:30:20.416 "ddgst": false 00:30:20.416 }, 00:30:20.416 "method": "bdev_nvme_attach_controller" 00:30:20.416 },{ 00:30:20.416 "params": { 00:30:20.416 "name": "Nvme4", 00:30:20.416 "trtype": "tcp", 00:30:20.416 "traddr": "10.0.0.2", 00:30:20.416 "adrfam": "ipv4", 00:30:20.416 "trsvcid": "4420", 00:30:20.416 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:20.416 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:20.416 "hdgst": false, 00:30:20.416 "ddgst": false 00:30:20.416 }, 00:30:20.416 "method": "bdev_nvme_attach_controller" 00:30:20.416 },{ 00:30:20.416 "params": { 
00:30:20.416 "name": "Nvme5", 00:30:20.416 "trtype": "tcp", 00:30:20.416 "traddr": "10.0.0.2", 00:30:20.416 "adrfam": "ipv4", 00:30:20.416 "trsvcid": "4420", 00:30:20.416 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:20.416 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:20.416 "hdgst": false, 00:30:20.416 "ddgst": false 00:30:20.416 }, 00:30:20.416 "method": "bdev_nvme_attach_controller" 00:30:20.416 },{ 00:30:20.416 "params": { 00:30:20.416 "name": "Nvme6", 00:30:20.416 "trtype": "tcp", 00:30:20.416 "traddr": "10.0.0.2", 00:30:20.416 "adrfam": "ipv4", 00:30:20.416 "trsvcid": "4420", 00:30:20.416 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:20.416 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:20.416 "hdgst": false, 00:30:20.416 "ddgst": false 00:30:20.416 }, 00:30:20.416 "method": "bdev_nvme_attach_controller" 00:30:20.416 },{ 00:30:20.416 "params": { 00:30:20.416 "name": "Nvme7", 00:30:20.416 "trtype": "tcp", 00:30:20.416 "traddr": "10.0.0.2", 00:30:20.416 "adrfam": "ipv4", 00:30:20.416 "trsvcid": "4420", 00:30:20.416 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:20.416 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:20.416 "hdgst": false, 00:30:20.416 "ddgst": false 00:30:20.416 }, 00:30:20.416 "method": "bdev_nvme_attach_controller" 00:30:20.416 },{ 00:30:20.416 "params": { 00:30:20.416 "name": "Nvme8", 00:30:20.416 "trtype": "tcp", 00:30:20.416 "traddr": "10.0.0.2", 00:30:20.416 "adrfam": "ipv4", 00:30:20.416 "trsvcid": "4420", 00:30:20.416 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:20.416 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:20.416 "hdgst": false, 00:30:20.416 "ddgst": false 00:30:20.416 }, 00:30:20.416 "method": "bdev_nvme_attach_controller" 00:30:20.416 },{ 00:30:20.416 "params": { 00:30:20.416 "name": "Nvme9", 00:30:20.416 "trtype": "tcp", 00:30:20.416 "traddr": "10.0.0.2", 00:30:20.416 "adrfam": "ipv4", 00:30:20.416 "trsvcid": "4420", 00:30:20.416 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:20.416 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:30:20.416 "hdgst": false, 00:30:20.416 "ddgst": false 00:30:20.416 }, 00:30:20.416 "method": "bdev_nvme_attach_controller" 00:30:20.416 },{ 00:30:20.416 "params": { 00:30:20.416 "name": "Nvme10", 00:30:20.416 "trtype": "tcp", 00:30:20.416 "traddr": "10.0.0.2", 00:30:20.416 "adrfam": "ipv4", 00:30:20.416 "trsvcid": "4420", 00:30:20.416 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:20.416 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:20.416 "hdgst": false, 00:30:20.416 "ddgst": false 00:30:20.416 }, 00:30:20.416 "method": "bdev_nvme_attach_controller" 00:30:20.416 }' 00:30:20.416 [2024-10-01 15:47:59.723936] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:20.416 [2024-10-01 15:47:59.755309] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:21.802 Running I/O for 10 seconds... 00:30:21.802 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:21.802 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:30:21.802 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:21.802 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.802 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:22.064 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:22.064 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:30:22.064 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:22.064 15:48:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:30:22.064 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:30:22.064 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:30:22.064 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:30:22.064 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:30:22.064 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:22.064 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:30:22.064 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.064 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:22.064 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:22.064 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:30:22.064 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:30:22.064 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:30:22.324 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:30:22.324 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:30:22.324 15:48:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:22.324 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:30:22.324 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.324 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:22.324 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:22.324 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=71 00:30:22.324 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 71 -ge 100 ']' 00:30:22.324 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:30:22.585 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:30:22.585 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:30:22.585 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:22.586 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:30:22.586 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.586 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:22.586 15:48:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:30:22.586 15:48:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=135 00:30:22.586 15:48:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 135 -ge 100 ']' 00:30:22.586 15:48:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:30:22.586 15:48:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:30:22.586 15:48:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:30:22.586 15:48:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3275922 00:30:22.586 15:48:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 3275922 ']' 00:30:22.586 15:48:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 3275922 00:30:22.586 15:48:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:30:22.586 15:48:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:22.586 15:48:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3275922 00:30:22.848 15:48:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:22.848 15:48:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:22.848 15:48:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3275922' 00:30:22.848 killing process with pid 3275922 00:30:22.848 15:48:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 3275922 00:30:22.848 15:48:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 3275922 00:30:22.848 Received shutdown signal, test time was about 0.971459 seconds 00:30:22.848 00:30:22.848 Latency(us) 00:30:22.848 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:22.848 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:22.848 Verification LBA range: start 0x0 length 0x400 00:30:22.848 Nvme1n1 : 0.96 270.21 16.89 0.00 0.00 233527.03 3904.85 248162.99 00:30:22.848 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:22.848 Verification LBA range: start 0x0 length 0x400 00:30:22.848 Nvme2n1 : 0.96 266.38 16.65 0.00 0.00 232694.61 20206.93 225443.84 00:30:22.848 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:22.848 Verification LBA range: start 0x0 length 0x400 00:30:22.848 Nvme3n1 : 0.94 271.50 16.97 0.00 0.00 223215.04 9502.72 232434.35 00:30:22.848 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:22.848 Verification LBA range: start 0x0 length 0x400 00:30:22.848 Nvme4n1 : 0.97 263.76 16.49 0.00 0.00 225611.09 14090.24 242920.11 00:30:22.848 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:22.848 Verification LBA range: start 0x0 length 0x400 00:30:22.848 Nvme5n1 : 0.95 202.64 12.66 0.00 0.00 286759.54 20753.07 276125.01 00:30:22.848 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:22.848 Verification LBA range: start 0x0 length 0x400 00:30:22.848 Nvme6n1 : 0.94 204.28 12.77 0.00 0.00 278074.60 19988.48 251658.24 00:30:22.848 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:22.848 Verification LBA range: start 0x0 length 0x400 00:30:22.848 Nvme7n1 : 0.96 267.68 16.73 0.00 0.00 
207789.65 12397.23 260396.37 00:30:22.848 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:22.848 Verification LBA range: start 0x0 length 0x400 00:30:22.848 Nvme8n1 : 0.97 265.25 16.58 0.00 0.00 205303.47 16711.68 263891.63 00:30:22.848 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:22.848 Verification LBA range: start 0x0 length 0x400 00:30:22.848 Nvme9n1 : 0.93 206.43 12.90 0.00 0.00 255725.23 17367.04 244667.73 00:30:22.848 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:22.848 Verification LBA range: start 0x0 length 0x400 00:30:22.848 Nvme10n1 : 0.95 201.60 12.60 0.00 0.00 256878.08 18786.99 272629.76 00:30:22.848 =================================================================================================================== 00:30:22.849 Total : 2419.73 151.23 0.00 0.00 237351.03 3904.85 276125.01 00:30:22.849 15:48:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:30:24.235 15:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3275609 00:30:24.235 15:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:30:24.235 15:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:30:24.235 15:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:24.235 15:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:24.235 15:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:30:24.235 15:48:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # nvmfcleanup 00:30:24.235 15:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:30:24.235 15:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:24.235 15:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:30:24.235 15:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:24.235 15:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:24.235 rmmod nvme_tcp 00:30:24.235 rmmod nvme_fabrics 00:30:24.235 rmmod nvme_keyring 00:30:24.235 15:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:24.235 15:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:30:24.235 15:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:30:24.235 15:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@513 -- # '[' -n 3275609 ']' 00:30:24.235 15:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@514 -- # killprocess 3275609 00:30:24.235 15:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 3275609 ']' 00:30:24.235 15:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 3275609 00:30:24.235 15:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:30:24.235 15:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:24.235 15:48:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3275609 00:30:24.235 15:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:24.235 15:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:24.235 15:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3275609' 00:30:24.235 killing process with pid 3275609 00:30:24.235 15:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 3275609 00:30:24.235 15:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 3275609 00:30:24.235 15:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:30:24.235 15:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:30:24.235 15:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:30:24.235 15:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:30:24.235 15:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@787 -- # iptables-save 00:30:24.235 15:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:30:24.235 15:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@787 -- # iptables-restore 00:30:24.235 15:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:24.235 15:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:30:24.235 15:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:24.235 15:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:24.235 15:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:26.785 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:26.785 00:30:26.785 real 0m7.960s 00:30:26.785 user 0m24.155s 00:30:26.785 sys 0m1.278s 00:30:26.785 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:26.785 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:26.785 ************************************ 00:30:26.786 END TEST nvmf_shutdown_tc2 00:30:26.786 ************************************ 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:26.786 ************************************ 00:30:26.786 START TEST nvmf_shutdown_tc3 00:30:26.786 ************************************ 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:30:26.786 
15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@472 -- # prepare_net_devs 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:30:26.786 15:48:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@359 -- # (( 2 == 0 )) 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:26.786 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:26.786 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:26.786 Found net devices under 0000:31:00.0: cvl_0_0 00:30:26.786 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:26.787 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:26.787 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@407 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:26.787 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:30:26.787 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:26.787 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:30:26.787 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:26.787 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:26.787 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:26.787 Found net devices under 0000:31:00.1: cvl_0_1 00:30:26.787 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:26.787 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:30:26.787 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # is_hw=yes 00:30:26.787 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:30:26.787 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:30:26.787 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:30:26.787 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:26.787 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:26.787 15:48:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:26.787 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:26.787 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:26.787 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:26.787 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:26.787 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:26.787 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:26.787 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:26.787 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:26.787 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:26.787 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:26.787 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:26.787 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:26.787 15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:26.787 15:48:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:26.787 15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:26.787 15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:26.787 15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:26.787 15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:26.787 15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:26.787 15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:26.787 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:26.787 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.551 ms 00:30:26.787 00:30:26.787 --- 10.0.0.2 ping statistics --- 00:30:26.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:26.787 rtt min/avg/max/mdev = 0.551/0.551/0.551/0.000 ms 00:30:26.787 15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:26.787 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:26.787 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:30:26.787 00:30:26.787 --- 10.0.0.1 ping statistics --- 00:30:26.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:26.787 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:30:26.787 15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:26.787 15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # return 0 00:30:26.787 15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:30:26.787 15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:26.787 15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:30:26.787 15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:30:26.787 15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:26.787 15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:30:26.787 15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:30:26.787 15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:30:26.787 15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:30:26.787 15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:26.787 15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:26.787 
15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@505 -- # nvmfpid=3277272 00:30:26.787 15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@506 -- # waitforlisten 3277272 00:30:26.787 15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:26.787 15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 3277272 ']' 00:30:26.787 15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:26.787 15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:26.787 15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:26.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:26.787 15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:26.787 15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:27.048 [2024-10-01 15:48:06.301531] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
00:30:27.048 [2024-10-01 15:48:06.301595] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:27.048 [2024-10-01 15:48:06.344331] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:27.048 [2024-10-01 15:48:06.391354] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:27.048 [2024-10-01 15:48:06.426133] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:27.048 [2024-10-01 15:48:06.426171] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:27.048 [2024-10-01 15:48:06.426177] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:27.048 [2024-10-01 15:48:06.426182] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:27.048 [2024-10-01 15:48:06.426186] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:27.048 [2024-10-01 15:48:06.426311] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:30:27.048 [2024-10-01 15:48:06.426475] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:30:27.048 [2024-10-01 15:48:06.426591] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:30:27.048 [2024-10-01 15:48:06.426593] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:30:27.993 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:27.993 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:30:27.993 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:30:27.993 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:27.993 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:27.993 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:27.993 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:27.993 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.993 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:27.993 [2024-10-01 15:48:07.148248] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:27.993 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.993 15:48:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:30:27.993 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:30:27.993 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:27.993 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:27.993 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:27.993 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:27.993 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:27.993 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:27.993 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:27.993 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:27.993 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:27.993 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:27.993 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:27.993 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:27.993 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:30:27.993 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:27.993 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:27.993 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:27.993 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:27.993 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:27.993 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:27.993 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:27.993 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:27.993 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:27.993 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:27.993 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:30:27.993 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.993 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:27.993 Malloc1 00:30:27.993 [2024-10-01 15:48:07.246829] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:27.993 Malloc2 00:30:27.993 Malloc3 00:30:27.993 Malloc4 00:30:27.993 Malloc5 00:30:27.993 Malloc6 00:30:28.255 Malloc7 00:30:28.255 Malloc8 00:30:28.255 Malloc9 
00:30:28.255 Malloc10 00:30:28.255 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.255 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:30:28.255 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:28.255 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:28.255 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3277627 00:30:28.255 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3277627 /var/tmp/bdevperf.sock 00:30:28.255 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 3277627 ']' 00:30:28.255 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:28.255 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:28.255 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:28.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:30:28.255 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:28.255 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:28.255 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:28.255 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:28.255 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # config=() 00:30:28.255 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # local subsystem config 00:30:28.255 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:28.255 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:28.255 { 00:30:28.255 "params": { 00:30:28.255 "name": "Nvme$subsystem", 00:30:28.255 "trtype": "$TEST_TRANSPORT", 00:30:28.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:28.255 "adrfam": "ipv4", 00:30:28.255 "trsvcid": "$NVMF_PORT", 00:30:28.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:28.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:28.255 "hdgst": ${hdgst:-false}, 00:30:28.255 "ddgst": ${ddgst:-false} 00:30:28.255 }, 00:30:28.255 "method": "bdev_nvme_attach_controller" 00:30:28.255 } 00:30:28.255 EOF 00:30:28.255 )") 00:30:28.255 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:30:28.255 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 
00:30:28.255 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:28.255 { 00:30:28.255 "params": { 00:30:28.255 "name": "Nvme$subsystem", 00:30:28.255 "trtype": "$TEST_TRANSPORT", 00:30:28.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:28.255 "adrfam": "ipv4", 00:30:28.255 "trsvcid": "$NVMF_PORT", 00:30:28.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:28.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:28.255 "hdgst": ${hdgst:-false}, 00:30:28.255 "ddgst": ${ddgst:-false} 00:30:28.255 }, 00:30:28.255 "method": "bdev_nvme_attach_controller" 00:30:28.255 } 00:30:28.255 EOF 00:30:28.255 )") 00:30:28.255 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:30:28.255 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:28.255 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:28.255 { 00:30:28.255 "params": { 00:30:28.255 "name": "Nvme$subsystem", 00:30:28.255 "trtype": "$TEST_TRANSPORT", 00:30:28.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:28.255 "adrfam": "ipv4", 00:30:28.255 "trsvcid": "$NVMF_PORT", 00:30:28.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:28.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:28.255 "hdgst": ${hdgst:-false}, 00:30:28.255 "ddgst": ${ddgst:-false} 00:30:28.255 }, 00:30:28.255 "method": "bdev_nvme_attach_controller" 00:30:28.255 } 00:30:28.255 EOF 00:30:28.255 )") 00:30:28.255 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:30:28.255 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:28.255 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat 
<<-EOF 00:30:28.255 { 00:30:28.255 "params": { 00:30:28.255 "name": "Nvme$subsystem", 00:30:28.255 "trtype": "$TEST_TRANSPORT", 00:30:28.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:28.255 "adrfam": "ipv4", 00:30:28.255 "trsvcid": "$NVMF_PORT", 00:30:28.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:28.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:28.255 "hdgst": ${hdgst:-false}, 00:30:28.255 "ddgst": ${ddgst:-false} 00:30:28.255 }, 00:30:28.255 "method": "bdev_nvme_attach_controller" 00:30:28.255 } 00:30:28.255 EOF 00:30:28.255 )") 00:30:28.256 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:30:28.256 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:28.256 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:28.256 { 00:30:28.256 "params": { 00:30:28.256 "name": "Nvme$subsystem", 00:30:28.256 "trtype": "$TEST_TRANSPORT", 00:30:28.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:28.256 "adrfam": "ipv4", 00:30:28.256 "trsvcid": "$NVMF_PORT", 00:30:28.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:28.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:28.256 "hdgst": ${hdgst:-false}, 00:30:28.256 "ddgst": ${ddgst:-false} 00:30:28.256 }, 00:30:28.256 "method": "bdev_nvme_attach_controller" 00:30:28.256 } 00:30:28.256 EOF 00:30:28.256 )") 00:30:28.256 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:30:28.256 [2024-10-01 15:48:07.693105] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:30:28.256 [2024-10-01 15:48:07.693157] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3277627 ] 00:30:28.518 [2024-10-01 15:48:07.724176] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:28.518 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # jq . 
00:30:28.518 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@581 -- # IFS=, 00:30:28.518 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:30:28.518 "params": { 00:30:28.518 "name": "Nvme1", 00:30:28.518 "trtype": "tcp", 00:30:28.518 "traddr": "10.0.0.2", 00:30:28.518 "adrfam": "ipv4", 00:30:28.518 "trsvcid": "4420", 00:30:28.518 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:28.518 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:28.518 "hdgst": false, 00:30:28.518 "ddgst": false 00:30:28.518 }, 00:30:28.518 "method": "bdev_nvme_attach_controller" 00:30:28.518 },{ 00:30:28.518 "params": { 00:30:28.518 "name": "Nvme2", 00:30:28.518 "trtype": "tcp", 00:30:28.518 "traddr": "10.0.0.2", 00:30:28.518 "adrfam": "ipv4", 00:30:28.518 "trsvcid": "4420", 00:30:28.518 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:28.518 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:28.518 "hdgst": false, 00:30:28.518 "ddgst": false 00:30:28.518 }, 00:30:28.518 "method": "bdev_nvme_attach_controller" 00:30:28.518 },{ 00:30:28.518 "params": { 00:30:28.518 "name": "Nvme3", 00:30:28.518 "trtype": "tcp", 00:30:28.518 "traddr": "10.0.0.2", 00:30:28.518 "adrfam": "ipv4", 00:30:28.518 "trsvcid": "4420", 00:30:28.518 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:28.518 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:28.518 "hdgst": false, 00:30:28.518 "ddgst": false 00:30:28.518 }, 00:30:28.518 "method": "bdev_nvme_attach_controller" 00:30:28.518 },{ 00:30:28.518 "params": { 00:30:28.518 "name": "Nvme4", 00:30:28.518 "trtype": "tcp", 00:30:28.518 "traddr": "10.0.0.2", 00:30:28.518 "adrfam": "ipv4", 00:30:28.518 "trsvcid": "4420", 00:30:28.518 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:28.518 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:28.518 "hdgst": false, 00:30:28.518 "ddgst": false 00:30:28.518 }, 00:30:28.518 "method": "bdev_nvme_attach_controller" 00:30:28.518 },{ 00:30:28.518 "params": { 
00:30:28.518 "name": "Nvme5", 00:30:28.518 "trtype": "tcp", 00:30:28.518 "traddr": "10.0.0.2", 00:30:28.518 "adrfam": "ipv4", 00:30:28.518 "trsvcid": "4420", 00:30:28.518 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:28.518 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:28.518 "hdgst": false, 00:30:28.518 "ddgst": false 00:30:28.518 }, 00:30:28.518 "method": "bdev_nvme_attach_controller" 00:30:28.518 },{ 00:30:28.518 "params": { 00:30:28.518 "name": "Nvme6", 00:30:28.518 "trtype": "tcp", 00:30:28.518 "traddr": "10.0.0.2", 00:30:28.518 "adrfam": "ipv4", 00:30:28.518 "trsvcid": "4420", 00:30:28.518 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:28.518 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:28.518 "hdgst": false, 00:30:28.518 "ddgst": false 00:30:28.518 }, 00:30:28.518 "method": "bdev_nvme_attach_controller" 00:30:28.518 },{ 00:30:28.518 "params": { 00:30:28.518 "name": "Nvme7", 00:30:28.518 "trtype": "tcp", 00:30:28.518 "traddr": "10.0.0.2", 00:30:28.518 "adrfam": "ipv4", 00:30:28.518 "trsvcid": "4420", 00:30:28.518 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:28.518 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:28.518 "hdgst": false, 00:30:28.518 "ddgst": false 00:30:28.518 }, 00:30:28.518 "method": "bdev_nvme_attach_controller" 00:30:28.518 },{ 00:30:28.518 "params": { 00:30:28.518 "name": "Nvme8", 00:30:28.518 "trtype": "tcp", 00:30:28.518 "traddr": "10.0.0.2", 00:30:28.518 "adrfam": "ipv4", 00:30:28.518 "trsvcid": "4420", 00:30:28.518 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:28.518 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:28.518 "hdgst": false, 00:30:28.518 "ddgst": false 00:30:28.518 }, 00:30:28.518 "method": "bdev_nvme_attach_controller" 00:30:28.518 },{ 00:30:28.518 "params": { 00:30:28.518 "name": "Nvme9", 00:30:28.518 "trtype": "tcp", 00:30:28.518 "traddr": "10.0.0.2", 00:30:28.518 "adrfam": "ipv4", 00:30:28.518 "trsvcid": "4420", 00:30:28.518 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:28.518 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:30:28.518 "hdgst": false, 00:30:28.518 "ddgst": false 00:30:28.518 }, 00:30:28.518 "method": "bdev_nvme_attach_controller" 00:30:28.518 },{ 00:30:28.518 "params": { 00:30:28.518 "name": "Nvme10", 00:30:28.518 "trtype": "tcp", 00:30:28.518 "traddr": "10.0.0.2", 00:30:28.518 "adrfam": "ipv4", 00:30:28.518 "trsvcid": "4420", 00:30:28.518 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:28.518 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:28.518 "hdgst": false, 00:30:28.518 "ddgst": false 00:30:28.518 }, 00:30:28.518 "method": "bdev_nvme_attach_controller" 00:30:28.518 }' 00:30:28.518 [2024-10-01 15:48:07.772491] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:28.518 [2024-10-01 15:48:07.803695] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:30.036 Running I/O for 10 seconds... 00:30:30.036 15:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:30.036 15:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:30:30.036 15:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:30.036 15:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.036 15:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:30.297 15:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.297 15:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:30.297 15:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio 
/var/tmp/bdevperf.sock Nvme1n1 00:30:30.297 15:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:30.297 15:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:30:30.297 15:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:30:30.297 15:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:30:30.297 15:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:30:30.297 15:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:30:30.297 15:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:30.297 15:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:30:30.297 15:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.297 15:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:30.297 15:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.297 15:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:30:30.297 15:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:30:30.297 15:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:30:30.559 15:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 
00:30:30.559 15:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:30:30.559 15:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:30.559 15:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:30:30.559 15:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.559 15:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:30.559 15:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.559 15:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:30:30.559 15:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:30:30.559 15:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:30:30.820 15:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:30:30.820 15:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:30:30.821 15:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:30.821 15:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:30:30.821 15:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.821 15:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@10 -- # set +x 00:30:31.096 15:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.096 15:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=195 00:30:31.096 15:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:30:31.096 15:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:30:31.096 15:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:30:31.096 15:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:30:31.096 15:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3277272 00:30:31.096 15:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 3277272 ']' 00:30:31.096 15:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 3277272 00:30:31.096 15:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:30:31.096 15:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:31.096 15:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3277272 00:30:31.096 15:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:31.096 15:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:31.096 15:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 3277272' 00:30:31.096 killing process with pid 3277272 00:30:31.096 15:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 3277272 00:30:31.096 15:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 3277272 00:30:31.096 [2024-10-01 15:48:10.364195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5170 is same with the state(6) to be set 00:30:31.096 [2024-10-01 15:48:10.365915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f7d40 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367315] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367344] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367349] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367378] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367401] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367406] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367423] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367438] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367461] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367495] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367509] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367523] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367542] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367551] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367556] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367565] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367587] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.367596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5660 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.368999] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.369026] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.369033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.369044] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.369049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.369054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.369060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.369065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.369069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.097 [2024-10-01 15:48:10.369074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.369079] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.369084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.369089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.369093] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.369098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.369103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.369108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.369113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.369118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.369123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.369128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.369133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.369138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.369142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.369147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.369152] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.369157] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.369163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.369167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.369172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.369178] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.369183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.369188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.369193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.369198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.369202] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.369207] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.369212] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.369217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.369221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.369226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.369232] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.369236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.369241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.369246] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.369250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.369255] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.369260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.369264] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.369269] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.369274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.369279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.369283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.369288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.369293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.369298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.369302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.369309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.369314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.369319] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.369325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.369329] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.369335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5b30 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370405] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370450] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370476] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370481] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370509] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370526] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370536] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370542] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370547] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370598] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370662] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370678] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.370683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6020 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.371169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.371189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.371197] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.371205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.371213] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.371221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.371229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.371236] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.371245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.371253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.371263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.371271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.371283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.371291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.371299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.371306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.371315] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.371324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.371333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.371342] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.371350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.371358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.371366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.371373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.371382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.371391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.371400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.371410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.371418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.371426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.371434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.098 [2024-10-01 15:48:10.371441] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.371449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.371458] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.371467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.371476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.371484] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.371492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.371500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.371507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.371517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.371526] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.371535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.371544] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.371552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.371560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.371568] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.371575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.371583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.371592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.371601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.371610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.371618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.371627] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.371635] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.371643] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.371650] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.371659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.371667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.371676] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.371685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.371693] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.371701] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f64f0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372472] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372499] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372509] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372523] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372532] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372538] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372565] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372579] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372594] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372608] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372613] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372627] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372636] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372650] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372693] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372697] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372707] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.372754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f69e0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.373482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.373497] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.373503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.373509] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.373514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.373519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.373525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.373530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.373535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.373539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.373544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.373549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.373554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.373559] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.373564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.373570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.373575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.373581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.373587] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.373592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.373597] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.373601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.373609] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.373614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.373619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.373624] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.373629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.373635] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.373640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.373645] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.373651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.373656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.373661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.373666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.373671] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.373676] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.373680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.373685] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.373690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.373695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.373699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.373704] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.099 [2024-10-01 15:48:10.373709] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.100 [2024-10-01 15:48:10.373714] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.100 [2024-10-01 15:48:10.373718] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.100 [2024-10-01 15:48:10.373723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.100 [2024-10-01 15:48:10.373728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.100 [2024-10-01 15:48:10.373732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.100 [2024-10-01 15:48:10.373737] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.100 [2024-10-01 15:48:10.373743] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.100 [2024-10-01 15:48:10.373749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.100 [2024-10-01 15:48:10.373753] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.100 [2024-10-01 15:48:10.373758] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.100 [2024-10-01 15:48:10.373763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.100 [2024-10-01 15:48:10.373768] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.100 [2024-10-01 15:48:10.373772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.100 [2024-10-01 15:48:10.373777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.100 [2024-10-01 15:48:10.373781] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.100 [2024-10-01 15:48:10.373786] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.100 [2024-10-01 15:48:10.373792] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.100 [2024-10-01 15:48:10.373796] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.100 [2024-10-01 15:48:10.373801] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.100 [2024-10-01 15:48:10.373806] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6eb0 is same with the state(6) to be set 00:30:31.100 [2024-10-01 15:48:10.374770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f7850 is same with the state(6) to be set 00:30:31.100 [2024-10-01 15:48:10.374790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f7850 is same with the state(6) to be set 00:30:31.100 [2024-10-01 15:48:10.380251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.100 [2024-10-01 15:48:10.380291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.100 [2024-10-01 15:48:10.380303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.100 [2024-10-01 15:48:10.380311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.100 [2024-10-01 15:48:10.380320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.100 [2024-10-01 15:48:10.380328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.100 [2024-10-01 15:48:10.380341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.100 [2024-10-01 15:48:10.380354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:30:31.100 [2024-10-01 15:48:10.380367] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a8680 is same with the state(6) to be set 00:30:31.100 [2024-10-01 15:48:10.380418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.100 [2024-10-01 15:48:10.380431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.100 [2024-10-01 15:48:10.380445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.100 [2024-10-01 15:48:10.380452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.100 [2024-10-01 15:48:10.380462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.100 [2024-10-01 15:48:10.380474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.100 [2024-10-01 15:48:10.380488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.100 [2024-10-01 15:48:10.380495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.100 [2024-10-01 15:48:10.380503] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b0700 is same with the state(6) to be set 00:30:31.100 [2024-10-01 15:48:10.380530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.100 [2024-10-01 15:48:10.380539] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.100 [2024-10-01 15:48:10.380549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.100 [2024-10-01 15:48:10.380557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.100 [2024-10-01 15:48:10.380566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.100 [2024-10-01 15:48:10.380574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.100 [2024-10-01 15:48:10.380583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.100 [2024-10-01 15:48:10.380590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.100 [2024-10-01 15:48:10.380598] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82e10 is same with the state(6) to be set 00:30:31.100 [2024-10-01 15:48:10.380622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.100 [2024-10-01 15:48:10.380631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.100 [2024-10-01 15:48:10.380640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.100 [2024-10-01 15:48:10.380647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.100 
[2024-10-01 15:48:10.380656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.100 [2024-10-01 15:48:10.380663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.100 [2024-10-01 15:48:10.380672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.100 [2024-10-01 15:48:10.380679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.100 [2024-10-01 15:48:10.380687] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8e610 is same with the state(6) to be set 00:30:31.100 [2024-10-01 15:48:10.380716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.100 [2024-10-01 15:48:10.380725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.100 [2024-10-01 15:48:10.380733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.100 [2024-10-01 15:48:10.380741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.100 [2024-10-01 15:48:10.380749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.100 [2024-10-01 15:48:10.380757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.100 [2024-10-01 15:48:10.380767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.100 [2024-10-01 15:48:10.380775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.100 [2024-10-01 15:48:10.380784] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23f2230 is same with the state(6) to be set 00:30:31.100 [2024-10-01 15:48:10.380812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.100 [2024-10-01 15:48:10.380821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.100 [2024-10-01 15:48:10.380830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.100 [2024-10-01 15:48:10.380838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.100 [2024-10-01 15:48:10.380848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.100 [2024-10-01 15:48:10.380856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.100 [2024-10-01 15:48:10.380864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.100 [2024-10-01 15:48:10.380871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.100 [2024-10-01 15:48:10.380878] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dc3d0 is same with the state(6) to be set 00:30:31.100 [2024-10-01 
15:48:10.380911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.100 [2024-10-01 15:48:10.380921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.100 [2024-10-01 15:48:10.380930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.100 [2024-10-01 15:48:10.380938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.100 [2024-10-01 15:48:10.380946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.100 [2024-10-01 15:48:10.380953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.100 [2024-10-01 15:48:10.380962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.100 [2024-10-01 15:48:10.380969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.100 [2024-10-01 15:48:10.380983] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23f26a0 is same with the state(6) to be set 00:30:31.100 [2024-10-01 15:48:10.381011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.100 [2024-10-01 15:48:10.381021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.100 [2024-10-01 15:48:10.381029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.100 [2024-10-01 15:48:10.381037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.100 [2024-10-01 15:48:10.381045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.100 [2024-10-01 15:48:10.381052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.100 [2024-10-01 15:48:10.381060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.100 [2024-10-01 15:48:10.381068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.100 [2024-10-01 15:48:10.381075] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f846a0 is same with the state(6) to be set 00:30:31.100 [2024-10-01 15:48:10.381096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.100 [2024-10-01 15:48:10.381105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.100 [2024-10-01 15:48:10.381114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.100 [2024-10-01 15:48:10.381121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.100 [2024-10-01 15:48:10.381129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.100 [2024-10-01 
15:48:10.381137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.100 [2024-10-01 15:48:10.381145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.100 [2024-10-01 15:48:10.381154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.100 [2024-10-01 15:48:10.381161] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a89a0 is same with the state(6) to be set 00:30:31.100 [2024-10-01 15:48:10.381188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.100 [2024-10-01 15:48:10.381197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.100 [2024-10-01 15:48:10.381205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.100 [2024-10-01 15:48:10.381213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.100 [2024-10-01 15:48:10.381222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.100 [2024-10-01 15:48:10.381230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.100 [2024-10-01 15:48:10.381242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.100 [2024-10-01 15:48:10.381251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.100 [2024-10-01 15:48:10.381258] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f84b00 is same with the state(6) to be set 00:30:31.100 [2024-10-01 15:48:10.382553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.100 [2024-10-01 15:48:10.382578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.100 [2024-10-01 15:48:10.382595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.100 [2024-10-01 15:48:10.382604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.100 [2024-10-01 15:48:10.382614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.100 [2024-10-01 15:48:10.382622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.100 [2024-10-01 15:48:10.382631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.100 [2024-10-01 15:48:10.382639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.100 [2024-10-01 15:48:10.382650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.100 [2024-10-01 15:48:10.382658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.100 
[2024-10-01 15:48:10.382668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.100 [2024-10-01 15:48:10.382676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.100 [2024-10-01 15:48:10.382685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.100 [2024-10-01 15:48:10.382693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.100 [2024-10-01 15:48:10.382702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.100 [2024-10-01 15:48:10.382709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.100 [2024-10-01 15:48:10.382719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.100 [2024-10-01 15:48:10.382726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.100 [2024-10-01 15:48:10.382736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.100 [2024-10-01 15:48:10.382744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.100 [2024-10-01 15:48:10.382755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.100 [2024-10-01 15:48:10.382762] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.100 [2024-10-01 15:48:10.382776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.100 [2024-10-01 15:48:10.382783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.100 [2024-10-01 15:48:10.382793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.100 [2024-10-01 15:48:10.382800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.100 [2024-10-01 15:48:10.382810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.100 [2024-10-01 15:48:10.382818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.100 [2024-10-01 15:48:10.382829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.100 [2024-10-01 15:48:10.382836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.100 [2024-10-01 15:48:10.382846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.100 [2024-10-01 15:48:10.382853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.100 [2024-10-01 15:48:10.382862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.100 [2024-10-01 15:48:10.382870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.382880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.382888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.382903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.382911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.382921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.382928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.382938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.382945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.382955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.382963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.382973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.382981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.382991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.383001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.383011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.383019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.383028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.383036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.383047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.383055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.383065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 
15:48:10.383072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.383082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.383089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.383098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.383106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.383117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.383125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.383135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.383142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.383151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.383159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.383169] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.383177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.383187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.383196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.383205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.383213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.383225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.383232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.383243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.383251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.383261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.383269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.383278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.383285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.383294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.383302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.383312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.383320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.383330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.383338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.383348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.383355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.383364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.383371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.383381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.383389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.383399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.383407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.383417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.383424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.383434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.383443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.383453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.383460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 
[2024-10-01 15:48:10.383469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.383477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.383486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.383493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.383503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.383510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.383519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.383527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.383536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.383544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.383553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.383561] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.383571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.383578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.383587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.383595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.383604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.383612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.383621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.383628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.383637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.383645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.383656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.383664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.383674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.383682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.383692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.383700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.383724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.101 [2024-10-01 15:48:10.383768] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2387260 was disconnected and freed. reset controller. 
00:30:31.101 [2024-10-01 15:48:10.383798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.383807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.383820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.383828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.383839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.383847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.383857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.383864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.383873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.383881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.383891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.383906] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.383915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.383923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.383934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.383942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.383951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.383961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.383971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.383979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.383989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.383998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.384008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.384016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.384026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.384033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.384043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.384051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.384061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.384069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.384080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.384087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.384097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.384104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.384114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.101 [2024-10-01 15:48:10.384122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.101 [2024-10-01 15:48:10.384131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.384139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.384150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.384158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.384168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.384176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.384187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.384195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.384205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 
[2024-10-01 15:48:10.384214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.384225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.384233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.384244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.384251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.384261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.384269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.384279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.384287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.384298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.384306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.384316] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.384324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.384333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.384341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.384351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.384359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.384370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.384378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.384387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.384395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.384404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.384413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.384423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.384432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.384442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.384450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.384459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.384467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.384476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.384484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.384493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.384501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.384510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.384517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.384527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.384534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.384543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.384551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.384560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.384567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.384576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.384584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.384593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.384601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 
[2024-10-01 15:48:10.384610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.384617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.384632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.384639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.384649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.384656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.384666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.384673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.384682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.384690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.384699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.384707] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.384716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.384724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.384733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.384741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.384751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.384759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.384769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.384777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.384787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.384795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.384804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.384812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.384821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.384829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.384838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.384847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.384857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.384865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.384875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.384883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.384898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.384906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.384915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.384923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.384932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.384940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.384992] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x23886a0 was disconnected and freed. reset controller. 00:30:31.102 [2024-10-01 15:48:10.388197] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:30:31.102 [2024-10-01 15:48:10.388227] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:30:31.102 [2024-10-01 15:48:10.388245] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23f2230 (9): Bad file descriptor 00:30:31.102 [2024-10-01 15:48:10.388267] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23f26a0 (9): Bad file descriptor 00:30:31.102 [2024-10-01 15:48:10.389208] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:31.102 [2024-10-01 15:48:10.389288] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:31.102 [2024-10-01 15:48:10.389328] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:31.102 [2024-10-01 15:48:10.389367] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:31.102 [2024-10-01 15:48:10.389414] 
nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:31.102 [2024-10-01 15:48:10.389474] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:31.102 [2024-10-01 15:48:10.389507] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:31.102 [2024-10-01 15:48:10.389547] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:31.102 [2024-10-01 15:48:10.389917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.102 [2024-10-01 15:48:10.389937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f26a0 with addr=10.0.0.2, port=4420 00:30:31.102 [2024-10-01 15:48:10.389946] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23f26a0 is same with the state(6) to be set 00:30:31.102 [2024-10-01 15:48:10.390291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.102 [2024-10-01 15:48:10.390302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f2230 with addr=10.0.0.2, port=4420 00:30:31.102 [2024-10-01 15:48:10.390310] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23f2230 is same with the state(6) to be set 00:30:31.102 [2024-10-01 15:48:10.390358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.390369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.390382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.390391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.390401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.390409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.390419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.390426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.390436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.390444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.390453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.390461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.390472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.390481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.390491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 
15:48:10.390498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.390508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.390515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.390526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.390534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.390545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.390552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.390562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.390569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.390580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.390591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.390601] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.390609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.390618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.390626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.390635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.390643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.390654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.390662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.390672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.102 [2024-10-01 15:48:10.390679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.102 [2024-10-01 15:48:10.390689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.390696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.390707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.390715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.390725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.390733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.390742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.390750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.390761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.390769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.390779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.390787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.390796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 
[2024-10-01 15:48:10.390804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.390815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.390823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.390833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.390841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.390851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.390858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.390868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.390875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.390886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.390898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.390908] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.390915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.390925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.390932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.390943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.390951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.390961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.390969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.390978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.390986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.390996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.391004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.391015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.391022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.391032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.391041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.391050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.391058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.391068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.391075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.391086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.391094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.391104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.391112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.391121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.391129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.391139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.391147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.391158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.391165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.391175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.391183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.391192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.391201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.391211] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.391220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.391229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.391237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.391246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.391254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.391266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.391274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.391284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.391291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.391301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.391309] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.391320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.391328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.391337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.391345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.391354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.391363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.391373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.391381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.391392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.391400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.391409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.391417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.391428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.391436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.391446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.391453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.391463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.391471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.391481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.391492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.391502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.391509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 
15:48:10.391518] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2428e00 is same with the state(6) to be set 00:30:31.103 [2024-10-01 15:48:10.391573] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2428e00 was disconnected and freed. reset controller. 00:30:31.103 [2024-10-01 15:48:10.391593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.391602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.391614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.391621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.391631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.391639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.391649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.391657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.391667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.391675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.391686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.391693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.391703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.391710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.391721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.391730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.391740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.391748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.391758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.391765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.391775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.391786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.391796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.391804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.391813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.391821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.391831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.391839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.391850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.391857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.391866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.391874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.391884] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.391892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.391906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.391915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.391924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.391932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.391942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.391950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.391960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.391968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.391978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.391986] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.391995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.392003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.392015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.392023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.392034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.103 [2024-10-01 15:48:10.392041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.103 [2024-10-01 15:48:10.392051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.104 [2024-10-01 15:48:10.392058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.104 [2024-10-01 15:48:10.392069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.104 [2024-10-01 15:48:10.392077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.104 [2024-10-01 15:48:10.392087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.104 [2024-10-01 15:48:10.392095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.104 [2024-10-01 15:48:10.392104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.104 [2024-10-01 15:48:10.392112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.104 [2024-10-01 15:48:10.392121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.104 [2024-10-01 15:48:10.392130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.104 [2024-10-01 15:48:10.392140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.104 [2024-10-01 15:48:10.392147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.104 [2024-10-01 15:48:10.392157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.104 [2024-10-01 15:48:10.392165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.104 [2024-10-01 15:48:10.392174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.104 [2024-10-01 15:48:10.392182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.104 [2024-10-01 
15:48:10.392192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.104 [2024-10-01 15:48:10.392201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.104 [2024-10-01 15:48:10.392211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.104 [2024-10-01 15:48:10.392219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.104 [2024-10-01 15:48:10.392228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.104 [2024-10-01 15:48:10.392237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.104 [2024-10-01 15:48:10.392247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.104 [2024-10-01 15:48:10.392254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.104 [2024-10-01 15:48:10.392265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.104 [2024-10-01 15:48:10.392273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.104 [2024-10-01 15:48:10.392283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.104 [2024-10-01 15:48:10.392290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.104 [2024-10-01 15:48:10.392300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.104 [2024-10-01 15:48:10.392307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.104 [2024-10-01 15:48:10.392318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.104 [2024-10-01 15:48:10.392326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.104 [2024-10-01 15:48:10.392336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.104 [2024-10-01 15:48:10.392344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.104 [2024-10-01 15:48:10.392354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.104 [2024-10-01 15:48:10.392361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.104 [2024-10-01 15:48:10.392371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.104 [2024-10-01 15:48:10.392379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.104 [2024-10-01 15:48:10.392389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.104 [2024-10-01 15:48:10.392397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.104 [2024-10-01 15:48:10.392407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.104 [2024-10-01 15:48:10.392414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.104 [2024-10-01 15:48:10.392423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.104 [2024-10-01 15:48:10.392432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.104 [2024-10-01 15:48:10.392442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.104 [2024-10-01 15:48:10.392450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.104 [2024-10-01 15:48:10.392461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.104 [2024-10-01 15:48:10.392469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.104 [2024-10-01 15:48:10.392479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.104 [2024-10-01 15:48:10.392487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.104 [2024-10-01 15:48:10.392498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.104 [2024-10-01 15:48:10.392505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.104 [2024-10-01 15:48:10.392515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.104 [2024-10-01 15:48:10.392523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.104 [2024-10-01 15:48:10.392532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.104 [2024-10-01 15:48:10.392540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.104 [2024-10-01 15:48:10.392551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.104 [2024-10-01 15:48:10.392559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.104 [2024-10-01 15:48:10.392569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.104 [2024-10-01 15:48:10.392576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.104 [2024-10-01 15:48:10.392586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.104 [2024-10-01 15:48:10.392594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.104 [2024-10-01 15:48:10.392605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.104 [2024-10-01 15:48:10.392614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.104 [2024-10-01 15:48:10.392623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.104 [2024-10-01 15:48:10.392631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.104 [2024-10-01 15:48:10.392641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.104 [2024-10-01 15:48:10.392649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.104 [2024-10-01 15:48:10.392659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.104 [2024-10-01 15:48:10.392667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.104 [2024-10-01 15:48:10.392677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.104 [2024-10-01 15:48:10.392687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.104 [2024-10-01 15:48:10.392696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.104 [2024-10-01 15:48:10.392703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.104 [2024-10-01 15:48:10.392714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.104 [2024-10-01 15:48:10.392722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.104 [2024-10-01 15:48:10.392733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.104 [2024-10-01 15:48:10.392741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.104 [2024-10-01 15:48:10.392749] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242a2f0 is same with the state(6) to be set
00:30:31.104 [2024-10-01 15:48:10.392808] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x242a2f0 was disconnected and freed. reset controller.
00:30:31.104 [2024-10-01 15:48:10.392871] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23f26a0 (9): Bad file descriptor
00:30:31.104 [2024-10-01 15:48:10.392884] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23f2230 (9): Bad file descriptor
00:30:31.104 [2024-10-01 15:48:10.392903] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23a8680 (9): Bad file descriptor
00:30:31.104 [2024-10-01 15:48:10.392919] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b0700 (9): Bad file descriptor
00:30:31.104 [2024-10-01 15:48:10.392933] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f82e10 (9): Bad file descriptor
00:30:31.104 [2024-10-01 15:48:10.392951] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e8e610 (9): Bad file descriptor
00:30:31.104 [2024-10-01 15:48:10.392970] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23dc3d0 (9): Bad file descriptor
00:30:31.104 [2024-10-01 15:48:10.392989] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f846a0 (9): Bad file descriptor
00:30:31.104 [2024-10-01 15:48:10.393007] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23a89a0 (9): Bad file descriptor
00:30:31.104 [2024-10-01 15:48:10.393023] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f84b00 (9): Bad file descriptor
00:30:31.104 [2024-10-01 15:48:10.395477] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:30:31.104 [2024-10-01 15:48:10.395495] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:30:31.104 [2024-10-01 15:48:10.395521] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:30:31.104 [2024-10-01 15:48:10.395531] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:30:31.104 [2024-10-01 15:48:10.395543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:30:31.104 [2024-10-01 15:48:10.395557] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:30:31.104 [2024-10-01 15:48:10.395566] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:30:31.104 [2024-10-01 15:48:10.395575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:30:31.104 [2024-10-01 15:48:10.395651] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:31.104 [2024-10-01 15:48:10.395661] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:31.104 [2024-10-01 15:48:10.396029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.104 [2024-10-01 15:48:10.396043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b0700 with addr=10.0.0.2, port=4420
00:30:31.104 [2024-10-01 15:48:10.396052] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b0700 is same with the state(6) to be set
00:30:31.104 [2024-10-01 15:48:10.396393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.104 [2024-10-01 15:48:10.396405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a8680 with addr=10.0.0.2, port=4420
00:30:31.104 [2024-10-01 15:48:10.396413] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a8680 is same with the state(6) to be set
00:30:31.104 [2024-10-01 15:48:10.396962] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b0700 (9): Bad file descriptor
00:30:31.104 [2024-10-01 15:48:10.396976] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23a8680 (9): Bad file descriptor
00:30:31.104 [2024-10-01 15:48:10.397027] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:30:31.104 [2024-10-01 15:48:10.397035] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:30:31.104 [2024-10-01 15:48:10.397043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:30:31.104 [2024-10-01 15:48:10.397056] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:30:31.104 [2024-10-01 15:48:10.397062] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:30:31.104 [2024-10-01 15:48:10.397069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:30:31.104 [2024-10-01 15:48:10.397118] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:31.104 [2024-10-01 15:48:10.397126] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:31.104 [2024-10-01 15:48:10.399338] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:30:31.104 [2024-10-01 15:48:10.399393] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:30:31.104 [2024-10-01 15:48:10.399744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.104 [2024-10-01 15:48:10.399757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f2230 with addr=10.0.0.2, port=4420
00:30:31.104 [2024-10-01 15:48:10.399766] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23f2230 is same with the state(6) to be set
00:30:31.104 [2024-10-01 15:48:10.400125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.104 [2024-10-01 15:48:10.400138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f26a0 with addr=10.0.0.2, port=4420
00:30:31.104 [2024-10-01 15:48:10.400146] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23f26a0 is same with the state(6) to be set
00:30:31.104 [2024-10-01 15:48:10.400156] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23f2230 (9): Bad file descriptor
00:30:31.104 [2024-10-01 15:48:10.400201] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23f26a0 (9): Bad file descriptor
00:30:31.104 [2024-10-01 15:48:10.400212] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:30:31.104 [2024-10-01 15:48:10.400219] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:30:31.104 [2024-10-01 15:48:10.400234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:30:31.104 [2024-10-01 15:48:10.400272] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:31.104 [2024-10-01 15:48:10.400280] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:30:31.104 [2024-10-01 15:48:10.400287] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:30:31.104 [2024-10-01 15:48:10.400295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:30:31.104 [2024-10-01 15:48:10.400334] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:31.104 [2024-10-01 15:48:10.403014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.104 [2024-10-01 15:48:10.403033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.104 [2024-10-01 15:48:10.403045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.104 [2024-10-01 15:48:10.403054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.104 [2024-10-01 15:48:10.403065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.104 [2024-10-01 15:48:10.403072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.104 [2024-10-01 15:48:10.403082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.105 [2024-10-01 15:48:10.403090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.105 [2024-10-01 15:48:10.403101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.105 [2024-10-01 15:48:10.403109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.105 [2024-10-01 15:48:10.403119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.105 [2024-10-01 15:48:10.403126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.105 [2024-10-01 15:48:10.403137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.105 [2024-10-01 15:48:10.403145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.105 [2024-10-01 15:48:10.403156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.105 [2024-10-01 15:48:10.403165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.105 [2024-10-01 15:48:10.403174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.105 [2024-10-01 15:48:10.403182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.105 [2024-10-01 15:48:10.403191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.105 [2024-10-01 15:48:10.403200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.105 [2024-10-01 15:48:10.403211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.105 [2024-10-01 15:48:10.403222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.105 [2024-10-01 15:48:10.403231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.105 [2024-10-01 15:48:10.403239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.105 [2024-10-01 15:48:10.403248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.105 [2024-10-01 15:48:10.403256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.105 [2024-10-01 15:48:10.403266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.105 [2024-10-01 15:48:10.403274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.105 [2024-10-01 15:48:10.403284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.105 [2024-10-01 15:48:10.403292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.105 [2024-10-01 15:48:10.403301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.105 [2024-10-01 15:48:10.403309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.105 [2024-10-01 15:48:10.403319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.105 [2024-10-01 15:48:10.403327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.105 [2024-10-01 15:48:10.403337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.105 [2024-10-01 15:48:10.403344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.105 [2024-10-01 15:48:10.403354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.105 [2024-10-01 15:48:10.403362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.105 [2024-10-01 15:48:10.403373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.105 [2024-10-01 15:48:10.403380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.105 [2024-10-01 15:48:10.403389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.105 [2024-10-01 15:48:10.403397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.105 [2024-10-01 15:48:10.403408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.105 [2024-10-01 15:48:10.403416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.105 [2024-10-01 15:48:10.403426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.105 [2024-10-01 15:48:10.403433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.105 [2024-10-01 15:48:10.403444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.105 [2024-10-01 15:48:10.403453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.105 [2024-10-01 15:48:10.403463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.105 [2024-10-01 15:48:10.403471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.105 [2024-10-01 15:48:10.403480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.105 [2024-10-01 15:48:10.403487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.105 [2024-10-01 15:48:10.403498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.105 [2024-10-01 15:48:10.403506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.105 [2024-10-01 15:48:10.403516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.105 [2024-10-01 15:48:10.403523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.105 [2024-10-01 15:48:10.403533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.105 [2024-10-01 15:48:10.403541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.105 [2024-10-01 15:48:10.403551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.105 [2024-10-01 15:48:10.403559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.105 [2024-10-01 15:48:10.403568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.105 [2024-10-01 15:48:10.403576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.105 [2024-10-01 15:48:10.403586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.105 [2024-10-01 15:48:10.403595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.105 [2024-10-01 15:48:10.403605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.105 [2024-10-01 15:48:10.403613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.105 [2024-10-01 15:48:10.403623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.105 [2024-10-01 15:48:10.403631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.105 [2024-10-01 15:48:10.403641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.105 [2024-10-01 15:48:10.403650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.105 [2024-10-01 15:48:10.403659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.105 [2024-10-01 15:48:10.403668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.105 [2024-10-01 15:48:10.403678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.105 [2024-10-01 15:48:10.403686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.105 [2024-10-01 15:48:10.403697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.105 [2024-10-01 15:48:10.403704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.105 [2024-10-01 15:48:10.403714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.105 [2024-10-01 15:48:10.403721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.105 [2024-10-01 15:48:10.403731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.105 [2024-10-01 15:48:10.403738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.105 [2024-10-01 15:48:10.403747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.105 [2024-10-01 15:48:10.403755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.105 [2024-10-01 15:48:10.403764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.105 [2024-10-01 15:48:10.403772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.105 [2024-10-01 15:48:10.403783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.105 [2024-10-01 15:48:10.403790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.105 [2024-10-01 15:48:10.403800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.105 [2024-10-01 15:48:10.403807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.105 [2024-10-01 15:48:10.403817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.105 [2024-10-01 15:48:10.403825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.105 [2024-10-01 15:48:10.403834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.105 [2024-10-01 15:48:10.403842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.105 [2024-10-01 15:48:10.403851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.105 [2024-10-01 15:48:10.403859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.105 [2024-10-01 15:48:10.403869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.105 [2024-10-01 15:48:10.403877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.105 [2024-10-01 15:48:10.403889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.105 [2024-10-01 15:48:10.403902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.105 [2024-10-01 15:48:10.403912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.105 [2024-10-01 15:48:10.403920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.105 [2024-10-01 15:48:10.403930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.105 [2024-10-01 15:48:10.403937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.105 [2024-10-01 15:48:10.403947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.105 [2024-10-01 15:48:10.403954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.105 [2024-10-01 15:48:10.403964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.105 [2024-10-01 15:48:10.403972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.105 [2024-10-01 15:48:10.403982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.105 [2024-10-01 15:48:10.403989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.105 [2024-10-01 15:48:10.403999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.105 [2024-10-01 15:48:10.404007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.105 [2024-10-01 15:48:10.404017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.105 [2024-10-01 15:48:10.404026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.105 [2024-10-01 15:48:10.404036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.105 [2024-10-01 15:48:10.404043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.105 [2024-10-01 15:48:10.404053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.105 [2024-10-01 15:48:10.404061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.105 [2024-10-01 15:48:10.404072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.105 [2024-10-01 15:48:10.404080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.105 [2024-10-01 15:48:10.404090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.105 [2024-10-01 15:48:10.404098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.105 [2024-10-01 15:48:10.404109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.105 [2024-10-01 15:48:10.404118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.105 [2024-10-01 15:48:10.404128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.105 [2024-10-01 15:48:10.404135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.105 [2024-10-01 15:48:10.404145] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.105 [2024-10-01 15:48:10.404154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.105 [2024-10-01 15:48:10.404164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.105 [2024-10-01 15:48:10.404171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.105 [2024-10-01 15:48:10.404180] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188f40 is same with the state(6) to be set 00:30:31.105 [2024-10-01 15:48:10.405460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.105 [2024-10-01 15:48:10.405476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.105 [2024-10-01 15:48:10.405489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.105 [2024-10-01 15:48:10.405498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.105 [2024-10-01 15:48:10.405511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.105 [2024-10-01 15:48:10.405521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.105 [2024-10-01 15:48:10.405532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.105 [2024-10-01 15:48:10.405541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.105 [2024-10-01 15:48:10.405553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.105 [2024-10-01 15:48:10.405564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.105 [2024-10-01 15:48:10.405574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.105 [2024-10-01 15:48:10.405584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.105 [2024-10-01 15:48:10.405596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.105 [2024-10-01 15:48:10.405606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.105 [2024-10-01 15:48:10.405618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.105 [2024-10-01 15:48:10.405626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.105 [2024-10-01 15:48:10.405636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.105 [2024-10-01 15:48:10.405646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:31.105 [2024-10-01 15:48:10.405655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.105 [2024-10-01 15:48:10.405663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.105 [2024-10-01 15:48:10.405673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.105 [2024-10-01 15:48:10.405680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.105 [2024-10-01 15:48:10.405691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.405699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.405709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.405716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.405726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.405735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.405746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.405753] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.405763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.405771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.405780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.405790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.405800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.405808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.405818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.405825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.405836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.405845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.405855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.405863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.405874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.405883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.405910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.405918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.405929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.405937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.405948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.405955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.405965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.405974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.405984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.405992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.406002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.406009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.406019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.406028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.406038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.406045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.406055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.406063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.406074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 
15:48:10.406081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.406090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.406098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.406107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.406117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.406128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.406135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.406145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.406152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.406162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.406169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.406179] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.406188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.406198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.406205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.406214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.406222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.406232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.406240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.406250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.406257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.406267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.406275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.406285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.406293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.406302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.406310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.406320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.406328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.406339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.406347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.406358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.406366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.406377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 
[2024-10-01 15:48:10.406384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.406394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.406403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.406414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.406422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.406432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.406440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.406449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.406458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.406469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.406476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.406486] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.406493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.406504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.406513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.406522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.406530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.406539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.406548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.406558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.406569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.406580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.406588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.406599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.406607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.406618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.406625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.406635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.406643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.406653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.406661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.406669] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218a280 is same with the state(6) to be set 00:30:31.106 [2024-10-01 15:48:10.408001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.408015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:31.106 [2024-10-01 15:48:10.408028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.408038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.408049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.408059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.408070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.408079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.408089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.408096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.408107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.408115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.408125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.408132] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.408145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.408153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.408162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.408170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.408181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.408188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.408198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.408206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.408217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.106 [2024-10-01 15:48:10.408224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.106 [2024-10-01 15:48:10.408233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 
nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.408242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.408252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.408260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.408270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.408278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.408288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.408295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.408306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.408313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.408323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.408331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:31.107 [2024-10-01 15:48:10.408342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.408349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.408358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.408368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.408378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.408386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.408395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.408403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.408412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.408420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.408430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.408437] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.408447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.408455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.408465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.408472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.408483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.408491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.408501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.408509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.408519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.408526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.408536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.408544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.408554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.408562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.408571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.408580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.408592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.408599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.408609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.408618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.408628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.408635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.408645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.408653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.408663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.408670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.408681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.408689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.408699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.408707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.408718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.408725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.408735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 
15:48:10.408742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.408752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.408760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.408771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.408778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.408787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.408795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.408805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.408814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.408824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.408832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.408842] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.408849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.408860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.408868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.408877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.408885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.408899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.408907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.408917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.408925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.408935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.408943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.408952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.408960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.408971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.408978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.408987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.408996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.409005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.409013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.409022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.409030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.409045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 
[2024-10-01 15:48:10.409052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.409062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.409070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.409080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.409087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.409097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.409104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.409114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.409123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.409132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.409140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.409150] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.409158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.409167] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24278a0 is same with the state(6) to be set 00:30:31.107 [2024-10-01 15:48:10.410468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.410482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.410497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.410507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.410518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.410528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.410540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.410549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.410561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 
nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.410570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.410583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.410591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.410601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.410609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.410619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.410628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.410638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.410646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.410656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.410663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:31.107 [2024-10-01 15:48:10.410673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.410682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.410692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.410699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.410709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.410718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.410728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.410736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.410745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.410753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.410762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.410770] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.410779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.410788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.410798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.410807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.410817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.410826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.410836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.410843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.107 [2024-10-01 15:48:10.410853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.107 [2024-10-01 15:48:10.410862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.410872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.410879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.410890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.410902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.410911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.410919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.410930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.410938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.410947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.410955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.410966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.410973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.410982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.410991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.411001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.411008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.411018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.411028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.411040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.411048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.411059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.411067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.411077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 
15:48:10.411084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.411094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.411102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.411111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.411118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.411127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.411135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.411145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.411154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.411163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.411171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.411181] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.411189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.411198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.411206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.411216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.411223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.411233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.411241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.411251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.411260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.411270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.411278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.411288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.411295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.411306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.411314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.411324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.411331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.411342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.411350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.411360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.411368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.411379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 
[2024-10-01 15:48:10.411387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.411396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.411404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.411415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.411423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.411432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.411440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.411451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.411459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.411468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.411475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.411487] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.411495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.411506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.411514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.411523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.411531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.411541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.411549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.411559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.411567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.411578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.411585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.411594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.411603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.411613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.411621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.411631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.411639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.411648] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242b870 is same with the state(6) to be set 00:30:31.108 [2024-10-01 15:48:10.412925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.412940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.412951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.412959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:31.108 [2024-10-01 15:48:10.412969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.412977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.412990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.412998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.413008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.413016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.413026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.413034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.413044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.413051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.413062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.413070] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.413079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.413086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.413097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.413105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.413114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.413122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.413134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.413141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.413150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.413158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.413169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.413177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.413186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.413193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.413203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.413214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.413223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.413230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.413240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.413249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.413258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.413266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:31.108 [2024-10-01 15:48:10.413276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.413285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.413294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.413302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.413313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.413321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.413330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.413339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.108 [2024-10-01 15:48:10.413349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.108 [2024-10-01 15:48:10.413356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.413365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.413374] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.413384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.413391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.413400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.413409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.413419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.413426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.413437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.413446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.413456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.413463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.413474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.413482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.413491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.413499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.413510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.413518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.413528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.413535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.413544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.413552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.413562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.413570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.413579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.413588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.413598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.413606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.413615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.413623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.413634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.413641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.413650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.413660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.413670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 
15:48:10.413677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.413687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.413695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.413705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.413712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.413723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.413730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.413740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.413748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.413758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.413766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.413775] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.413784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.413794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.413802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.413811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.413819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.413830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.413837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.413847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.413855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.413865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.413872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.413884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.413892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.413906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.413914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.413924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.413932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.413943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.413950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.413960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.413967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.413978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 
[2024-10-01 15:48:10.413985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.413995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.414003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.414013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.414020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.414030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.414039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.414049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.414057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.414066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.414074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.414083] nvme_tcp.c: 
337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385ce0 is same with the state(6) to be set 00:30:31.109 [2024-10-01 15:48:10.415609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.415630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.415642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.415654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.415665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.415673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.415682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.415691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.415701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.415708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.415720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.415728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.415737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.415745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.415756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.415763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.415773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.415781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.415791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.415798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.415808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.415816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:31.109 [2024-10-01 15:48:10.415827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.415834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.415843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.415852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.415862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.415870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.415880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.415888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.415906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.415915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.415925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.415932] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.415942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.415950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.415959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.415967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.415978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.415986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.415996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.416004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.416014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.416022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.416031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.416040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.416050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.416058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.416068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.416076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.416086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.416093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.416104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.416114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.416124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.416132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.416142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.416149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.416159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.109 [2024-10-01 15:48:10.416167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.109 [2024-10-01 15:48:10.416178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.110 [2024-10-01 15:48:10.416185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.110 [2024-10-01 15:48:10.416194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.110 [2024-10-01 15:48:10.416203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.110 [2024-10-01 15:48:10.416214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.110 [2024-10-01 15:48:10.416221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.110 [2024-10-01 15:48:10.416230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.110 [2024-10-01 
15:48:10.416238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.110 [2024-10-01 15:48:10.416247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.110 [2024-10-01 15:48:10.416255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.110 [2024-10-01 15:48:10.416265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.110 [2024-10-01 15:48:10.416273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.110 [2024-10-01 15:48:10.416283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.110 [2024-10-01 15:48:10.416291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.110 [2024-10-01 15:48:10.416301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.110 [2024-10-01 15:48:10.416308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.110 [2024-10-01 15:48:10.416318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.110 [2024-10-01 15:48:10.416326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.110 [2024-10-01 15:48:10.416337] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.110 [2024-10-01 15:48:10.416345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.110 [2024-10-01 15:48:10.416356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.110 [2024-10-01 15:48:10.416366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.110 [2024-10-01 15:48:10.416377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.110 [2024-10-01 15:48:10.416386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.110 [2024-10-01 15:48:10.416398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.110 [2024-10-01 15:48:10.416406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.110 [2024-10-01 15:48:10.416416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.110 [2024-10-01 15:48:10.416425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.110 [2024-10-01 15:48:10.416436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.110 [2024-10-01 15:48:10.416445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.110 [2024-10-01 15:48:10.416457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.110 [2024-10-01 15:48:10.416466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.110 [2024-10-01 15:48:10.416478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.110 [2024-10-01 15:48:10.416486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.110 [2024-10-01 15:48:10.416498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.110 [2024-10-01 15:48:10.416506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.110 [2024-10-01 15:48:10.416516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.110 [2024-10-01 15:48:10.416523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.110 [2024-10-01 15:48:10.416533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.110 [2024-10-01 15:48:10.416542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.110 [2024-10-01 15:48:10.416552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.110 
[2024-10-01 15:48:10.416559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.110 [2024-10-01 15:48:10.416569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.110 [2024-10-01 15:48:10.416579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.110 [2024-10-01 15:48:10.416589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.110 [2024-10-01 15:48:10.416597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.110 [2024-10-01 15:48:10.416606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.110 [2024-10-01 15:48:10.416614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.110 [2024-10-01 15:48:10.416624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.110 [2024-10-01 15:48:10.416632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.110 [2024-10-01 15:48:10.416641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.110 [2024-10-01 15:48:10.416649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.110 [2024-10-01 15:48:10.416660] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.110 [2024-10-01 15:48:10.416667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.110 [2024-10-01 15:48:10.416677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.110 [2024-10-01 15:48:10.416685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.110 [2024-10-01 15:48:10.416695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.110 [2024-10-01 15:48:10.416702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.110 [2024-10-01 15:48:10.416712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.110 [2024-10-01 15:48:10.416721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.110 [2024-10-01 15:48:10.416731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.110 [2024-10-01 15:48:10.416738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.110 [2024-10-01 15:48:10.416748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.110 [2024-10-01 15:48:10.416757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.110 [2024-10-01 15:48:10.416766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.110 [2024-10-01 15:48:10.416773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.110 [2024-10-01 15:48:10.416784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.110 [2024-10-01 15:48:10.416792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.110 [2024-10-01 15:48:10.416802] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2389c20 is same with the state(6) to be set 00:30:31.110 [2024-10-01 15:48:10.418919] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:31.110 [2024-10-01 15:48:10.418941] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:30:31.110 [2024-10-01 15:48:10.418952] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:30:31.110 [2024-10-01 15:48:10.418962] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:30:31.110 [2024-10-01 15:48:10.419048] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:31.110 [2024-10-01 15:48:10.419061] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:30:31.110 [2024-10-01 15:48:10.419135] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:30:31.110 task offset: 24576 on job bdev=Nvme8n1 fails 00:30:31.110 00:30:31.110 Latency(us) 00:30:31.110 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:31.110 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:31.110 Job: Nvme1n1 ended in about 0.97 seconds with error 00:30:31.110 Verification LBA range: start 0x0 length 0x400 00:30:31.110 Nvme1n1 : 0.97 197.42 12.34 65.81 0.00 240482.99 18677.76 227191.47 00:30:31.110 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:31.110 Job: Nvme2n1 ended in about 0.98 seconds with error 00:30:31.110 Verification LBA range: start 0x0 length 0x400 00:30:31.110 Nvme2n1 : 0.98 131.28 8.20 65.64 0.00 315305.81 21408.43 283115.52 00:30:31.110 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:31.110 Job: Nvme3n1 ended in about 0.98 seconds with error 00:30:31.110 Verification LBA range: start 0x0 length 0x400 00:30:31.110 Nvme3n1 : 0.98 196.41 12.28 65.47 0.00 232270.72 16711.68 244667.73 00:30:31.110 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:31.110 Job: Nvme4n1 ended in about 0.96 seconds with error 00:30:31.110 Verification LBA range: start 0x0 length 0x400 00:30:31.110 Nvme4n1 : 0.96 199.70 12.48 66.57 0.00 223446.08 5352.11 246415.36 00:30:31.110 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:31.110 Job: Nvme5n1 ended in about 0.96 seconds with error 00:30:31.110 Verification LBA range: start 0x0 length 0x400 00:30:31.110 Nvme5n1 : 0.96 199.45 12.47 66.48 0.00 218975.31 5980.16 248162.99 00:30:31.110 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:31.110 Job: Nvme6n1 ended in about 0.98 seconds with error 00:30:31.110 Verification LBA range: start 0x0 length 0x400 
00:30:31.110 Nvme6n1 : 0.98 144.90 9.06 65.31 0.00 271662.10 9775.79 260396.37 00:30:31.110 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:31.110 Job: Nvme7n1 ended in about 0.98 seconds with error 00:30:31.110 Verification LBA range: start 0x0 length 0x400 00:30:31.110 Nvme7n1 : 0.98 195.43 12.21 65.14 0.00 214444.59 21189.97 246415.36 00:30:31.110 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:31.110 Job: Nvme8n1 ended in about 0.95 seconds with error 00:30:31.110 Verification LBA range: start 0x0 length 0x400 00:30:31.110 Nvme8n1 : 0.95 201.32 12.58 67.11 0.00 202533.31 3331.41 251658.24 00:30:31.110 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:31.110 Job: Nvme9n1 ended in about 0.95 seconds with error 00:30:31.110 Verification LBA range: start 0x0 length 0x400 00:30:31.110 Nvme9n1 : 0.95 201.08 12.57 67.03 0.00 198192.64 6580.91 246415.36 00:30:31.110 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:31.110 Job: Nvme10n1 ended in about 0.99 seconds with error 00:30:31.110 Verification LBA range: start 0x0 length 0x400 00:30:31.110 Nvme10n1 : 0.99 129.93 8.12 64.96 0.00 268212.34 23483.73 253405.87 00:30:31.110 =================================================================================================================== 00:30:31.110 Total : 1796.91 112.31 659.51 0.00 234998.47 3331.41 283115.52 00:30:31.110 [2024-10-01 15:48:10.445277] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:31.110 [2024-10-01 15:48:10.445309] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:30:31.110 [2024-10-01 15:48:10.445656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.110 [2024-10-01 15:48:10.445675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f84b00 with addr=10.0.0.2, port=4420 
00:30:31.110 [2024-10-01 15:48:10.445686] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f84b00 is same with the state(6) to be set 00:30:31.110 [2024-10-01 15:48:10.445903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.110 [2024-10-01 15:48:10.445914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f846a0 with addr=10.0.0.2, port=4420 00:30:31.110 [2024-10-01 15:48:10.445923] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f846a0 is same with the state(6) to be set 00:30:31.110 [2024-10-01 15:48:10.446283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.110 [2024-10-01 15:48:10.446293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f82e10 with addr=10.0.0.2, port=4420 00:30:31.110 [2024-10-01 15:48:10.446301] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82e10 is same with the state(6) to be set 00:30:31.110 [2024-10-01 15:48:10.446658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.110 [2024-10-01 15:48:10.446667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a89a0 with addr=10.0.0.2, port=4420 00:30:31.110 [2024-10-01 15:48:10.446675] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a89a0 is same with the state(6) to be set 00:30:31.110 [2024-10-01 15:48:10.448239] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:30:31.110 [2024-10-01 15:48:10.448256] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:30:31.110 [2024-10-01 15:48:10.448265] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:30:31.110 [2024-10-01 15:48:10.448274] 
nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:30:31.110 [2024-10-01 15:48:10.448679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.110 [2024-10-01 15:48:10.448692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e8e610 with addr=10.0.0.2, port=4420 00:30:31.110 [2024-10-01 15:48:10.448700] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8e610 is same with the state(6) to be set 00:30:31.110 [2024-10-01 15:48:10.449005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.110 [2024-10-01 15:48:10.449016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23dc3d0 with addr=10.0.0.2, port=4420 00:30:31.110 [2024-10-01 15:48:10.449024] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dc3d0 is same with the state(6) to be set 00:30:31.110 [2024-10-01 15:48:10.449037] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f84b00 (9): Bad file descriptor 00:30:31.110 [2024-10-01 15:48:10.449049] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f846a0 (9): Bad file descriptor 00:30:31.110 [2024-10-01 15:48:10.449058] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f82e10 (9): Bad file descriptor 00:30:31.110 [2024-10-01 15:48:10.449073] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23a89a0 (9): Bad file descriptor 00:30:31.110 [2024-10-01 15:48:10.449105] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:31.110 [2024-10-01 15:48:10.449116] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:30:31.110 [2024-10-01 15:48:10.449127] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:31.110 [2024-10-01 15:48:10.449138] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:31.110 [2024-10-01 15:48:10.449741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.110 [2024-10-01 15:48:10.449758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a8680 with addr=10.0.0.2, port=4420 00:30:31.110 [2024-10-01 15:48:10.449766] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a8680 is same with the state(6) to be set 00:30:31.110 [2024-10-01 15:48:10.450081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.110 [2024-10-01 15:48:10.450092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b0700 with addr=10.0.0.2, port=4420 00:30:31.110 [2024-10-01 15:48:10.450100] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b0700 is same with the state(6) to be set 00:30:31.110 [2024-10-01 15:48:10.450516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.110 [2024-10-01 15:48:10.450527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f2230 with addr=10.0.0.2, port=4420 00:30:31.110 [2024-10-01 15:48:10.450535] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23f2230 is same with the state(6) to be set 00:30:31.110 [2024-10-01 15:48:10.450864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.110 [2024-10-01 15:48:10.450874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f26a0 with addr=10.0.0.2, port=4420 00:30:31.110 [2024-10-01 15:48:10.450881] nvme_tcp.c: 
337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23f26a0 is same with the state(6) to be set 00:30:31.110 [2024-10-01 15:48:10.450891] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e8e610 (9): Bad file descriptor 00:30:31.110 [2024-10-01 15:48:10.450905] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23dc3d0 (9): Bad file descriptor 00:30:31.110 [2024-10-01 15:48:10.450914] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:31.110 [2024-10-01 15:48:10.450921] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:31.110 [2024-10-01 15:48:10.450930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:31.110 [2024-10-01 15:48:10.450943] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:30:31.111 [2024-10-01 15:48:10.450949] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:30:31.111 [2024-10-01 15:48:10.450956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:30:31.111 [2024-10-01 15:48:10.450968] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:30:31.111 [2024-10-01 15:48:10.450974] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:30:31.111 [2024-10-01 15:48:10.450982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:30:31.111 [2024-10-01 15:48:10.450993] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:30:31.111 [2024-10-01 15:48:10.451000] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:30:31.111 [2024-10-01 15:48:10.451010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:30:31.111 [2024-10-01 15:48:10.451084] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:31.111 [2024-10-01 15:48:10.451092] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:31.111 [2024-10-01 15:48:10.451099] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:31.111 [2024-10-01 15:48:10.451105] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:31.111 [2024-10-01 15:48:10.451114] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23a8680 (9): Bad file descriptor 00:30:31.111 [2024-10-01 15:48:10.451123] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b0700 (9): Bad file descriptor 00:30:31.111 [2024-10-01 15:48:10.451133] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23f2230 (9): Bad file descriptor 00:30:31.111 [2024-10-01 15:48:10.451142] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23f26a0 (9): Bad file descriptor 00:30:31.111 [2024-10-01 15:48:10.451150] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:30:31.111 [2024-10-01 15:48:10.451157] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:30:31.111 [2024-10-01 15:48:10.451164] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:30:31.111 [2024-10-01 15:48:10.451174] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:30:31.111 [2024-10-01 15:48:10.451181] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:30:31.111 [2024-10-01 15:48:10.451187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:30:31.111 [2024-10-01 15:48:10.451216] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:31.111 [2024-10-01 15:48:10.451223] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:31.111 [2024-10-01 15:48:10.451230] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:30:31.111 [2024-10-01 15:48:10.451236] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:30:31.111 [2024-10-01 15:48:10.451243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:30:31.111 [2024-10-01 15:48:10.451252] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:30:31.111 [2024-10-01 15:48:10.451259] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:30:31.111 [2024-10-01 15:48:10.451266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
00:30:31.111 [2024-10-01 15:48:10.451277] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:30:31.111 [2024-10-01 15:48:10.451285] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:30:31.111 [2024-10-01 15:48:10.451292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:30:31.111 [2024-10-01 15:48:10.451302] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:30:31.111 [2024-10-01 15:48:10.451308] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:30:31.111 [2024-10-01 15:48:10.451315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:30:31.111 [2024-10-01 15:48:10.451347] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:31.111 [2024-10-01 15:48:10.451355] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:31.111 [2024-10-01 15:48:10.451364] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:31.111 [2024-10-01 15:48:10.451370] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:31.372 15:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:30:32.352 15:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3277627 00:30:32.352 15:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:30:32.352 15:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3277627 00:30:32.352 15:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:30:32.352 15:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:32.352 15:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:30:32.352 15:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:32.352 15:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 3277627 00:30:32.352 15:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:30:32.352 15:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:32.352 15:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:30:32.352 15:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:30:32.352 15:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:30:32.352 15:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
00:30:32.352 15:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:30:32.352 15:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:30:32.352 15:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:32.352 15:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:32.352 15:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:30:32.352 15:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # nvmfcleanup 00:30:32.352 15:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:30:32.352 15:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:32.352 15:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:30:32.352 15:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:32.352 15:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:32.352 rmmod nvme_tcp 00:30:32.352 rmmod nvme_fabrics 00:30:32.352 rmmod nvme_keyring 00:30:32.352 15:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:32.352 15:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:30:32.352 15:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:30:32.352 15:48:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@513 -- # '[' -n 3277272 ']' 00:30:32.352 15:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@514 -- # killprocess 3277272 00:30:32.352 15:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 3277272 ']' 00:30:32.352 15:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 3277272 00:30:32.352 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3277272) - No such process 00:30:32.352 15:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@977 -- # echo 'Process with pid 3277272 is not found' 00:30:32.352 Process with pid 3277272 is not found 00:30:32.352 15:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:30:32.352 15:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:30:32.352 15:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:30:32.352 15:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:30:32.352 15:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@787 -- # iptables-save 00:30:32.352 15:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:30:32.352 15:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@787 -- # iptables-restore 00:30:32.352 15:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:32.352 15:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:30:32.352 15:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:32.352 15:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:32.352 15:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:34.896 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:34.896 00:30:34.896 real 0m7.964s 00:30:34.896 user 0m19.832s 00:30:34.896 sys 0m1.273s 00:30:34.896 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:34.896 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:34.896 ************************************ 00:30:34.896 END TEST nvmf_shutdown_tc3 00:30:34.896 ************************************ 00:30:34.896 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:30:34.896 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:30:34.896 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:30:34.896 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:34.896 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:34.896 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:34.896 ************************************ 00:30:34.896 START TEST nvmf_shutdown_tc4 00:30:34.896 ************************************ 00:30:34.896 15:48:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc4 00:30:34.896 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:30:34.896 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:30:34.896 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:30:34.896 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:34.896 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@472 -- # prepare_net_devs 00:30:34.896 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:30:34.896 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:30:34.896 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:34.896 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:34.896 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:34.896 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:30:34.896 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:30:34.896 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:30:34.896 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:34.896 15:48:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:34.896 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:30:34.896 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:34.896 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:34.896 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:34.896 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:34.896 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:34.896 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:30:34.896 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:34.896 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:30:34.896 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:30:34.896 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:30:34.896 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:30:34.896 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:30:34.896 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:30:34.896 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:30:34.896 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:34.896 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:34.896 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:34.897 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:34.897 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@374 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:34.897 Found net devices under 0000:31:00.0: cvl_0_0 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:34.897 15:48:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:34.897 Found net devices under 0000:31:00.1: cvl_0_1 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # is_hw=yes 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:34.897 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:30:34.897 15:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:34.897 15:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:34.897 15:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:34.897 15:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:34.897 15:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:34.897 15:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:34.897 15:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:34.897 15:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:34.897 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:34.897 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.639 ms 00:30:34.897 00:30:34.897 --- 10.0.0.2 ping statistics --- 00:30:34.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:34.897 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:30:34.897 15:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:34.897 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:34.897 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:30:34.897 00:30:34.897 --- 10.0.0.1 ping statistics --- 00:30:34.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:34.897 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:30:34.897 15:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:34.897 15:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # return 0 00:30:34.897 15:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:30:34.897 15:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:34.897 15:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:30:34.897 15:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:30:34.897 15:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:34.897 15:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:30:34.898 15:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:30:34.898 15:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:30:34.898 15:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:30:34.898 15:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:34.898 15:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:34.898 
15:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@505 -- # nvmfpid=3279539 00:30:34.898 15:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@506 -- # waitforlisten 3279539 00:30:34.898 15:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@831 -- # '[' -z 3279539 ']' 00:30:34.898 15:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:34.898 15:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:34.898 15:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:34.898 15:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:34.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:34.898 15:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:34.898 15:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:34.898 [2024-10-01 15:48:14.347674] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
00:30:34.898 [2024-10-01 15:48:14.347738] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:35.159 [2024-10-01 15:48:14.385674] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:35.159 [2024-10-01 15:48:14.433429] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:35.159 [2024-10-01 15:48:14.463587] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:35.159 [2024-10-01 15:48:14.463622] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:35.159 [2024-10-01 15:48:14.463627] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:35.159 [2024-10-01 15:48:14.463632] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:35.159 [2024-10-01 15:48:14.463636] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:35.159 [2024-10-01 15:48:14.463744] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:30:35.159 [2024-10-01 15:48:14.463912] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:30:35.159 [2024-10-01 15:48:14.464048] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:30:35.159 [2024-10-01 15:48:14.464171] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:30:35.731 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:35.731 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # return 0 00:30:35.731 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:30:35.731 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:35.732 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:35.732 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:35.732 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:35.732 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.732 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:35.732 [2024-10-01 15:48:15.182200] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:35.993 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.993 15:48:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:30:35.993 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:30:35.993 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:35.993 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:35.993 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:35.993 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:35.993 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:35.993 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:35.993 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:35.993 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:35.993 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:35.993 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:35.993 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:35.993 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:35.993 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:30:35.993 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:35.993 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:35.993 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:35.993 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:35.993 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:35.993 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:35.993 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:35.993 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:35.993 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:35.993 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:35.993 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:30:35.993 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.993 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:35.993 Malloc1 00:30:35.993 [2024-10-01 15:48:15.284990] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:35.993 Malloc2 00:30:35.993 Malloc3 00:30:35.993 Malloc4 00:30:35.993 Malloc5 00:30:36.253 Malloc6 00:30:36.253 Malloc7 00:30:36.253 Malloc8 00:30:36.253 Malloc9 
00:30:36.253 Malloc10 00:30:36.253 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:36.253 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:30:36.253 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:36.253 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:36.253 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3279804 00:30:36.253 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:30:36.253 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:30:36.513 [2024-10-01 15:48:15.755814] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:30:41.806 15:48:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:41.806 15:48:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3279539 00:30:41.806 15:48:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 3279539 ']' 00:30:41.806 15:48:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 3279539 00:30:41.806 15:48:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # uname 00:30:41.806 15:48:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:41.806 15:48:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3279539 00:30:41.806 15:48:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:41.806 15:48:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:41.806 15:48:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3279539' 00:30:41.806 killing process with pid 3279539 00:30:41.806 15:48:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@969 -- # kill 3279539 00:30:41.806 15:48:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@974 -- # wait 3279539 00:30:41.806 [2024-10-01 15:48:20.765786] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a0ae80 is same with the state(6) to be set 00:30:41.806 [2024-10-01 
15:48:20.765833] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a0ae80 is same with the state(6) to be set 00:30:41.806 [2024-10-01 15:48:20.765843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a0ae80 is same with the state(6) to be set 00:30:41.806 [2024-10-01 15:48:20.765849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a0ae80 is same with the state(6) to be set 00:30:41.806 [2024-10-01 15:48:20.765856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a0ae80 is same with the state(6) to be set 00:30:41.806 [2024-10-01 15:48:20.765862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a0ae80 is same with the state(6) to be set 00:30:41.806 [2024-10-01 15:48:20.765868] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a0ae80 is same with the state(6) to be set 00:30:41.806 [2024-10-01 15:48:20.766187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a0b350 is same with the state(6) to be set 00:30:41.806 [2024-10-01 15:48:20.766213] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a0b350 is same with the state(6) to be set 00:30:41.806 [2024-10-01 15:48:20.766219] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a0b350 is same with the state(6) to be set 00:30:41.806 [2024-10-01 15:48:20.766224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a0b350 is same with the state(6) to be set 00:30:41.806 [2024-10-01 15:48:20.766320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a0b820 is same with the state(6) to be set 00:30:41.806 [2024-10-01 15:48:20.766378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a0a9b0 is same with the state(6) to be set 00:30:41.806 [2024-10-01 15:48:20.766401] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a0a9b0 is same with the state(6) to be set 00:30:41.806 [2024-10-01 15:48:20.766408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a0a9b0 is same with the state(6) to be set 00:30:41.806 [2024-10-01 15:48:20.766413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a0a9b0 is same with the state(6) to be set 00:30:41.806 [2024-10-01 15:48:20.766430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a0a9b0 is same with the state(6) to be set 00:30:41.806 [2024-10-01 15:48:20.766435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a0a9b0 is same with the state(6) to be set 00:30:41.806 [2024-10-01 15:48:20.768425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1795a50 is same with the state(6) to be set 00:30:41.806 [2024-10-01 15:48:20.768445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1795a50 is same with the state(6) to be set 00:30:41.806 [2024-10-01 15:48:20.768453] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1795a50 is same with the state(6) to be set 00:30:41.806 [2024-10-01 15:48:20.768460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1795a50 is same with the state(6) to be set 00:30:41.806 [2024-10-01 15:48:20.768467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1795a50 is same with the state(6) to be set 00:30:41.806 [2024-10-01 15:48:20.768474] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1795a50 is same with the state(6) to be set 00:30:41.807 [2024-10-01 15:48:20.768516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1795f40 is same with the state(6) to be set 00:30:41.807 [2024-10-01 15:48:20.768531] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1795f40 is same with the state(6) to be set 00:30:41.807 [2024-10-01 15:48:20.768536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1795f40 is same with the state(6) to be set 00:30:41.807 [2024-10-01 15:48:20.768540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1795f40 is same with the state(6) to be set 00:30:41.807 [2024-10-01 15:48:20.768546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1795f40 is same with the state(6) to be set 00:30:41.807 [2024-10-01 15:48:20.768551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1795f40 is same with the state(6) to be set 00:30:41.807 [2024-10-01 15:48:20.768555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1795f40 is same with the state(6) to be set 00:30:41.807 [2024-10-01 15:48:20.768560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1795f40 is same with the state(6) to be set 00:30:41.807 [2024-10-01 15:48:20.768565] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1795f40 is same with the state(6) to be set 00:30:41.807 [2024-10-01 15:48:20.768570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1795f40 is same with the state(6) to be set 00:30:41.807 [2024-10-01 15:48:20.768881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1796430 is same with the state(6) to be set 00:30:41.807 [2024-10-01 15:48:20.768905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1796430 is same with the state(6) to be set 00:30:41.807 [2024-10-01 15:48:20.768910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1796430 is same with the state(6) to be set 00:30:41.807 [2024-10-01 15:48:20.768915] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1796430 is same with the state(6) to be set 00:30:41.807 [2024-10-01 15:48:20.768920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1796430 is same with the state(6) to be set 00:30:41.807 [2024-10-01 15:48:20.768925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1796430 is same with the state(6) to be set 00:30:41.807 [2024-10-01 15:48:20.768930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1796430 is same with the state(6) to be set 00:30:41.807 [2024-10-01 15:48:20.768934] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1796430 is same with the state(6) to be set 00:30:41.807 [2024-10-01 15:48:20.768940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1796430 is same with the state(6) to be set 00:30:41.807 [2024-10-01 15:48:20.769093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1795580 is same with the state(6) to be set 00:30:41.807 [2024-10-01 15:48:20.769117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1795580 is same with the state(6) to be set 00:30:41.807 [2024-10-01 15:48:20.769123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1795580 is same with the state(6) to be set 00:30:41.807 [2024-10-01 15:48:20.769128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1795580 is same with the state(6) to be set 00:30:41.807 [2024-10-01 15:48:20.769133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1795580 is same with the state(6) to be set 00:30:41.807 [2024-10-01 15:48:20.769361] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1796ca0 is same with the state(6) to be set 00:30:41.807 [2024-10-01 15:48:20.769376] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1796ca0 is same with the state(6) to be set 00:30:41.807 [2024-10-01 15:48:20.769383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1796ca0 is same with the state(6) to be set 00:30:41.807 [2024-10-01 15:48:20.769669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1797680 is same with the state(6) to be set 00:30:41.807 [2024-10-01 15:48:20.769684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1797680 is same with the state(6) to be set 00:30:41.807 [2024-10-01 15:48:20.769689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1797680 is same with the state(6) to be set 00:30:41.807 [2024-10-01 15:48:20.769986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17967d0 is same with the state(6) to be set 00:30:41.807 [2024-10-01 15:48:20.770001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17967d0 is same with the state(6) to be set 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 starting I/O failed: -6 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 starting I/O failed: -6 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 starting I/O failed: -6 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 Write 
completed with error (sct=0, sc=8) 00:30:41.807 starting I/O failed: -6 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 starting I/O failed: -6 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 [2024-10-01 15:48:20.770722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.807 starting I/O failed: -6 00:30:41.807 starting I/O failed: -6 00:30:41.807 starting I/O failed: -6 00:30:41.807 starting I/O failed: -6 00:30:41.807 starting I/O failed: -6 00:30:41.807 starting I/O failed: -6 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 starting I/O failed: -6 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 starting I/O failed: -6 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 starting I/O failed: -6 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 starting I/O failed: -6 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 starting I/O failed: -6 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 starting I/O failed: -6 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 starting I/O failed: -6 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 Write completed with error (sct=0, 
sc=8) 00:30:41.807 starting I/O failed: -6 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 starting I/O failed: -6 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 starting I/O failed: -6 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 starting I/O failed: -6 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 starting I/O failed: -6 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 starting I/O failed: -6 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 starting I/O failed: -6 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 starting I/O failed: -6 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 starting I/O failed: -6 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 starting I/O failed: -6 00:30:41.807 starting I/O failed: -6 00:30:41.807 starting I/O failed: -6 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 starting I/O failed: -6 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 starting I/O failed: -6 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 starting I/O failed: -6 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 starting I/O failed: -6 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 starting I/O failed: -6 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 
starting I/O failed: -6 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 starting I/O failed: -6 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 starting I/O failed: -6 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 starting I/O failed: -6 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.807 starting I/O failed: -6 00:30:41.807 Write completed with error (sct=0, sc=8) 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 starting I/O failed: -6 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 starting I/O failed: -6 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 starting I/O failed: -6 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 starting I/O failed: -6 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 starting I/O failed: -6 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 starting I/O failed: -6 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 starting I/O failed: -6 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 starting I/O failed: -6 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 starting I/O failed: -6 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 starting I/O failed: -6 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 starting I/O failed: -6 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 starting I/O failed: -6 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 starting I/O failed: -6 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 starting I/O failed: -6 00:30:41.808 
Write completed with error (sct=0, sc=8) 00:30:41.808 starting I/O failed: -6 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 starting I/O failed: -6 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 starting I/O failed: -6 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 starting I/O failed: -6 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 starting I/O failed: -6 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 starting I/O failed: -6 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 starting I/O failed: -6 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 starting I/O failed: -6 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 starting I/O failed: -6 00:30:41.808 [2024-10-01 15:48:20.773403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:41.808 NVMe io qpair process completion error 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 starting I/O failed: -6 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 starting I/O failed: -6 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 starting I/O failed: -6 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 Write completed with error (sct=0, 
sc=8) 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 starting I/O failed: -6 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 starting I/O failed: -6 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 starting I/O failed: -6 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 starting I/O failed: -6 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 starting I/O failed: -6 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 starting I/O failed: -6 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 starting I/O failed: -6 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 [2024-10-01 15:48:20.774670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 Write completed with error (sct=0, 
sc=8) 00:30:41.808 starting I/O failed: -6 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 starting I/O failed: -6 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 starting I/O failed: -6 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 starting I/O failed: -6 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 starting I/O failed: -6 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 starting I/O failed: -6 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 starting I/O failed: -6 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 starting I/O failed: -6 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 starting I/O failed: -6 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 starting I/O failed: -6 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 starting I/O failed: -6 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 starting I/O failed: -6 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 starting I/O failed: -6 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 starting I/O failed: -6 00:30:41.808 Write completed with error (sct=0, sc=8) 00:30:41.808 
00:30:41.808 Write completed with error (sct=0, sc=8)
00:30:41.808 starting I/O failed: -6
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:30:41.809 [2024-10-01 15:48:20.775602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[repeated entries omitted]
00:30:41.809 [2024-10-01 15:48:20.776536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[repeated entries omitted]
00:30:41.809 [2024-10-01 15:48:20.777978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:41.809 NVMe io qpair process completion error
[repeated entries omitted]
00:30:41.810 [2024-10-01 15:48:20.779133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[repeated entries omitted]
00:30:41.810 [2024-10-01 15:48:20.779976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[repeated entries omitted]
00:30:41.810 [2024-10-01 15:48:20.780901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[repeated entries omitted]
00:30:41.811 [2024-10-01 15:48:20.784317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:41.811 NVMe io qpair process completion error
[repeated entries omitted]
00:30:41.811 [2024-10-01 15:48:20.785588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[repeated entries omitted]
00:30:41.812 [2024-10-01 15:48:20.786540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[repeated entries omitted]
00:30:41.812 [2024-10-01 15:48:20.787459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[repeated entries omitted]
00:30:41.813 [2024-10-01 15:48:20.789090] nvme_qpair.c:
804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.813 NVMe io qpair process completion error 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 starting I/O failed: -6 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 starting I/O failed: -6 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 starting I/O failed: -6 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 starting I/O failed: -6 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 starting I/O failed: -6 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 starting I/O failed: -6 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 starting I/O failed: -6 00:30:41.813 Write completed with error (sct=0, sc=8) 
00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 starting I/O failed: -6 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 starting I/O failed: -6 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 starting I/O failed: -6 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 [2024-10-01 15:48:20.790225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:41.813 starting I/O failed: -6 00:30:41.813 starting I/O failed: -6 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 starting I/O failed: -6 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 starting I/O failed: -6 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 starting I/O failed: -6 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 starting I/O failed: -6 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 starting I/O failed: -6 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 starting I/O failed: -6 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 starting I/O failed: -6 00:30:41.813 Write completed with 
error (sct=0, sc=8) 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 starting I/O failed: -6 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 starting I/O failed: -6 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 starting I/O failed: -6 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 starting I/O failed: -6 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 starting I/O failed: -6 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 starting I/O failed: -6 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 starting I/O failed: -6 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 starting I/O failed: -6 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 starting I/O failed: -6 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 starting I/O failed: -6 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 starting I/O failed: -6 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 starting I/O failed: -6 00:30:41.813 [2024-10-01 15:48:20.791183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 starting I/O failed: -6 00:30:41.813 Write completed 
with error (sct=0, sc=8) 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 starting I/O failed: -6 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 starting I/O failed: -6 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 starting I/O failed: -6 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 starting I/O failed: -6 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 starting I/O failed: -6 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 starting I/O failed: -6 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 starting I/O failed: -6 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 starting I/O failed: -6 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 starting I/O failed: -6 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 starting I/O failed: -6 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 starting I/O failed: -6 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 starting I/O failed: -6 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 starting I/O failed: -6 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 starting I/O failed: -6 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 starting I/O failed: -6 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 starting I/O failed: -6 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 starting I/O failed: -6 00:30:41.813 Write completed with error (sct=0, sc=8) 00:30:41.813 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 
Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 [2024-10-01 15:48:20.792109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.814 Write 
completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 
Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 
00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 [2024-10-01 15:48:20.795909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:41.814 NVMe io qpair process completion error 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 Write completed 
with error (sct=0, sc=8) 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 starting I/O failed: -6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 [2024-10-01 15:48:20.797160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.814 starting I/O failed: 
-6 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.814 Write completed with error (sct=0, sc=8) 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 
00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 [2024-10-01 15:48:20.797992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:41.815 starting I/O failed: -6 00:30:41.815 starting I/O failed: -6 00:30:41.815 starting I/O failed: -6 00:30:41.815 starting I/O failed: -6 00:30:41.815 starting I/O failed: -6 00:30:41.815 starting I/O failed: -6 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 
00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with 
error (sct=0, sc=8) 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 [2024-10-01 15:48:20.799354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write completed with error (sct=0, sc=8) 00:30:41.815 starting I/O failed: -6 00:30:41.815 Write 
completed with error (sct=0, sc=8)
00:30:41.815 starting I/O failed: -6
00:30:41.815 Write completed with error (sct=0, sc=8)
00:30:41.815 starting I/O failed: -6
[previous two lines repeated for each outstanding write, 00:30:41.815 – 00:30:41.816]
00:30:41.816 [2024-10-01 15:48:20.800819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:41.816 NVMe io qpair process completion error
00:30:41.816 Write completed with error (sct=0, sc=8)
00:30:41.816 starting I/O failed: -6
[previous two lines repeated for each outstanding write at 00:30:41.816]
00:30:41.816 [2024-10-01 15:48:20.802086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:41.816 Write completed with error (sct=0, sc=8)
00:30:41.816 starting I/O failed: -6
[previous two lines repeated for each outstanding write at 00:30:41.816]
00:30:41.816 [2024-10-01 15:48:20.802914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:41.816 Write completed with error (sct=0, sc=8)
00:30:41.816 starting I/O failed: -6
[previous two lines repeated for each outstanding write, 00:30:41.816 – 00:30:41.817]
00:30:41.817 [2024-10-01 15:48:20.803843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.817 Write completed with error (sct=0, sc=8)
00:30:41.817 starting I/O failed: -6
[previous two lines repeated for each outstanding write, 00:30:41.817 – 00:30:41.818]
00:30:41.818 [2024-10-01 15:48:20.806965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:41.818 NVMe io qpair process completion error
00:30:41.818 Write completed with error (sct=0, sc=8)
00:30:41.818 starting I/O failed: -6
[previous two lines repeated for each outstanding write at 00:30:41.818]
00:30:41.818 [2024-10-01 15:48:20.808049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:41.818 Write completed with error (sct=0, sc=8)
00:30:41.818 starting I/O failed: -6
[previous two lines repeated for each outstanding write at 00:30:41.818]
00:30:41.818 [2024-10-01 15:48:20.808885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:41.818 Write completed with error (sct=0, sc=8)
00:30:41.818 starting I/O failed: -6
[previous two lines repeated for each outstanding write, 00:30:41.818 – 00:30:41.819]
00:30:41.819 [2024-10-01 15:48:20.809809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.819 Write completed with error (sct=0, sc=8)
00:30:41.819 starting I/O failed: -6
[previous two lines repeated for each outstanding write at 00:30:41.819]
00:30:41.819 [2024-10-01 15:48:20.811509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:41.819 NVMe io qpair process completion error
00:30:41.819 Write completed with error (sct=0, sc=8)
00:30:41.819 starting I/O failed: -6
[previous two lines repeated for each outstanding write, 00:30:41.819 – 00:30:41.820]
00:30:41.820 Write completed with
error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 
starting I/O failed: -6 00:30:41.820 starting I/O failed: -6 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O 
failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting 
I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.820 starting I/O failed: -6 00:30:41.820 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 
starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 [2024-10-01 15:48:20.815533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.821 NVMe io qpair process completion error 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 
Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with 
error (sct=0, sc=8) 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 Write completed with error (sct=0, sc=8) 
00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 
00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.821 starting I/O failed: -6 00:30:41.821 Write completed with error (sct=0, sc=8) 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 starting I/O failed: -6 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 starting I/O failed: -6 00:30:41.822 [2024-10-01 15:48:20.817773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.822 starting I/O failed: -6 00:30:41.822 starting I/O failed: -6 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 starting I/O failed: -6 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 starting I/O failed: -6 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 starting I/O failed: -6 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 starting I/O failed: -6 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 starting I/O failed: -6 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 starting I/O failed: -6 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 starting I/O failed: -6 00:30:41.822 Write completed with error 
(sct=0, sc=8) 00:30:41.822 starting I/O failed: -6 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 starting I/O failed: -6 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 starting I/O failed: -6 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 starting I/O failed: -6 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 starting I/O failed: -6 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 starting I/O failed: -6 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 starting I/O failed: -6 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 starting I/O failed: -6 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 starting I/O failed: -6 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 starting I/O failed: -6 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 starting I/O failed: -6 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 starting I/O failed: -6 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 starting I/O failed: -6 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 starting I/O failed: -6 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 starting I/O failed: -6 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 starting I/O failed: -6 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 starting I/O failed: -6 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 starting I/O failed: -6 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 starting I/O failed: -6 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 starting I/O failed: -6 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 starting I/O failed: -6 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 starting I/O failed: -6 00:30:41.822 Write completed with 
error (sct=0, sc=8) 00:30:41.822 starting I/O failed: -6 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 starting I/O failed: -6 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 starting I/O failed: -6 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 starting I/O failed: -6 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 starting I/O failed: -6 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 starting I/O failed: -6 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 starting I/O failed: -6 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 starting I/O failed: -6 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 starting I/O failed: -6 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 starting I/O failed: -6 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 starting I/O failed: -6 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 starting I/O failed: -6 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 starting I/O failed: -6 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 starting I/O failed: -6 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 starting I/O failed: -6 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 starting I/O failed: -6 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 starting I/O failed: -6 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 starting I/O failed: -6 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 starting I/O failed: -6 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 starting I/O failed: -6 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 starting I/O failed: -6 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 starting I/O failed: -6 00:30:41.822 Write completed 
with error (sct=0, sc=8) 00:30:41.822 starting I/O failed: -6 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 starting I/O failed: -6 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 starting I/O failed: -6 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 starting I/O failed: -6 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 starting I/O failed: -6 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 starting I/O failed: -6 00:30:41.822 [2024-10-01 15:48:20.821302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.822 NVMe io qpair process completion error 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 Write 
completed with error (sct=0, sc=8) 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 Write completed with error (sct=0, sc=8) 00:30:41.822 Write 
completed with error (sct=0, sc=8)
00:30:41.823 Write completed with error (sct=0, sc=8)
00:30:41.823 Write completed with error (sct=0, sc=8)
00:30:41.823 Write completed with error (sct=0, sc=8)
00:30:41.823 Write completed with error (sct=0, sc=8)
00:30:41.823 Initializing NVMe Controllers
00:30:41.823 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:30:41.823 Controller IO queue size 128, less than required.
00:30:41.823 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:41.823 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:30:41.823 Controller IO queue size 128, less than required.
00:30:41.823 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:41.823 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:41.823 Controller IO queue size 128, less than required.
00:30:41.823 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:41.823 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:30:41.823 Controller IO queue size 128, less than required.
00:30:41.823 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:41.823 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:30:41.823 Controller IO queue size 128, less than required.
00:30:41.823 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:41.823 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:30:41.823 Controller IO queue size 128, less than required.
00:30:41.823 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:41.823 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:30:41.823 Controller IO queue size 128, less than required.
00:30:41.823 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:41.823 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:30:41.823 Controller IO queue size 128, less than required.
00:30:41.823 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:41.823 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:30:41.823 Controller IO queue size 128, less than required.
00:30:41.823 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:41.823 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:30:41.823 Controller IO queue size 128, less than required.
00:30:41.823 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:41.823 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:30:41.823 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:30:41.823 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:41.823 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:30:41.823 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:30:41.823 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:30:41.823 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:30:41.823 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:30:41.823 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:30:41.823 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:30:41.823 Initialization complete. Launching workers. 
00:30:41.823 ========================================================
00:30:41.823 Latency(us)
00:30:41.823 Device Information : IOPS MiB/s Average min max
00:30:41.823 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1904.43 81.83 67218.90 532.56 137382.00
00:30:41.823 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1860.12 79.93 68482.04 502.98 154557.50
00:30:41.823 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1869.02 80.31 67804.00 754.95 122286.39
00:30:41.823 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1839.13 79.03 68929.36 622.00 124935.95
00:30:41.823 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1804.78 77.55 70288.57 724.55 119143.09
00:30:41.823 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1864.57 80.12 68057.53 682.30 120705.12
00:30:41.823 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1890.43 81.23 67184.79 634.79 122656.18
00:30:41.823 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1868.81 80.30 67983.88 666.91 130738.08
00:30:41.823 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1881.74 80.86 67556.63 620.42 134401.23
00:30:41.823 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1863.51 80.07 68229.11 619.37 137149.47
00:30:41.823 ========================================================
00:30:41.823 Total : 18646.54 801.22 68161.48 502.98 154557.50
00:30:41.823
00:30:41.823 [2024-10-01 15:48:20.825669] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa3d590 is same with the state(6) to be set
00:30:41.823 [2024-10-01 15:48:20.825720] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa37d30 is same with the state(6) to be set
00:30:41.823 [2024-10-01 15:48:20.825751] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0xa3add0 is same with the state(6) to be set 00:30:41.823 [2024-10-01 15:48:20.825783] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa36850 is same with the state(6) to be set 00:30:41.823 [2024-10-01 15:48:20.825813] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa53af0 is same with the state(6) to be set 00:30:41.823 [2024-10-01 15:48:20.825842] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa3afb0 is same with the state(6) to be set 00:30:41.823 [2024-10-01 15:48:20.825870] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa53cd0 is same with the state(6) to be set 00:30:41.823 [2024-10-01 15:48:20.825905] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa3d3b0 is same with the state(6) to be set 00:30:41.823 [2024-10-01 15:48:20.825935] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa38060 is same with the state(6) to be set 00:30:41.823 [2024-10-01 15:48:20.825964] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa36a30 is same with the state(6) to be set 00:30:41.823 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:30:41.823 15:48:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:30:42.820 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3279804 00:30:42.820 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0 00:30:42.820 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3279804 00:30:42.820 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 
-- # local arg=wait 00:30:42.820 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:42.820 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait 00:30:42.820 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:42.820 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 3279804 00:30:42.820 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1 00:30:42.820 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:42.820 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:42.820 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:42.820 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:30:42.820 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:30:42.820 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:42.820 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:42.820 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:30:42.820 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # 
nvmfcleanup 00:30:42.820 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:30:42.820 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:42.820 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:30:42.820 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:42.820 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:42.820 rmmod nvme_tcp 00:30:42.820 rmmod nvme_fabrics 00:30:42.820 rmmod nvme_keyring 00:30:42.820 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:42.820 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:30:42.820 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:30:42.820 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@513 -- # '[' -n 3279539 ']' 00:30:42.820 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@514 -- # killprocess 3279539 00:30:42.820 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 3279539 ']' 00:30:42.820 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 3279539 00:30:42.820 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3279539) - No such process 00:30:42.820 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@977 -- # echo 'Process with pid 3279539 is not found' 00:30:42.820 Process with pid 3279539 is not found 00:30:42.820 15:48:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:30:42.820 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:30:42.820 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:30:42.820 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:30:42.820 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@787 -- # iptables-save 00:30:42.820 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:30:42.820 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@787 -- # iptables-restore 00:30:42.820 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:42.820 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:42.820 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:42.820 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:42.820 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:44.730 15:48:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:44.730 00:30:44.730 real 0m10.278s 00:30:44.730 user 0m27.758s 00:30:44.730 sys 0m4.126s 00:30:44.730 15:48:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:44.730 15:48:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:44.730 ************************************ 00:30:44.730 END TEST nvmf_shutdown_tc4 00:30:44.730 ************************************ 00:30:44.990 15:48:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:30:44.990 00:30:44.990 real 0m43.773s 00:30:44.990 user 1m45.361s 00:30:44.990 sys 0m14.154s 00:30:44.990 15:48:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:44.990 15:48:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:44.990 ************************************ 00:30:44.990 END TEST nvmf_shutdown 00:30:44.990 ************************************ 00:30:44.990 15:48:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:30:44.990 00:30:44.990 real 19m44.595s 00:30:44.990 user 51m44.817s 00:30:44.990 sys 4m51.141s 00:30:44.990 15:48:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:44.990 15:48:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:30:44.990 ************************************ 00:30:44.990 END TEST nvmf_target_extra 00:30:44.990 ************************************ 00:30:44.990 15:48:24 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:30:44.990 15:48:24 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:44.990 15:48:24 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:44.990 15:48:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:44.990 ************************************ 00:30:44.991 START TEST nvmf_host 00:30:44.991 ************************************ 00:30:44.991 15:48:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:30:45.251 * Looking for test storage... 00:30:45.251 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:30:45.251 15:48:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:45.251 15:48:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lcov --version 00:30:45.251 15:48:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:45.251 15:48:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:45.251 15:48:24 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:45.251 15:48:24 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:45.251 15:48:24 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:45.251 15:48:24 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:30:45.251 15:48:24 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:30:45.251 15:48:24 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:30:45.251 15:48:24 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:30:45.251 15:48:24 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:30:45.251 15:48:24 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:30:45.251 15:48:24 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:30:45.251 15:48:24 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:45.251 15:48:24 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:30:45.251 15:48:24 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:30:45.251 15:48:24 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:45.251 15:48:24 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:45.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.252 --rc genhtml_branch_coverage=1 00:30:45.252 --rc genhtml_function_coverage=1 00:30:45.252 --rc genhtml_legend=1 00:30:45.252 --rc geninfo_all_blocks=1 00:30:45.252 --rc geninfo_unexecuted_blocks=1 00:30:45.252 00:30:45.252 ' 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:45.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.252 --rc genhtml_branch_coverage=1 00:30:45.252 --rc genhtml_function_coverage=1 00:30:45.252 --rc genhtml_legend=1 00:30:45.252 --rc 
geninfo_all_blocks=1 00:30:45.252 --rc geninfo_unexecuted_blocks=1 00:30:45.252 00:30:45.252 ' 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:45.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.252 --rc genhtml_branch_coverage=1 00:30:45.252 --rc genhtml_function_coverage=1 00:30:45.252 --rc genhtml_legend=1 00:30:45.252 --rc geninfo_all_blocks=1 00:30:45.252 --rc geninfo_unexecuted_blocks=1 00:30:45.252 00:30:45.252 ' 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:45.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.252 --rc genhtml_branch_coverage=1 00:30:45.252 --rc genhtml_function_coverage=1 00:30:45.252 --rc genhtml_legend=1 00:30:45.252 --rc geninfo_all_blocks=1 00:30:45.252 --rc geninfo_unexecuted_blocks=1 00:30:45.252 00:30:45.252 ' 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:45.252 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:45.252 ************************************ 00:30:45.252 START TEST nvmf_multicontroller 00:30:45.252 ************************************ 00:30:45.252 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:30:45.513 * Looking for test storage... 
00:30:45.513 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:45.513 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:45.513 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lcov --version 00:30:45.513 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:45.513 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:45.514 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:45.514 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:45.514 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:45.514 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:30:45.514 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:30:45.514 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:30:45.514 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:30:45.514 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:30:45.514 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:30:45.514 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:30:45.514 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:45.514 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:30:45.514 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:30:45.514 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:30:45.514 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:45.514 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:30:45.514 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:30:45.514 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:45.514 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:30:45.514 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:30:45.514 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:30:45.514 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:30:45.514 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:45.514 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:30:45.514 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:30:45.514 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:45.514 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:45.514 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:30:45.514 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:45.514 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:45.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.514 --rc genhtml_branch_coverage=1 00:30:45.514 --rc genhtml_function_coverage=1 
00:30:45.514 --rc genhtml_legend=1 00:30:45.514 --rc geninfo_all_blocks=1 00:30:45.514 --rc geninfo_unexecuted_blocks=1 00:30:45.514 00:30:45.514 ' 00:30:45.514 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:45.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.514 --rc genhtml_branch_coverage=1 00:30:45.514 --rc genhtml_function_coverage=1 00:30:45.514 --rc genhtml_legend=1 00:30:45.514 --rc geninfo_all_blocks=1 00:30:45.514 --rc geninfo_unexecuted_blocks=1 00:30:45.514 00:30:45.514 ' 00:30:45.514 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:45.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.514 --rc genhtml_branch_coverage=1 00:30:45.514 --rc genhtml_function_coverage=1 00:30:45.514 --rc genhtml_legend=1 00:30:45.514 --rc geninfo_all_blocks=1 00:30:45.514 --rc geninfo_unexecuted_blocks=1 00:30:45.514 00:30:45.514 ' 00:30:45.514 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:45.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.514 --rc genhtml_branch_coverage=1 00:30:45.514 --rc genhtml_function_coverage=1 00:30:45.514 --rc genhtml_legend=1 00:30:45.514 --rc geninfo_all_blocks=1 00:30:45.514 --rc geninfo_unexecuted_blocks=1 00:30:45.514 00:30:45.514 ' 00:30:45.514 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:45.514 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:30:45.514 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:45.514 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:45.514 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:30:45.514 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:45.514 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:45.514 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:45.514 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:45.514 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:45.514 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:45.514 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:45.514 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:45.514 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:45.514 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:45.514 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:45.514 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:45.514 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:45.514 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:45.514 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:30:45.514 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:30:45.514 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:45.514 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:45.514 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.514 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.515 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.515 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:30:45.515 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.515 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:30:45.515 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:45.515 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:45.515 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:45.515 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:45.515 15:48:24 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:45.515 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:45.515 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:45.515 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:45.515 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:45.515 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:45.515 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:45.515 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:45.515 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:30:45.515 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:30:45.515 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:45.515 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:30:45.515 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:30:45.515 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:30:45.515 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:45.515 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@472 -- # prepare_net_devs 00:30:45.515 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@434 -- # local -g is_hw=no 00:30:45.515 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@436 -- # remove_spdk_ns 00:30:45.515 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:45.515 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:45.515 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:45.515 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:30:45.515 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:30:45.515 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:30:45.515 15:48:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
pci_devs+=("${e810[@]}") 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:53.658 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:53.658 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:30:53.658 15:48:32 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ up == up ]] 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:53.658 Found net devices under 0000:31:00.0: cvl_0_0 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@407 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ up == up ]] 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:53.658 Found net devices under 0000:31:00.1: cvl_0_1 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # is_hw=yes 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:53.658 15:48:32 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:53.658 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:53.659 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:53.659 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:53.659 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:53.659 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:53.659 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:53.659 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:53.659 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:53.659 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:53.659 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:53.659 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:53.659 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:53.659 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:53.659 15:48:32 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:53.659 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:53.659 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:53.659 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:53.659 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.684 ms 00:30:53.659 00:30:53.659 --- 10.0.0.2 ping statistics --- 00:30:53.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:53.659 rtt min/avg/max/mdev = 0.684/0.684/0.684/0.000 ms 00:30:53.659 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:53.659 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:53.659 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:30:53.659 00:30:53.659 --- 10.0.0.1 ping statistics --- 00:30:53.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:53.659 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:30:53.659 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:53.659 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # return 0 00:30:53.659 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:30:53.659 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:53.659 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:30:53.659 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:30:53.659 15:48:32 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:53.659 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:30:53.659 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:30:53.659 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:30:53.659 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:30:53.659 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:53.659 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:53.659 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@505 -- # nvmfpid=3285393 00:30:53.659 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@506 -- # waitforlisten 3285393 00:30:53.659 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:53.659 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 3285393 ']' 00:30:53.659 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:53.659 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:53.659 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:53.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:53.659 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:53.659 15:48:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:53.659 [2024-10-01 15:48:32.647914] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:30:53.659 [2024-10-01 15:48:32.647982] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:53.659 [2024-10-01 15:48:32.690603] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:53.659 [2024-10-01 15:48:32.741107] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:53.659 [2024-10-01 15:48:32.788532] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:53.659 [2024-10-01 15:48:32.788588] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:53.659 [2024-10-01 15:48:32.788596] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:53.659 [2024-10-01 15:48:32.788603] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:53.659 [2024-10-01 15:48:32.788610] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:53.659 [2024-10-01 15:48:32.788767] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:30:53.659 [2024-10-01 15:48:32.788937] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:30:53.659 [2024-10-01 15:48:32.788971] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:30:54.233 15:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:54.233 15:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:30:54.233 15:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:30:54.233 15:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:54.233 15:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:54.234 15:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:54.234 15:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:54.234 15:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.234 15:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:54.234 [2024-10-01 15:48:33.536941] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:54.234 15:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.234 15:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:54.234 15:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.234 15:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 
00:30:54.234 Malloc0 00:30:54.234 15:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.234 15:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:54.234 15:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.234 15:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:54.234 15:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.234 15:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:54.234 15:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.234 15:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:54.234 15:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.234 15:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:54.234 15:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.234 15:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:54.234 [2024-10-01 15:48:33.611819] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:54.234 15:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.234 15:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:54.234 
15:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.234 15:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:54.234 [2024-10-01 15:48:33.623683] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:54.234 15:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.234 15:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:30:54.234 15:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.234 15:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:54.234 Malloc1 00:30:54.234 15:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.234 15:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:30:54.234 15:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.234 15:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:54.234 15:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.234 15:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:30:54.234 15:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.234 15:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:54.234 15:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.234 15:48:33 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:54.234 15:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.234 15:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:54.496 15:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.496 15:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:30:54.496 15:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.496 15:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:54.496 15:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.496 15:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3285552 00:30:54.496 15:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:30:54.496 15:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:54.496 15:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3285552 /var/tmp/bdevperf.sock 00:30:54.496 15:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 3285552 ']' 00:30:54.496 15:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:30:54.496 15:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:54.496 15:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:54.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:54.496 15:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:54.496 15:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:55.442 NVMe0n1 00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:55.442 15:48:34 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.442 1 00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:55.442 request: 00:30:55.442 { 00:30:55.442 "name": "NVMe0", 00:30:55.442 "trtype": "tcp", 00:30:55.442 "traddr": "10.0.0.2", 00:30:55.442 "adrfam": "ipv4", 00:30:55.442 "trsvcid": "4420", 00:30:55.442 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:30:55.442 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:30:55.442 "hostaddr": "10.0.0.1", 00:30:55.442 "prchk_reftag": false, 00:30:55.442 "prchk_guard": false, 00:30:55.442 "hdgst": false, 00:30:55.442 "ddgst": false, 00:30:55.442 "allow_unrecognized_csi": false, 00:30:55.442 "method": "bdev_nvme_attach_controller", 00:30:55.442 "req_id": 1 00:30:55.442 } 00:30:55.442 Got JSON-RPC error response 00:30:55.442 response: 00:30:55.442 { 00:30:55.442 "code": -114, 00:30:55.442 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:55.442 } 00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:55.442 request: 00:30:55.442 { 00:30:55.442 "name": "NVMe0", 00:30:55.442 "trtype": "tcp", 00:30:55.442 "traddr": "10.0.0.2", 00:30:55.442 "adrfam": "ipv4", 00:30:55.442 "trsvcid": "4420", 00:30:55.442 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:55.442 "hostaddr": "10.0.0.1", 00:30:55.442 "prchk_reftag": false, 00:30:55.442 "prchk_guard": false, 00:30:55.442 "hdgst": false, 00:30:55.442 "ddgst": false, 00:30:55.442 "allow_unrecognized_csi": false, 00:30:55.442 "method": "bdev_nvme_attach_controller", 00:30:55.442 "req_id": 1 00:30:55.442 } 00:30:55.442 Got JSON-RPC error response 00:30:55.442 response: 00:30:55.442 { 00:30:55.442 "code": -114, 00:30:55.442 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:55.442 } 00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 
00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:55.442 request: 00:30:55.442 { 00:30:55.442 "name": "NVMe0", 00:30:55.442 "trtype": "tcp", 00:30:55.442 "traddr": "10.0.0.2", 00:30:55.442 "adrfam": "ipv4", 00:30:55.442 "trsvcid": "4420", 00:30:55.442 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:55.442 
"hostaddr": "10.0.0.1", 00:30:55.442 "prchk_reftag": false, 00:30:55.442 "prchk_guard": false, 00:30:55.442 "hdgst": false, 00:30:55.442 "ddgst": false, 00:30:55.442 "multipath": "disable", 00:30:55.442 "allow_unrecognized_csi": false, 00:30:55.442 "method": "bdev_nvme_attach_controller", 00:30:55.442 "req_id": 1 00:30:55.442 } 00:30:55.442 Got JSON-RPC error response 00:30:55.442 response: 00:30:55.442 { 00:30:55.442 "code": -114, 00:30:55.442 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:30:55.442 } 00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:30:55.442 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:55.443 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:55.443 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:55.443 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.443 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:55.443 request: 00:30:55.443 { 00:30:55.443 "name": "NVMe0", 00:30:55.443 "trtype": "tcp", 00:30:55.443 "traddr": "10.0.0.2", 00:30:55.443 "adrfam": "ipv4", 00:30:55.443 "trsvcid": "4420", 00:30:55.443 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:55.443 "hostaddr": "10.0.0.1", 00:30:55.443 "prchk_reftag": false, 00:30:55.443 "prchk_guard": false, 00:30:55.443 "hdgst": false, 00:30:55.443 "ddgst": false, 00:30:55.443 "multipath": "failover", 00:30:55.443 "allow_unrecognized_csi": false, 00:30:55.443 "method": "bdev_nvme_attach_controller", 00:30:55.443 "req_id": 1 00:30:55.443 } 00:30:55.443 Got JSON-RPC error response 00:30:55.443 response: 00:30:55.443 { 00:30:55.443 "code": -114, 00:30:55.443 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:55.443 } 00:30:55.443 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:55.443 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:30:55.443 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:55.443 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:55.443 
15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:55.443 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:55.443 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.443 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:55.443 00:30:55.443 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.443 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:55.443 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.443 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:55.705 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.705 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:30:55.705 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.705 15:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:55.705 00:30:55.705 15:48:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.705 15:48:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 
00:30:55.705 15:48:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:30:55.705 15:48:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.705 15:48:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:55.705 15:48:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.967 15:48:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:30:55.967 15:48:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:56.911 { 00:30:56.911 "results": [ 00:30:56.911 { 00:30:56.911 "job": "NVMe0n1", 00:30:56.911 "core_mask": "0x1", 00:30:56.911 "workload": "write", 00:30:56.911 "status": "finished", 00:30:56.911 "queue_depth": 128, 00:30:56.911 "io_size": 4096, 00:30:56.911 "runtime": 1.004259, 00:30:56.911 "iops": 29001.482685243547, 00:30:56.911 "mibps": 113.2870417392326, 00:30:56.911 "io_failed": 0, 00:30:56.911 "io_timeout": 0, 00:30:56.911 "avg_latency_us": 4405.4597731616595, 00:30:56.911 "min_latency_us": 2129.92, 00:30:56.911 "max_latency_us": 10540.373333333333 00:30:56.911 } 00:30:56.911 ], 00:30:56.911 "core_count": 1 00:30:56.911 } 00:30:56.911 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:30:56.911 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.911 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:56.911 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.911 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@100 -- # [[ -n '' ]] 00:30:56.911 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 3285552 00:30:56.911 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 3285552 ']' 00:30:56.911 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 3285552 00:30:56.911 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:30:56.911 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:56.911 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3285552 00:30:57.172 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:57.172 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:57.172 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3285552' 00:30:57.172 killing process with pid 3285552 00:30:57.172 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 3285552 00:30:57.172 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 3285552 00:30:57.172 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:57.172 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.172 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:57.172 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.172 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:57.172 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.172 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:57.172 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.172 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:30:57.172 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:57.172 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:30:57.172 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:30:57.172 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:30:57.172 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:30:57.172 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:30:57.172 [2024-10-01 15:48:33.759513] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:30:57.172 [2024-10-01 15:48:33.759590] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3285552 ] 00:30:57.172 [2024-10-01 15:48:33.794323] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:30:57.172 [2024-10-01 15:48:33.843768] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:57.172 [2024-10-01 15:48:33.891290] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:57.172 [2024-10-01 15:48:35.133137] bdev.c:4696:bdev_name_add: *ERROR*: Bdev name 5f767408-1c3d-40bb-ae94-ab9aced6a8a3 already exists 00:30:57.172 [2024-10-01 15:48:35.133183] bdev.c:7837:bdev_register: *ERROR*: Unable to add uuid:5f767408-1c3d-40bb-ae94-ab9aced6a8a3 alias for bdev NVMe1n1 00:30:57.172 [2024-10-01 15:48:35.133193] bdev_nvme.c:4481:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:30:57.172 Running I/O for 1 seconds... 00:30:57.172 28997.00 IOPS, 113.27 MiB/s 00:30:57.172 Latency(us) 00:30:57.172 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:57.172 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:30:57.172 NVMe0n1 : 1.00 29001.48 113.29 0.00 0.00 4405.46 2129.92 10540.37 00:30:57.172 =================================================================================================================== 00:30:57.172 Total : 29001.48 113.29 0.00 0.00 4405.46 2129.92 10540.37 00:30:57.172 Received shutdown signal, test time was about 1.000000 seconds 00:30:57.172 00:30:57.172 Latency(us) 00:30:57.172 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:57.172 =================================================================================================================== 00:30:57.172 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:57.172 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:30:57.172 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:57.172 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:30:57.172 15:48:36 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:30:57.172 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # nvmfcleanup 00:30:57.172 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:30:57.172 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:57.172 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:30:57.172 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:57.172 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:57.172 rmmod nvme_tcp 00:30:57.172 rmmod nvme_fabrics 00:30:57.172 rmmod nvme_keyring 00:30:57.172 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:57.172 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:30:57.172 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:30:57.172 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@513 -- # '[' -n 3285393 ']' 00:30:57.172 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@514 -- # killprocess 3285393 00:30:57.172 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 3285393 ']' 00:30:57.172 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 3285393 00:30:57.172 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:30:57.172 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:57.172 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3285393 00:30:57.433 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:57.433 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:57.433 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3285393' 00:30:57.433 killing process with pid 3285393 00:30:57.433 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 3285393 00:30:57.433 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 3285393 00:30:57.433 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:30:57.433 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:30:57.433 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:30:57.433 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:30:57.433 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@787 -- # iptables-save 00:30:57.433 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:30:57.433 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@787 -- # iptables-restore 00:30:57.433 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:57.433 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:57.433 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:57.433 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:57.433 15:48:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:59.983 15:48:38 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:59.983 00:30:59.983 real 0m14.253s 00:30:59.983 user 0m17.238s 00:30:59.983 sys 0m6.789s 00:30:59.983 15:48:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:59.983 15:48:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:59.983 ************************************ 00:30:59.983 END TEST nvmf_multicontroller 00:30:59.983 ************************************ 00:30:59.983 15:48:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:30:59.983 15:48:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:59.983 15:48:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:59.983 15:48:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.983 ************************************ 00:30:59.983 START TEST nvmf_aer 00:30:59.983 ************************************ 00:30:59.983 15:48:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:30:59.983 * Looking for test storage... 
00:30:59.983 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:59.983 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:59.983 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lcov --version 00:30:59.983 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:59.983 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:59.983 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:59.983 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:59.983 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:59.983 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:30:59.983 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:59.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.984 --rc genhtml_branch_coverage=1 00:30:59.984 --rc genhtml_function_coverage=1 00:30:59.984 --rc genhtml_legend=1 00:30:59.984 --rc geninfo_all_blocks=1 00:30:59.984 --rc geninfo_unexecuted_blocks=1 00:30:59.984 00:30:59.984 ' 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:59.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.984 --rc 
genhtml_branch_coverage=1 00:30:59.984 --rc genhtml_function_coverage=1 00:30:59.984 --rc genhtml_legend=1 00:30:59.984 --rc geninfo_all_blocks=1 00:30:59.984 --rc geninfo_unexecuted_blocks=1 00:30:59.984 00:30:59.984 ' 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:59.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.984 --rc genhtml_branch_coverage=1 00:30:59.984 --rc genhtml_function_coverage=1 00:30:59.984 --rc genhtml_legend=1 00:30:59.984 --rc geninfo_all_blocks=1 00:30:59.984 --rc geninfo_unexecuted_blocks=1 00:30:59.984 00:30:59.984 ' 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:59.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.984 --rc genhtml_branch_coverage=1 00:30:59.984 --rc genhtml_function_coverage=1 00:30:59.984 --rc genhtml_legend=1 00:30:59.984 --rc geninfo_all_blocks=1 00:30:59.984 --rc geninfo_unexecuted_blocks=1 00:30:59.984 00:30:59.984 ' 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:59.984 15:48:39 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:59.984 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@472 -- # prepare_net_devs 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@434 -- # local -g is_hw=no 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # remove_spdk_ns 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:30:59.984 15:48:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:08.133 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:08.133 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:31:08.133 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:08.133 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:08.133 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:08.133 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:08.133 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:08.133 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:31:08.133 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:08.133 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:31:08.133 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:31:08.133 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:31:08.133 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:31:08.133 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:31:08.133 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:31:08.133 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:08.133 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:08.133 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:08.133 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:08.133 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:08.133 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:08.133 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:08.133 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:08.133 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:08.133 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:08.133 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:08.133 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:31:08.133 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@345 -- # [[ tcp == rdma 
]] 00:31:08.133 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:31:08.133 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:31:08.133 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:31:08.133 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:31:08.133 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:31:08.133 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:08.133 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:08.134 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@390 -- # (( 0 > 0 )) 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ up == up ]] 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:08.134 Found net devices under 0000:31:00.0: cvl_0_0 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ up == up ]] 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@423 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:08.134 Found net devices under 0000:31:00.1: cvl_0_1 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # is_hw=yes 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:08.134 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:08.134 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.697 ms 00:31:08.134 00:31:08.134 --- 10.0.0.2 ping statistics --- 00:31:08.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:08.134 rtt min/avg/max/mdev = 0.697/0.697/0.697/0.000 ms 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:08.134 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:08.134 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:31:08.134 00:31:08.134 --- 10.0.0.1 ping statistics --- 00:31:08.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:08.134 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # return 0 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # nvmfpid=3290452 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # waitforlisten 3290452 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 3290452 ']' 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:08.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:08.134 15:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:08.134 [2024-10-01 15:48:47.042035] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:31:08.134 [2024-10-01 15:48:47.042101] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:08.134 [2024-10-01 15:48:47.084766] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:31:08.134 [2024-10-01 15:48:47.135438] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:08.134 [2024-10-01 15:48:47.183320] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:08.134 [2024-10-01 15:48:47.183377] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:08.134 [2024-10-01 15:48:47.183386] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:08.134 [2024-10-01 15:48:47.183393] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:08.134 [2024-10-01 15:48:47.183399] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:08.134 [2024-10-01 15:48:47.183551] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:31:08.134 [2024-10-01 15:48:47.183709] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:31:08.134 [2024-10-01 15:48:47.183870] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:08.134 [2024-10-01 15:48:47.183871] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:31:08.707 15:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:08.707 15:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:31:08.707 15:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:31:08.707 15:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:08.707 15:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:08.707 15:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:08.707 15:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:08.707 
15:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.707 15:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:08.707 [2024-10-01 15:48:47.923878] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:08.707 15:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.707 15:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:31:08.708 15:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.708 15:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:08.708 Malloc0 00:31:08.708 15:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.708 15:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:31:08.708 15:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.708 15:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:08.708 15:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.708 15:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:08.708 15:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.708 15:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:08.708 15:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.708 15:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:08.708 15:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 
00:31:08.708 15:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:08.708 [2024-10-01 15:48:47.989595] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:08.708 15:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.708 15:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:31:08.708 15:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.708 15:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:08.708 [ 00:31:08.708 { 00:31:08.708 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:08.708 "subtype": "Discovery", 00:31:08.708 "listen_addresses": [], 00:31:08.708 "allow_any_host": true, 00:31:08.708 "hosts": [] 00:31:08.708 }, 00:31:08.708 { 00:31:08.708 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:08.708 "subtype": "NVMe", 00:31:08.708 "listen_addresses": [ 00:31:08.708 { 00:31:08.708 "trtype": "TCP", 00:31:08.708 "adrfam": "IPv4", 00:31:08.708 "traddr": "10.0.0.2", 00:31:08.708 "trsvcid": "4420" 00:31:08.708 } 00:31:08.708 ], 00:31:08.708 "allow_any_host": true, 00:31:08.708 "hosts": [], 00:31:08.708 "serial_number": "SPDK00000000000001", 00:31:08.708 "model_number": "SPDK bdev Controller", 00:31:08.708 "max_namespaces": 2, 00:31:08.708 "min_cntlid": 1, 00:31:08.708 "max_cntlid": 65519, 00:31:08.708 "namespaces": [ 00:31:08.708 { 00:31:08.708 "nsid": 1, 00:31:08.708 "bdev_name": "Malloc0", 00:31:08.708 "name": "Malloc0", 00:31:08.708 "nguid": "16625DE119204964B30C02601C67DCCD", 00:31:08.708 "uuid": "16625de1-1920-4964-b30c-02601c67dccd" 00:31:08.708 } 00:31:08.708 ] 00:31:08.708 } 00:31:08.708 ] 00:31:08.708 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.708 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:31:08.708 15:48:48 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:31:08.708 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3290524 00:31:08.708 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:31:08.708 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:31:08.708 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:31:08.708 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:31:08.708 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:31:08.708 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:31:08.708 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:31:08.708 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:31:08.708 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:31:08.708 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:31:08.708 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:31:08.969 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:31:08.969 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:31:08.969 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:31:08.969 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:31:08.969 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:31:08.969 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 3 -lt 200 ']' 00:31:08.969 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=4 00:31:08.969 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:31:09.230 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:31:09.231 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:31:09.231 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:31:09.231 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:31:09.231 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.231 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:09.231 Malloc1 00:31:09.231 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.231 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:31:09.231 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.231 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:09.231 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.231 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:31:09.231 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.231 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:09.231 [ 00:31:09.231 { 00:31:09.231 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:09.231 "subtype": "Discovery", 00:31:09.231 
"listen_addresses": [], 00:31:09.231 "allow_any_host": true, 00:31:09.231 "hosts": [] 00:31:09.231 }, 00:31:09.231 { 00:31:09.231 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:09.231 "subtype": "NVMe", 00:31:09.231 "listen_addresses": [ 00:31:09.231 { 00:31:09.231 "trtype": "TCP", 00:31:09.231 "adrfam": "IPv4", 00:31:09.231 "traddr": "10.0.0.2", 00:31:09.231 "trsvcid": "4420" 00:31:09.231 } 00:31:09.231 ], 00:31:09.231 "allow_any_host": true, 00:31:09.231 "hosts": [], 00:31:09.231 "serial_number": "SPDK00000000000001", 00:31:09.231 "model_number": "SPDK bdev Controller", 00:31:09.231 "max_namespaces": 2, 00:31:09.231 "min_cntlid": 1, 00:31:09.231 "max_cntlid": 65519, 00:31:09.231 "namespaces": [ 00:31:09.231 { 00:31:09.231 "nsid": 1, 00:31:09.231 "bdev_name": "Malloc0", 00:31:09.231 "name": "Malloc0", 00:31:09.231 "nguid": "16625DE119204964B30C02601C67DCCD", 00:31:09.231 "uuid": "16625de1-1920-4964-b30c-02601c67dccd" 00:31:09.231 }, 00:31:09.231 { 00:31:09.231 "nsid": 2, 00:31:09.231 "bdev_name": "Malloc1", 00:31:09.231 "name": "Malloc1", 00:31:09.231 "nguid": "B41EA6EB4CC64801B163950CFECDA2BD", 00:31:09.231 "uuid": "b41ea6eb-4cc6-4801-b163-950cfecda2bd" 00:31:09.231 Asynchronous Event Request test 00:31:09.231 Attaching to 10.0.0.2 00:31:09.231 Attached to 10.0.0.2 00:31:09.231 Registering asynchronous event callbacks... 00:31:09.231 Starting namespace attribute notice tests for all controllers... 00:31:09.231 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:31:09.231 aer_cb - Changed Namespace 00:31:09.231 Cleaning up... 
00:31:09.231 } 00:31:09.231 ] 00:31:09.231 } 00:31:09.231 ] 00:31:09.231 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.231 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3290524 00:31:09.231 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:31:09.231 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.231 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:09.231 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.231 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:31:09.231 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.231 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:09.231 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.231 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:09.231 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.231 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:09.231 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.231 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:31:09.231 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:31:09.231 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # nvmfcleanup 00:31:09.231 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:31:09.231 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:09.231 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@124 -- # set +e 00:31:09.231 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:09.231 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:09.231 rmmod nvme_tcp 00:31:09.231 rmmod nvme_fabrics 00:31:09.231 rmmod nvme_keyring 00:31:09.231 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:09.231 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:31:09.231 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:31:09.231 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@513 -- # '[' -n 3290452 ']' 00:31:09.231 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # killprocess 3290452 00:31:09.231 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 3290452 ']' 00:31:09.231 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 3290452 00:31:09.231 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:31:09.231 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:09.231 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3290452 00:31:09.491 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:09.491 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:09.491 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3290452' 00:31:09.491 killing process with pid 3290452 00:31:09.491 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 3290452 00:31:09.491 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 3290452 00:31:09.491 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # 
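The `killprocess 3290452` trace above (common/autotest_common.sh@950–974) checks that the pid is still alive with `kill -0`, reads its command name with `ps --no-headers -o comm=`, refuses to signal a `sudo` wrapper, and only then kills and waits on it. A self-contained sketch of that sequence (simplified from the trace; demoed against a throwaway `sleep`):

```shell
#!/usr/bin/env bash
# Sketch of the killprocess helper traced above
# (common/autotest_common.sh@950-974): confirm the pid is alive, read its
# command name, refuse to kill a sudo wrapper, then kill and reap it.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1   # process must still exist
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" = sudo ] && return 1           # never signal a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                  # reap it if it is our child
}

sleep 30 &
victim=$!
killprocess "$victim" || true                # wait reports the kill signal
kill -0 "$victim" 2>/dev/null || echo "process stopped"
```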
'[' '' == iso ']' 00:31:09.491 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:31:09.491 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:31:09.492 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:31:09.492 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@787 -- # iptables-save 00:31:09.492 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:31:09.492 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@787 -- # iptables-restore 00:31:09.492 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:09.492 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:09.492 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:09.492 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:09.492 15:48:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:12.035 15:48:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:12.035 00:31:12.035 real 0m12.001s 00:31:12.035 user 0m9.096s 00:31:12.035 sys 0m6.370s 00:31:12.035 15:48:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:12.035 15:48:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:12.035 ************************************ 00:31:12.035 END TEST nvmf_aer 00:31:12.035 ************************************ 00:31:12.035 15:48:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:31:12.035 15:48:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:12.035 15:48:51 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:31:12.035 15:48:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.035 ************************************ 00:31:12.035 START TEST nvmf_async_init 00:31:12.035 ************************************ 00:31:12.035 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:31:12.035 * Looking for test storage... 00:31:12.035 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:12.035 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:12.035 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lcov --version 00:31:12.035 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:12.035 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:12.035 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:12.035 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:12.035 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:12.035 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:31:12.035 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:31:12.035 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:31:12.035 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:31:12.035 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:31:12.035 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:31:12.035 15:48:51 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:31:12.035 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:12.035 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:31:12.035 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:31:12.035 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:12.035 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:12.035 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:31:12.035 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:31:12.035 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:12.035 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:31:12.035 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:31:12.035 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:31:12.035 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:31:12.035 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:12.035 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:31:12.035 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:31:12.035 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:12.035 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:12.035 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:31:12.035 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- 
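The `lt 1.15 2` trace above (scripts/common.sh@333–368) splits both version strings on `.`, `-` and `:` via `IFS=.-:` and compares them component by component. A runnable condensation of that comparison (collapsed into one function for illustration; the real script spreads it across `cmp_versions` and `decimal` helpers):

```shell
#!/usr/bin/env bash
# Sketch of the version comparison traced above (scripts/common.sh@333-368):
# split both versions on '.', '-' and ':', then compare componentwise.
lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v len=${#ver1[@]}
    (( ${#ver2[@]} > len )) && len=${#ver2[@]}
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components count as 0
        (( a > b )) && return 1                 # first differing component decides
        (( a < b )) && return 0
    done
    return 1                                    # equal versions are not "less than"
}

lt 1.15 2 && echo "1.15 < 2"
lt 2.39.2 2.40 && echo "2.39.2 < 2.40"
```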
common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:12.035 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:12.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:12.035 --rc genhtml_branch_coverage=1 00:31:12.035 --rc genhtml_function_coverage=1 00:31:12.035 --rc genhtml_legend=1 00:31:12.035 --rc geninfo_all_blocks=1 00:31:12.035 --rc geninfo_unexecuted_blocks=1 00:31:12.035 00:31:12.035 ' 00:31:12.035 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:12.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:12.035 --rc genhtml_branch_coverage=1 00:31:12.035 --rc genhtml_function_coverage=1 00:31:12.035 --rc genhtml_legend=1 00:31:12.035 --rc geninfo_all_blocks=1 00:31:12.035 --rc geninfo_unexecuted_blocks=1 00:31:12.035 00:31:12.035 ' 00:31:12.035 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:12.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:12.035 --rc genhtml_branch_coverage=1 00:31:12.035 --rc genhtml_function_coverage=1 00:31:12.035 --rc genhtml_legend=1 00:31:12.035 --rc geninfo_all_blocks=1 00:31:12.035 --rc geninfo_unexecuted_blocks=1 00:31:12.035 00:31:12.035 ' 00:31:12.035 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:12.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:12.035 --rc genhtml_branch_coverage=1 00:31:12.035 --rc genhtml_function_coverage=1 00:31:12.036 --rc genhtml_legend=1 00:31:12.036 --rc geninfo_all_blocks=1 00:31:12.036 --rc geninfo_unexecuted_blocks=1 00:31:12.036 00:31:12.036 ' 00:31:12.036 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:12.036 15:48:51 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:31:12.036 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:12.036 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:12.036 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:12.036 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:12.036 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:12.036 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:12.036 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:12.036 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:12.036 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:12.036 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:12.036 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:12.036 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:12.036 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:12.036 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:12.036 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:12.036 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:12.036 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:12.036 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:31:12.036 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:12.036 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:12.036 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:12.036 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:12.036 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:12.036 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:12.036 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:31:12.036 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:12.036 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:31:12.036 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:12.036 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:12.036 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:12.036 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:12.036 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
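The PATH echoed above has grown by repetition because paths/export.sh prepends the same three directories every time it is re-sourced during the run. If one wanted to collapse such a value, a first-occurrence dedup like the following would do it (this helper is purely illustrative and is not part of the SPDK tree):

```shell
#!/usr/bin/env bash
# Remove duplicate entries from a PATH-like string, keeping the first
# occurrence of each. Illustrative only; the traced scripts do not dedup.
dedup_path() {
    printf '%s' "$1" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//'
}

duped="/opt/go/bin:/usr/bin:/opt/go/bin:/usr/local/bin:/usr/bin"
dedup_path "$duped"
echo
```

Applied to the string above, the two repeated entries collapse and order of first appearance is preserved.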
NVMF_APP+=("${NO_HUGE[@]}") 00:31:12.036 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:12.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:12.036 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:12.036 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:12.036 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:12.036 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:31:12.036 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:31:12.036 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:31:12.036 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:31:12.036 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:31:12.036 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:31:12.036 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=4e394fd861a24c25a7ac240060e07cbc 00:31:12.036 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:31:12.036 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:31:12.036 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:12.036 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # prepare_net_devs 00:31:12.036 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@434 -- # local -g is_hw=no 00:31:12.036 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # remove_spdk_ns 00:31:12.036 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@652 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:31:12.036 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:12.036 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:12.036 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:31:12.036 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:31:12.036 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:31:12.036 15:48:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:20.178 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:20.178 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # (( 0 > 0 )) 
00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ up == up ]] 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:20.178 Found net devices under 0000:31:00.0: cvl_0_0 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ up == up ]] 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:31:20.178 15:48:58 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:20.178 Found net devices under 0000:31:00.1: cvl_0_1 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # is_hw=yes 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # 
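The traces above repeatedly test strings with constructs like `[[ tcp == \t\c\p ]]` and `[[ 0x159b == \0\x\1\0\1\7 ]]`. Inside `[[ ]]` the right-hand side of `==` is a glob pattern, so bash's xtrace escapes every character to show the comparison is literal. A short demonstration of the three equivalent ways to get a literal match (values are stand-ins, not taken from the run):

```shell
#!/usr/bin/env bash
# Inside [[ ]], the right-hand side of == is a glob pattern.
# Backslash-escaping each character (as the traced scripts print it,
# e.g. \t\c\p) forces an exact, literal comparison.
transport=tcp

[[ $transport == \t\c\p ]] && echo "literal match (escaped)"

# Unescaped, the same position is treated as a pattern:
[[ $transport == t* ]] && echo "glob match"

# Quoting is the more common way to write a literal match:
[[ $transport == "tcp" ]] && echo "literal match (quoted)"
```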
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:20.178 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:20.179 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:20.179 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:20.179 15:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:20.179 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:20.179 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.701 ms 00:31:20.179 00:31:20.179 --- 10.0.0.2 ping statistics --- 00:31:20.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:20.179 rtt min/avg/max/mdev = 0.701/0.701/0.701/0.000 ms 00:31:20.179 15:48:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:20.179 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:20.179 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:31:20.179 00:31:20.179 --- 10.0.0.1 ping statistics --- 00:31:20.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:20.179 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:31:20.179 15:48:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:20.179 15:48:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # return 0 00:31:20.179 15:48:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:31:20.179 15:48:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:20.179 15:48:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:31:20.179 15:48:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:31:20.179 15:48:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:20.179 15:48:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:31:20.179 15:48:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:31:20.179 15:48:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:31:20.179 15:48:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:31:20.179 15:48:59 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:31:20.179 15:48:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:20.179 15:48:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # nvmfpid=3294927 00:31:20.179 15:48:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # waitforlisten 3294927 00:31:20.179 15:48:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:31:20.179 15:48:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 3294927 ']' 00:31:20.179 15:48:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:20.179 15:48:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:20.179 15:48:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:20.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:20.179 15:48:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:20.179 15:48:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:20.179 [2024-10-01 15:48:59.142172] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:31:20.179 [2024-10-01 15:48:59.142250] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:20.179 [2024-10-01 15:48:59.183989] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:31:20.179 [2024-10-01 15:48:59.231498] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:20.179 [2024-10-01 15:48:59.277839] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:20.179 [2024-10-01 15:48:59.277899] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:20.179 [2024-10-01 15:48:59.277908] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:20.179 [2024-10-01 15:48:59.277915] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:20.179 [2024-10-01 15:48:59.277921] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:20.179 [2024-10-01 15:48:59.277945] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:20.751 15:48:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:20.751 15:48:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:31:20.751 15:48:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:31:20.751 15:48:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:20.751 15:48:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:20.751 15:48:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:20.751 15:48:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:31:20.751 15:48:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.751 15:48:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:20.751 [2024-10-01 15:48:59.993386] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:20.751 15:48:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.751 15:48:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:31:20.751 15:48:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.751 15:48:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:20.751 null0 00:31:20.751 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.751 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:31:20.751 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.751 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:20.751 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.751 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:31:20.751 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.751 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:20.751 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.751 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 4e394fd861a24c25a7ac240060e07cbc 00:31:20.751 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.751 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:20.751 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.751 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:20.751 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.751 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:20.751 [2024-10-01 15:49:00.053794] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:20.751 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.751 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:31:20.751 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.751 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:21.013 nvme0n1 00:31:21.013 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.013 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:31:21.013 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.013 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:21.013 [ 00:31:21.013 { 00:31:21.013 "name": "nvme0n1", 00:31:21.013 "aliases": [ 00:31:21.013 "4e394fd8-61a2-4c25-a7ac-240060e07cbc" 00:31:21.013 ], 00:31:21.013 "product_name": "NVMe disk", 00:31:21.013 "block_size": 512, 00:31:21.013 "num_blocks": 2097152, 00:31:21.013 "uuid": "4e394fd8-61a2-4c25-a7ac-240060e07cbc", 00:31:21.013 "numa_id": 0, 00:31:21.013 "assigned_rate_limits": { 00:31:21.013 "rw_ios_per_sec": 0, 00:31:21.013 
"rw_mbytes_per_sec": 0, 00:31:21.013 "r_mbytes_per_sec": 0, 00:31:21.013 "w_mbytes_per_sec": 0 00:31:21.013 }, 00:31:21.013 "claimed": false, 00:31:21.013 "zoned": false, 00:31:21.013 "supported_io_types": { 00:31:21.013 "read": true, 00:31:21.013 "write": true, 00:31:21.013 "unmap": false, 00:31:21.013 "flush": true, 00:31:21.013 "reset": true, 00:31:21.013 "nvme_admin": true, 00:31:21.013 "nvme_io": true, 00:31:21.013 "nvme_io_md": false, 00:31:21.013 "write_zeroes": true, 00:31:21.013 "zcopy": false, 00:31:21.013 "get_zone_info": false, 00:31:21.013 "zone_management": false, 00:31:21.013 "zone_append": false, 00:31:21.013 "compare": true, 00:31:21.013 "compare_and_write": true, 00:31:21.013 "abort": true, 00:31:21.013 "seek_hole": false, 00:31:21.013 "seek_data": false, 00:31:21.013 "copy": true, 00:31:21.013 "nvme_iov_md": false 00:31:21.013 }, 00:31:21.013 "memory_domains": [ 00:31:21.013 { 00:31:21.013 "dma_device_id": "system", 00:31:21.013 "dma_device_type": 1 00:31:21.013 } 00:31:21.013 ], 00:31:21.013 "driver_specific": { 00:31:21.013 "nvme": [ 00:31:21.013 { 00:31:21.013 "trid": { 00:31:21.013 "trtype": "TCP", 00:31:21.013 "adrfam": "IPv4", 00:31:21.013 "traddr": "10.0.0.2", 00:31:21.013 "trsvcid": "4420", 00:31:21.013 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:21.013 }, 00:31:21.013 "ctrlr_data": { 00:31:21.013 "cntlid": 1, 00:31:21.013 "vendor_id": "0x8086", 00:31:21.013 "model_number": "SPDK bdev Controller", 00:31:21.013 "serial_number": "00000000000000000000", 00:31:21.013 "firmware_revision": "25.01", 00:31:21.013 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:21.013 "oacs": { 00:31:21.013 "security": 0, 00:31:21.013 "format": 0, 00:31:21.013 "firmware": 0, 00:31:21.013 "ns_manage": 0 00:31:21.013 }, 00:31:21.013 "multi_ctrlr": true, 00:31:21.013 "ana_reporting": false 00:31:21.013 }, 00:31:21.013 "vs": { 00:31:21.013 "nvme_version": "1.3" 00:31:21.013 }, 00:31:21.013 "ns_data": { 00:31:21.013 "id": 1, 00:31:21.013 "can_share": true 00:31:21.013 } 
00:31:21.013 } 00:31:21.013 ], 00:31:21.013 "mp_policy": "active_passive" 00:31:21.013 } 00:31:21.013 } 00:31:21.013 ] 00:31:21.013 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.013 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:31:21.013 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.013 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:21.013 [2024-10-01 15:49:00.328468] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:21.013 [2024-10-01 15:49:00.328557] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x982ad0 (9): Bad file descriptor 00:31:21.013 [2024-10-01 15:49:00.461002] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:31:21.013 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.013 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:31:21.013 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.013 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:21.275 [ 00:31:21.275 { 00:31:21.275 "name": "nvme0n1", 00:31:21.275 "aliases": [ 00:31:21.275 "4e394fd8-61a2-4c25-a7ac-240060e07cbc" 00:31:21.275 ], 00:31:21.275 "product_name": "NVMe disk", 00:31:21.275 "block_size": 512, 00:31:21.275 "num_blocks": 2097152, 00:31:21.275 "uuid": "4e394fd8-61a2-4c25-a7ac-240060e07cbc", 00:31:21.275 "numa_id": 0, 00:31:21.275 "assigned_rate_limits": { 00:31:21.275 "rw_ios_per_sec": 0, 00:31:21.275 "rw_mbytes_per_sec": 0, 00:31:21.275 "r_mbytes_per_sec": 0, 00:31:21.275 "w_mbytes_per_sec": 0 00:31:21.275 }, 00:31:21.275 "claimed": false, 00:31:21.275 "zoned": false, 00:31:21.275 "supported_io_types": { 00:31:21.275 "read": true, 00:31:21.275 "write": true, 00:31:21.275 "unmap": false, 00:31:21.275 "flush": true, 00:31:21.275 "reset": true, 00:31:21.275 "nvme_admin": true, 00:31:21.275 "nvme_io": true, 00:31:21.275 "nvme_io_md": false, 00:31:21.275 "write_zeroes": true, 00:31:21.275 "zcopy": false, 00:31:21.275 "get_zone_info": false, 00:31:21.275 "zone_management": false, 00:31:21.275 "zone_append": false, 00:31:21.275 "compare": true, 00:31:21.275 "compare_and_write": true, 00:31:21.275 "abort": true, 00:31:21.276 "seek_hole": false, 00:31:21.276 "seek_data": false, 00:31:21.276 "copy": true, 00:31:21.276 "nvme_iov_md": false 00:31:21.276 }, 00:31:21.276 "memory_domains": [ 00:31:21.276 { 00:31:21.276 "dma_device_id": "system", 00:31:21.276 "dma_device_type": 1 00:31:21.276 } 00:31:21.276 ], 00:31:21.276 "driver_specific": { 00:31:21.276 "nvme": [ 00:31:21.276 { 00:31:21.276 "trid": { 
00:31:21.276 "trtype": "TCP", 00:31:21.276 "adrfam": "IPv4", 00:31:21.276 "traddr": "10.0.0.2", 00:31:21.276 "trsvcid": "4420", 00:31:21.276 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:21.276 }, 00:31:21.276 "ctrlr_data": { 00:31:21.276 "cntlid": 2, 00:31:21.276 "vendor_id": "0x8086", 00:31:21.276 "model_number": "SPDK bdev Controller", 00:31:21.276 "serial_number": "00000000000000000000", 00:31:21.276 "firmware_revision": "25.01", 00:31:21.276 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:21.276 "oacs": { 00:31:21.276 "security": 0, 00:31:21.276 "format": 0, 00:31:21.276 "firmware": 0, 00:31:21.276 "ns_manage": 0 00:31:21.276 }, 00:31:21.276 "multi_ctrlr": true, 00:31:21.276 "ana_reporting": false 00:31:21.276 }, 00:31:21.276 "vs": { 00:31:21.276 "nvme_version": "1.3" 00:31:21.276 }, 00:31:21.276 "ns_data": { 00:31:21.276 "id": 1, 00:31:21.276 "can_share": true 00:31:21.276 } 00:31:21.276 } 00:31:21.276 ], 00:31:21.276 "mp_policy": "active_passive" 00:31:21.276 } 00:31:21.276 } 00:31:21.276 ] 00:31:21.276 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.276 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:21.276 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.276 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:21.276 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.276 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:31:21.276 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.YU9Z4AUbG2 00:31:21.276 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:31:21.276 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- 
host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.YU9Z4AUbG2 00:31:21.276 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.YU9Z4AUbG2 00:31:21.276 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.276 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:21.276 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.276 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:31:21.276 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.276 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:21.276 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.276 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:31:21.276 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.276 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:21.276 [2024-10-01 15:49:00.553177] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:21.276 [2024-10-01 15:49:00.553345] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:21.276 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.276 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:31:21.276 15:49:00 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.276 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:21.276 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.276 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:31:21.276 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.276 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:21.276 [2024-10-01 15:49:00.577254] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:21.276 nvme0n1 00:31:21.276 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.276 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:31:21.276 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.276 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:21.276 [ 00:31:21.276 { 00:31:21.276 "name": "nvme0n1", 00:31:21.276 "aliases": [ 00:31:21.276 "4e394fd8-61a2-4c25-a7ac-240060e07cbc" 00:31:21.276 ], 00:31:21.276 "product_name": "NVMe disk", 00:31:21.276 "block_size": 512, 00:31:21.276 "num_blocks": 2097152, 00:31:21.276 "uuid": "4e394fd8-61a2-4c25-a7ac-240060e07cbc", 00:31:21.276 "numa_id": 0, 00:31:21.276 "assigned_rate_limits": { 00:31:21.276 "rw_ios_per_sec": 0, 00:31:21.276 "rw_mbytes_per_sec": 0, 00:31:21.276 "r_mbytes_per_sec": 0, 00:31:21.276 "w_mbytes_per_sec": 0 00:31:21.276 }, 00:31:21.276 "claimed": false, 00:31:21.276 "zoned": false, 00:31:21.276 "supported_io_types": { 
00:31:21.276 "read": true, 00:31:21.276 "write": true, 00:31:21.276 "unmap": false, 00:31:21.276 "flush": true, 00:31:21.276 "reset": true, 00:31:21.276 "nvme_admin": true, 00:31:21.276 "nvme_io": true, 00:31:21.276 "nvme_io_md": false, 00:31:21.276 "write_zeroes": true, 00:31:21.276 "zcopy": false, 00:31:21.276 "get_zone_info": false, 00:31:21.276 "zone_management": false, 00:31:21.276 "zone_append": false, 00:31:21.276 "compare": true, 00:31:21.276 "compare_and_write": true, 00:31:21.276 "abort": true, 00:31:21.276 "seek_hole": false, 00:31:21.276 "seek_data": false, 00:31:21.276 "copy": true, 00:31:21.276 "nvme_iov_md": false 00:31:21.276 }, 00:31:21.276 "memory_domains": [ 00:31:21.276 { 00:31:21.276 "dma_device_id": "system", 00:31:21.276 "dma_device_type": 1 00:31:21.276 } 00:31:21.276 ], 00:31:21.276 "driver_specific": { 00:31:21.276 "nvme": [ 00:31:21.276 { 00:31:21.276 "trid": { 00:31:21.276 "trtype": "TCP", 00:31:21.276 "adrfam": "IPv4", 00:31:21.276 "traddr": "10.0.0.2", 00:31:21.276 "trsvcid": "4421", 00:31:21.276 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:21.276 }, 00:31:21.276 "ctrlr_data": { 00:31:21.276 "cntlid": 3, 00:31:21.276 "vendor_id": "0x8086", 00:31:21.276 "model_number": "SPDK bdev Controller", 00:31:21.276 "serial_number": "00000000000000000000", 00:31:21.276 "firmware_revision": "25.01", 00:31:21.276 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:21.276 "oacs": { 00:31:21.276 "security": 0, 00:31:21.276 "format": 0, 00:31:21.276 "firmware": 0, 00:31:21.276 "ns_manage": 0 00:31:21.276 }, 00:31:21.276 "multi_ctrlr": true, 00:31:21.276 "ana_reporting": false 00:31:21.276 }, 00:31:21.276 "vs": { 00:31:21.276 "nvme_version": "1.3" 00:31:21.276 }, 00:31:21.276 "ns_data": { 00:31:21.276 "id": 1, 00:31:21.276 "can_share": true 00:31:21.276 } 00:31:21.276 } 00:31:21.276 ], 00:31:21.276 "mp_policy": "active_passive" 00:31:21.276 } 00:31:21.276 } 00:31:21.276 ] 00:31:21.276 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.276 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:21.276 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.276 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:21.276 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.276 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.YU9Z4AUbG2 00:31:21.276 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:31:21.276 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:31:21.276 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # nvmfcleanup 00:31:21.276 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:31:21.276 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:21.276 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:31:21.276 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:21.276 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:21.276 rmmod nvme_tcp 00:31:21.538 rmmod nvme_fabrics 00:31:21.538 rmmod nvme_keyring 00:31:21.538 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:21.538 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:31:21.538 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:31:21.538 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@513 -- # '[' -n 3294927 ']' 00:31:21.538 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # killprocess 
3294927 00:31:21.538 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 3294927 ']' 00:31:21.538 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 3294927 00:31:21.538 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:31:21.538 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:21.538 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3294927 00:31:21.538 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:21.538 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:21.538 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3294927' 00:31:21.538 killing process with pid 3294927 00:31:21.538 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 3294927 00:31:21.538 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 3294927 00:31:21.800 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:31:21.800 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:31:21.800 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:31:21.800 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:31:21.800 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@787 -- # iptables-save 00:31:21.800 15:49:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:31:21.800 15:49:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@787 -- # iptables-restore 00:31:21.800 15:49:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:21.800 15:49:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:21.800 15:49:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:21.800 15:49:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:21.800 15:49:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:23.716 15:49:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:23.716 00:31:23.716 real 0m12.020s 00:31:23.716 user 0m4.335s 00:31:23.716 sys 0m6.248s 00:31:23.716 15:49:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:23.716 15:49:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:23.716 ************************************ 00:31:23.716 END TEST nvmf_async_init 00:31:23.716 ************************************ 00:31:23.716 15:49:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:31:23.716 15:49:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:23.716 15:49:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:23.716 15:49:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.716 ************************************ 00:31:23.716 START TEST dma 00:31:23.716 ************************************ 00:31:23.717 15:49:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:31:23.978 * Looking for test storage... 
00:31:23.978 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:23.978 15:49:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:23.978 15:49:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lcov --version 00:31:23.978 15:49:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:23.978 15:49:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:23.978 15:49:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:23.978 15:49:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:23.978 15:49:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:23.978 15:49:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:31:23.978 15:49:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:31:23.978 15:49:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:31:23.978 15:49:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:31:23.978 15:49:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:31:23.978 15:49:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:31:23.978 15:49:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:31:23.978 15:49:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:23.978 15:49:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:31:23.978 15:49:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:31:23.978 15:49:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:23.978 15:49:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:23.978 15:49:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:31:23.978 15:49:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:31:23.978 15:49:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:23.978 15:49:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:31:23.978 15:49:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:31:23.978 15:49:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:31:23.978 15:49:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:31:23.978 15:49:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:23.978 15:49:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:31:23.979 15:49:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:31:23.979 15:49:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:23.979 15:49:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:23.979 15:49:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:31:23.979 15:49:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:23.979 15:49:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:23.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:23.979 --rc genhtml_branch_coverage=1 00:31:23.979 --rc genhtml_function_coverage=1 00:31:23.979 --rc genhtml_legend=1 00:31:23.979 --rc geninfo_all_blocks=1 00:31:23.979 --rc geninfo_unexecuted_blocks=1 00:31:23.979 00:31:23.979 ' 00:31:23.979 15:49:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:23.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:23.979 --rc genhtml_branch_coverage=1 00:31:23.979 --rc genhtml_function_coverage=1 
00:31:23.979 --rc genhtml_legend=1 00:31:23.979 --rc geninfo_all_blocks=1 00:31:23.979 --rc geninfo_unexecuted_blocks=1 00:31:23.979 00:31:23.979 ' 00:31:23.979 15:49:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:23.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:23.979 --rc genhtml_branch_coverage=1 00:31:23.979 --rc genhtml_function_coverage=1 00:31:23.979 --rc genhtml_legend=1 00:31:23.979 --rc geninfo_all_blocks=1 00:31:23.979 --rc geninfo_unexecuted_blocks=1 00:31:23.979 00:31:23.979 ' 00:31:23.979 15:49:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:23.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:23.979 --rc genhtml_branch_coverage=1 00:31:23.979 --rc genhtml_function_coverage=1 00:31:23.979 --rc genhtml_legend=1 00:31:23.979 --rc geninfo_all_blocks=1 00:31:23.979 --rc geninfo_unexecuted_blocks=1 00:31:23.979 00:31:23.979 ' 00:31:23.979 15:49:03 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:23.979 15:49:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:31:23.979 15:49:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:23.979 15:49:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:23.979 15:49:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:23.979 15:49:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:23.979 15:49:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:23.979 15:49:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:23.979 15:49:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:23.979 15:49:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:23.979 15:49:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:23.979 15:49:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:23.979 15:49:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:23.979 15:49:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:23.979 15:49:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:23.979 15:49:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:23.979 15:49:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:23.979 15:49:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:23.979 15:49:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:23.979 15:49:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:31:23.979 15:49:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:23.979 15:49:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:23.979 15:49:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:23.979 15:49:03 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.979 15:49:03 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.979 15:49:03 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.979 15:49:03 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:31:23.979 
15:49:03 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.979 15:49:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:31:23.979 15:49:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:23.979 15:49:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:23.979 15:49:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:23.979 15:49:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:23.979 15:49:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:23.979 15:49:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:23.979 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:23.979 15:49:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:23.979 15:49:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:23.979 15:49:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:23.979 15:49:03 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:31:23.979 15:49:03 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:31:23.979 00:31:23.979 real 0m0.242s 00:31:23.979 user 0m0.149s 00:31:23.979 sys 0m0.107s 00:31:23.979 15:49:03 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:23.979 15:49:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:31:23.979 ************************************ 00:31:23.979 END TEST dma 00:31:23.979 ************************************ 00:31:24.241 15:49:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:31:24.241 15:49:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:24.241 15:49:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:24.241 15:49:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.241 ************************************ 00:31:24.241 START TEST nvmf_identify 00:31:24.241 ************************************ 00:31:24.241 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:31:24.241 * Looking for test storage... 
00:31:24.241 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:24.241 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:24.241 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lcov --version 00:31:24.241 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:24.241 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:24.241 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:24.241 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:24.241 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:24.241 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:31:24.241 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:31:24.241 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:31:24.241 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:31:24.241 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:31:24.241 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:31:24.241 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:31:24.241 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:24.241 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:31:24.241 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:31:24.241 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:24.241 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:24.241 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:31:24.241 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:31:24.241 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:24.241 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:31:24.241 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:31:24.241 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:31:24.241 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:31:24.241 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:24.241 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:31:24.241 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:31:24.241 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:24.241 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:24.241 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:31:24.241 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:24.241 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:24.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:24.241 --rc genhtml_branch_coverage=1 00:31:24.241 --rc genhtml_function_coverage=1 00:31:24.241 --rc genhtml_legend=1 00:31:24.241 --rc geninfo_all_blocks=1 00:31:24.241 --rc geninfo_unexecuted_blocks=1 00:31:24.241 00:31:24.241 ' 00:31:24.503 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- 
# LCOV_OPTS=' 00:31:24.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:24.503 --rc genhtml_branch_coverage=1 00:31:24.503 --rc genhtml_function_coverage=1 00:31:24.503 --rc genhtml_legend=1 00:31:24.503 --rc geninfo_all_blocks=1 00:31:24.503 --rc geninfo_unexecuted_blocks=1 00:31:24.503 00:31:24.503 ' 00:31:24.503 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:24.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:24.503 --rc genhtml_branch_coverage=1 00:31:24.503 --rc genhtml_function_coverage=1 00:31:24.503 --rc genhtml_legend=1 00:31:24.503 --rc geninfo_all_blocks=1 00:31:24.503 --rc geninfo_unexecuted_blocks=1 00:31:24.503 00:31:24.503 ' 00:31:24.503 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:24.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:24.503 --rc genhtml_branch_coverage=1 00:31:24.503 --rc genhtml_function_coverage=1 00:31:24.503 --rc genhtml_legend=1 00:31:24.503 --rc geninfo_all_blocks=1 00:31:24.503 --rc geninfo_unexecuted_blocks=1 00:31:24.503 00:31:24.503 ' 00:31:24.503 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:24.503 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:31:24.503 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:24.503 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:24.503 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:24.503 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:24.503 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:24.503 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:31:24.503 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:24.503 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:24.503 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:24.503 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:24.503 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:24.503 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:24.503 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:24.503 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:24.503 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:24.503 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:24.503 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:24.503 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:31:24.503 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:24.503 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:24.503 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:24.503 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.504 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.504 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.504 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:31:24.504 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.504 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:31:24.504 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:24.504 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:24.504 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:24.504 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:24.504 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:24.504 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:24.504 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:24.504 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:24.504 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:24.504 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:24.504 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:24.504 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:24.504 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:31:24.504 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:31:24.504 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:24.504 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # prepare_net_devs 00:31:24.504 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@434 -- # local -g is_hw=no 00:31:24.504 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # remove_spdk_ns 00:31:24.504 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:24.504 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:24.504 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:24.504 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:31:24.504 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:31:24.504 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:31:24.504 15:49:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:32.653 15:49:11 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:32.653 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:32.653 Found 
0000:31:00.1 (0x8086 - 0x159b) 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ up == up ]] 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:32.653 Found net devices under 0000:31:00.0: cvl_0_0 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:32.653 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ up == up ]] 00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:32.654 Found net devices under 0000:31:00.1: cvl_0_1 00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # is_hw=yes 00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:32.654 15:49:11 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:32.654 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:32.654 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.509 ms 00:31:32.654 00:31:32.654 --- 10.0.0.2 ping statistics --- 00:31:32.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:32.654 rtt min/avg/max/mdev = 0.509/0.509/0.509/0.000 ms 00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:32.654 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:32.654 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:31:32.654 00:31:32.654 --- 10.0.0.1 ping statistics --- 00:31:32.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:32.654 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # return 0 00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:31:32.654 15:49:11 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3299720 00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3299720 00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 3299720 ']' 00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:32.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:32.654 [2024-10-01 15:49:11.505141] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
00:31:32.654 [2024-10-01 15:49:11.505218] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:32.654 [2024-10-01 15:49:11.549227] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:32.654 [2024-10-01 15:49:11.574260] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:32.654 [2024-10-01 15:49:11.620517] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:32.654 [2024-10-01 15:49:11.620564] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:32.654 [2024-10-01 15:49:11.620570] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:32.654 [2024-10-01 15:49:11.620575] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:32.654 [2024-10-01 15:49:11.620579] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:32.654 [2024-10-01 15:49:11.620638] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:31:32.654 [2024-10-01 15:49:11.620761] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:31:32.654 [2024-10-01 15:49:11.620925] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:31:32.654 [2024-10-01 15:49:11.620927] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.654 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:32.654 [2024-10-01 15:49:11.720537] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:32.655 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.655 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:31:32.655 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:32.655 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:32.655 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:32.655 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.655 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:32.655 Malloc0 00:31:32.655 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.655 15:49:11 
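The launch pattern above (start nvmf_tgt inside the namespace with `-m 0xF`, four reactors on cores 0-3, then block in waitforlisten until the RPC socket answers) can be sketched as follows. The `waitforsock` helper is a hypothetical stand-in for the suite's waitforlisten; `run` only echoes, so the sketch runs anywhere:

```shell
#!/usr/bin/env bash
# Sketch of the launch-and-wait step traced above: start the target in the
# namespace, then poll for its RPC UNIX socket before issuing any RPCs.
run() { echo "+ $*"; }   # echoes instead of executing
SOCK=/var/tmp/spdk.sock  # default SPDK RPC socket path

# -i 0: shared-memory id, -e 0xFFFF: tracepoint group mask, -m 0xF: 4 cores
run ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

# Hypothetical waitforlisten analogue: retry until the socket exists.
waitforsock() {
  local tries=$1
  while (( tries-- > 0 )); do
    [ -S "$SOCK" ] && return 0
    sleep 0.1
  done
  return 1
}
```

In the real suite, waitforlisten additionally verifies the pid is alive and that an rpc_cmd round-trip succeeds, which is why the trap on SIGINT/SIGTERM/EXIT is installed before the wait.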
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:32.655 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.655 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:32.655 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.655 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:31:32.655 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.655 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:32.655 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.655 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:32.655 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.655 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:32.655 [2024-10-01 15:49:11.822431] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:32.655 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.655 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:32.655 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.655 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:32.655 15:49:11 
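The rpc_cmd calls traced above form a fixed configuration sequence: create the TCP transport, back a namespace with a malloc bdev, create the subsystem, attach the namespace, and add both the subsystem and discovery listeners. A dry-run sketch of that sequence (arguments copied from the log; swap the `rpc` wrapper for SPDK's `scripts/rpc.py` to run it for real):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the RPC configuration sequence traced above.
rpc() { echo "+ rpc.py $*"; }   # replace with ./scripts/rpc.py to apply

NQN=nqn.2016-06.io.spdk:cnode1

rpc nvmf_create_transport -t tcp -o -u 8192          # TCP, C2H success opt
rpc bdev_malloc_create 64 512 -b Malloc0             # 64 MiB, 512 B blocks
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns "$NQN" Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```

The closing `nvmf_get_subsystems` call in the log returns the JSON shown below it, which is how the test confirms both the discovery subsystem and cnode1 are listening on 10.0.0.2:4420.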
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.655 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:31:32.655 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.655 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:32.655 [ 00:31:32.655 { 00:31:32.655 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:32.655 "subtype": "Discovery", 00:31:32.655 "listen_addresses": [ 00:31:32.655 { 00:31:32.655 "trtype": "TCP", 00:31:32.655 "adrfam": "IPv4", 00:31:32.655 "traddr": "10.0.0.2", 00:31:32.655 "trsvcid": "4420" 00:31:32.655 } 00:31:32.655 ], 00:31:32.655 "allow_any_host": true, 00:31:32.655 "hosts": [] 00:31:32.655 }, 00:31:32.655 { 00:31:32.655 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:32.655 "subtype": "NVMe", 00:31:32.655 "listen_addresses": [ 00:31:32.655 { 00:31:32.655 "trtype": "TCP", 00:31:32.655 "adrfam": "IPv4", 00:31:32.655 "traddr": "10.0.0.2", 00:31:32.655 "trsvcid": "4420" 00:31:32.655 } 00:31:32.655 ], 00:31:32.655 "allow_any_host": true, 00:31:32.655 "hosts": [], 00:31:32.655 "serial_number": "SPDK00000000000001", 00:31:32.655 "model_number": "SPDK bdev Controller", 00:31:32.655 "max_namespaces": 32, 00:31:32.655 "min_cntlid": 1, 00:31:32.655 "max_cntlid": 65519, 00:31:32.655 "namespaces": [ 00:31:32.655 { 00:31:32.655 "nsid": 1, 00:31:32.655 "bdev_name": "Malloc0", 00:31:32.655 "name": "Malloc0", 00:31:32.655 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:31:32.655 "eui64": "ABCDEF0123456789", 00:31:32.655 "uuid": "d669c4cc-0e8a-41fa-a505-ea2628e47058" 00:31:32.655 } 00:31:32.655 ] 00:31:32.655 } 00:31:32.655 ] 00:31:32.655 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.655 15:49:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:31:32.655 [2024-10-01 15:49:11.877876] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:31:32.655 [2024-10-01 15:49:11.877927] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3299745 ] 00:31:32.655 [2024-10-01 15:49:11.890538] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:32.655 [2024-10-01 15:49:11.914036] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:31:32.655 [2024-10-01 15:49:11.914100] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:31:32.655 [2024-10-01 15:49:11.914106] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:31:32.655 [2024-10-01 15:49:11.914121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:31:32.655 [2024-10-01 15:49:11.914134] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:31:32.655 [2024-10-01 15:49:11.918410] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:31:32.655 [2024-10-01 15:49:11.918470] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1556060 0 00:31:32.655 [2024-10-01 15:49:11.925912] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:31:32.655 [2024-10-01 15:49:11.925932] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =1 00:31:32.655 [2024-10-01 15:49:11.925938] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:31:32.655 [2024-10-01 15:49:11.925942] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:31:32.655 [2024-10-01 15:49:11.925981] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.655 [2024-10-01 15:49:11.925988] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.655 [2024-10-01 15:49:11.925993] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1556060) 00:31:32.655 [2024-10-01 15:49:11.926013] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:31:32.655 [2024-10-01 15:49:11.926039] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c2a80, cid 0, qid 0 00:31:32.655 [2024-10-01 15:49:11.933909] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.655 [2024-10-01 15:49:11.933920] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.655 [2024-10-01 15:49:11.933924] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.655 [2024-10-01 15:49:11.933930] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c2a80) on tqpair=0x1556060 00:31:32.655 [2024-10-01 15:49:11.933945] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:31:32.655 [2024-10-01 15:49:11.933953] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:31:32.655 [2024-10-01 15:49:11.933960] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:31:32.655 [2024-10-01 15:49:11.933977] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.655 [2024-10-01 15:49:11.933981] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.655 [2024-10-01 15:49:11.933985] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1556060) 00:31:32.655 [2024-10-01 15:49:11.933994] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.655 [2024-10-01 15:49:11.934011] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c2a80, cid 0, qid 0 00:31:32.655 [2024-10-01 15:49:11.934262] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.655 [2024-10-01 15:49:11.934269] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.655 [2024-10-01 15:49:11.934272] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.655 [2024-10-01 15:49:11.934276] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c2a80) on tqpair=0x1556060 00:31:32.655 [2024-10-01 15:49:11.934282] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:31:32.655 [2024-10-01 15:49:11.934290] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:31:32.655 [2024-10-01 15:49:11.934297] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.655 [2024-10-01 15:49:11.934300] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.655 [2024-10-01 15:49:11.934304] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1556060) 00:31:32.656 [2024-10-01 15:49:11.934311] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.656 [2024-10-01 15:49:11.934322] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c2a80, cid 0, qid 0 00:31:32.656 [2024-10-01 
15:49:11.934524] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.656 [2024-10-01 15:49:11.934531] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.656 [2024-10-01 15:49:11.934538] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.656 [2024-10-01 15:49:11.934543] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c2a80) on tqpair=0x1556060 00:31:32.656 [2024-10-01 15:49:11.934548] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:31:32.656 [2024-10-01 15:49:11.934557] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:31:32.656 [2024-10-01 15:49:11.934564] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.656 [2024-10-01 15:49:11.934568] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.656 [2024-10-01 15:49:11.934571] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1556060) 00:31:32.656 [2024-10-01 15:49:11.934578] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.656 [2024-10-01 15:49:11.934588] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c2a80, cid 0, qid 0 00:31:32.656 [2024-10-01 15:49:11.934806] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.656 [2024-10-01 15:49:11.934812] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.656 [2024-10-01 15:49:11.934816] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.656 [2024-10-01 15:49:11.934820] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c2a80) on tqpair=0x1556060 00:31:32.656 [2024-10-01 15:49:11.934825] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:31:32.656 [2024-10-01 15:49:11.934835] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.656 [2024-10-01 15:49:11.934839] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.656 [2024-10-01 15:49:11.934842] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1556060) 00:31:32.656 [2024-10-01 15:49:11.934849] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.656 [2024-10-01 15:49:11.934859] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c2a80, cid 0, qid 0 00:31:32.656 [2024-10-01 15:49:11.935058] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.656 [2024-10-01 15:49:11.935065] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.656 [2024-10-01 15:49:11.935068] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.656 [2024-10-01 15:49:11.935072] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c2a80) on tqpair=0x1556060 00:31:32.656 [2024-10-01 15:49:11.935077] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:31:32.656 [2024-10-01 15:49:11.935082] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:31:32.656 [2024-10-01 15:49:11.935090] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:31:32.656 [2024-10-01 15:49:11.935195] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:31:32.656 [2024-10-01 15:49:11.935201] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:31:32.656 [2024-10-01 15:49:11.935211] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.656 [2024-10-01 15:49:11.935215] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.656 [2024-10-01 15:49:11.935219] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1556060) 00:31:32.656 [2024-10-01 15:49:11.935225] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.656 [2024-10-01 15:49:11.935239] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c2a80, cid 0, qid 0 00:31:32.656 [2024-10-01 15:49:11.935454] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.656 [2024-10-01 15:49:11.935461] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.656 [2024-10-01 15:49:11.935464] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.656 [2024-10-01 15:49:11.935468] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c2a80) on tqpair=0x1556060 00:31:32.656 [2024-10-01 15:49:11.935473] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:31:32.656 [2024-10-01 15:49:11.935483] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.656 [2024-10-01 15:49:11.935486] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.656 [2024-10-01 15:49:11.935490] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1556060) 00:31:32.656 [2024-10-01 15:49:11.935497] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:32.656 [2024-10-01 15:49:11.935507] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c2a80, cid 0, qid 0 00:31:32.656 [2024-10-01 15:49:11.935691] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.656 [2024-10-01 15:49:11.935698] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.656 [2024-10-01 15:49:11.935701] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.656 [2024-10-01 15:49:11.935705] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c2a80) on tqpair=0x1556060 00:31:32.656 [2024-10-01 15:49:11.935710] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:31:32.656 [2024-10-01 15:49:11.935715] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:31:32.656 [2024-10-01 15:49:11.935723] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:31:32.656 [2024-10-01 15:49:11.935740] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:31:32.656 [2024-10-01 15:49:11.935750] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.656 [2024-10-01 15:49:11.935753] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1556060) 00:31:32.656 [2024-10-01 15:49:11.935761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.656 [2024-10-01 15:49:11.935771] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c2a80, cid 0, qid 0 00:31:32.656 [2024-10-01 15:49:11.936069] 
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:32.656 [2024-10-01 15:49:11.936076] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:32.656 [2024-10-01 15:49:11.936080] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:32.656 [2024-10-01 15:49:11.936084] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1556060): datao=0, datal=4096, cccid=0 00:31:32.656 [2024-10-01 15:49:11.936090] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15c2a80) on tqpair(0x1556060): expected_datao=0, payload_size=4096 00:31:32.656 [2024-10-01 15:49:11.936095] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.656 [2024-10-01 15:49:11.936104] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:32.656 [2024-10-01 15:49:11.936108] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:32.656 [2024-10-01 15:49:11.936241] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.656 [2024-10-01 15:49:11.936248] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.656 [2024-10-01 15:49:11.936251] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.656 [2024-10-01 15:49:11.936258] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c2a80) on tqpair=0x1556060 00:31:32.656 [2024-10-01 15:49:11.936267] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:31:32.656 [2024-10-01 15:49:11.936272] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:31:32.656 [2024-10-01 15:49:11.936277] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:31:32.656 [2024-10-01 15:49:11.936283] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] 
transport max_sges 16 00:31:32.656 [2024-10-01 15:49:11.936288] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:31:32.656 [2024-10-01 15:49:11.936292] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:31:32.656 [2024-10-01 15:49:11.936302] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:31:32.656 [2024-10-01 15:49:11.936309] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.656 [2024-10-01 15:49:11.936313] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.656 [2024-10-01 15:49:11.936317] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1556060) 00:31:32.656 [2024-10-01 15:49:11.936324] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:32.656 [2024-10-01 15:49:11.936335] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c2a80, cid 0, qid 0 00:31:32.657 [2024-10-01 15:49:11.936566] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.657 [2024-10-01 15:49:11.936572] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.657 [2024-10-01 15:49:11.936576] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.657 [2024-10-01 15:49:11.936580] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c2a80) on tqpair=0x1556060 00:31:32.657 [2024-10-01 15:49:11.936589] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.657 [2024-10-01 15:49:11.936593] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.657 [2024-10-01 15:49:11.936597] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=0 on tqpair(0x1556060) 00:31:32.657 [2024-10-01 15:49:11.936603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:32.657 [2024-10-01 15:49:11.936609] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.657 [2024-10-01 15:49:11.936613] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.657 [2024-10-01 15:49:11.936617] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1556060) 00:31:32.657 [2024-10-01 15:49:11.936623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:32.657 [2024-10-01 15:49:11.936629] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.657 [2024-10-01 15:49:11.936633] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.657 [2024-10-01 15:49:11.936636] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1556060) 00:31:32.657 [2024-10-01 15:49:11.936642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:32.657 [2024-10-01 15:49:11.936648] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.657 [2024-10-01 15:49:11.936652] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.657 [2024-10-01 15:49:11.936656] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1556060) 00:31:32.657 [2024-10-01 15:49:11.936661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:32.657 [2024-10-01 15:49:11.936669] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 
00:31:32.657 [2024-10-01 15:49:11.936681] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:31:32.657 [2024-10-01 15:49:11.936688] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.657 [2024-10-01 15:49:11.936691] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1556060) 00:31:32.657 [2024-10-01 15:49:11.936698] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.657 [2024-10-01 15:49:11.936710] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c2a80, cid 0, qid 0 00:31:32.657 [2024-10-01 15:49:11.936715] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c2c00, cid 1, qid 0 00:31:32.657 [2024-10-01 15:49:11.936720] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c2d80, cid 2, qid 0 00:31:32.657 [2024-10-01 15:49:11.936725] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c2f00, cid 3, qid 0 00:31:32.657 [2024-10-01 15:49:11.936730] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c3080, cid 4, qid 0 00:31:32.657 [2024-10-01 15:49:11.936997] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.657 [2024-10-01 15:49:11.937004] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.657 [2024-10-01 15:49:11.937007] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.657 [2024-10-01 15:49:11.937011] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c3080) on tqpair=0x1556060 00:31:32.657 [2024-10-01 15:49:11.937017] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:31:32.657 [2024-10-01 15:49:11.937023] 
nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:31:32.657 [2024-10-01 15:49:11.937033] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.657 [2024-10-01 15:49:11.937037] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1556060) 00:31:32.657 [2024-10-01 15:49:11.937044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.657 [2024-10-01 15:49:11.937054] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c3080, cid 4, qid 0 00:31:32.657 [2024-10-01 15:49:11.937258] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:32.657 [2024-10-01 15:49:11.937264] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:32.657 [2024-10-01 15:49:11.937268] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:32.657 [2024-10-01 15:49:11.937272] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1556060): datao=0, datal=4096, cccid=4 00:31:32.657 [2024-10-01 15:49:11.937276] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15c3080) on tqpair(0x1556060): expected_datao=0, payload_size=4096 00:31:32.657 [2024-10-01 15:49:11.937281] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.657 [2024-10-01 15:49:11.937294] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:32.657 [2024-10-01 15:49:11.937298] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:32.657 [2024-10-01 15:49:11.979902] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.657 [2024-10-01 15:49:11.979913] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.657 [2024-10-01 15:49:11.979917] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:31:32.657 [2024-10-01 15:49:11.979921] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c3080) on tqpair=0x1556060 00:31:32.657 [2024-10-01 15:49:11.979937] nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:31:32.657 [2024-10-01 15:49:11.979977] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.657 [2024-10-01 15:49:11.979983] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1556060) 00:31:32.657 [2024-10-01 15:49:11.979990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.657 [2024-10-01 15:49:11.979998] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.657 [2024-10-01 15:49:11.980002] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.657 [2024-10-01 15:49:11.980005] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1556060) 00:31:32.657 [2024-10-01 15:49:11.980012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:31:32.657 [2024-10-01 15:49:11.980026] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c3080, cid 4, qid 0 00:31:32.657 [2024-10-01 15:49:11.980032] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c3200, cid 5, qid 0 00:31:32.657 [2024-10-01 15:49:11.980302] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:32.657 [2024-10-01 15:49:11.980308] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:32.657 [2024-10-01 15:49:11.980312] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:32.657 [2024-10-01 15:49:11.980316] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1556060): datao=0, 
datal=1024, cccid=4 00:31:32.657 [2024-10-01 15:49:11.980320] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15c3080) on tqpair(0x1556060): expected_datao=0, payload_size=1024 00:31:32.657 [2024-10-01 15:49:11.980325] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.657 [2024-10-01 15:49:11.980332] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:32.657 [2024-10-01 15:49:11.980336] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:32.657 [2024-10-01 15:49:11.980341] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.657 [2024-10-01 15:49:11.980347] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.657 [2024-10-01 15:49:11.980351] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.657 [2024-10-01 15:49:11.980355] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c3200) on tqpair=0x1556060 00:31:32.657 [2024-10-01 15:49:12.022102] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.657 [2024-10-01 15:49:12.022113] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.657 [2024-10-01 15:49:12.022117] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.657 [2024-10-01 15:49:12.022121] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c3080) on tqpair=0x1556060 00:31:32.657 [2024-10-01 15:49:12.022134] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.657 [2024-10-01 15:49:12.022138] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1556060) 00:31:32.657 [2024-10-01 15:49:12.022145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.657 [2024-10-01 15:49:12.022162] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c3080, 
cid 4, qid 0 00:31:32.657 [2024-10-01 15:49:12.022383] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:32.657 [2024-10-01 15:49:12.022390] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:32.657 [2024-10-01 15:49:12.022393] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:32.657 [2024-10-01 15:49:12.022397] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1556060): datao=0, datal=3072, cccid=4 00:31:32.658 [2024-10-01 15:49:12.022402] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15c3080) on tqpair(0x1556060): expected_datao=0, payload_size=3072 00:31:32.658 [2024-10-01 15:49:12.022406] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.658 [2024-10-01 15:49:12.022413] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:32.658 [2024-10-01 15:49:12.022421] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:32.658 [2024-10-01 15:49:12.022566] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.658 [2024-10-01 15:49:12.022572] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.658 [2024-10-01 15:49:12.022575] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.658 [2024-10-01 15:49:12.022579] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c3080) on tqpair=0x1556060 00:31:32.658 [2024-10-01 15:49:12.022587] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.658 [2024-10-01 15:49:12.022591] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1556060) 00:31:32.658 [2024-10-01 15:49:12.022597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.658 [2024-10-01 15:49:12.022612] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x15c3080, cid 4, qid 0 00:31:32.658 [2024-10-01 15:49:12.022882] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:32.658 [2024-10-01 15:49:12.022888] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:32.658 [2024-10-01 15:49:12.022892] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:32.658 [2024-10-01 15:49:12.022900] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1556060): datao=0, datal=8, cccid=4 00:31:32.658 [2024-10-01 15:49:12.022904] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15c3080) on tqpair(0x1556060): expected_datao=0, payload_size=8 00:31:32.658 [2024-10-01 15:49:12.022909] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.658 [2024-10-01 15:49:12.022915] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:32.658 [2024-10-01 15:49:12.022919] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:32.658 [2024-10-01 15:49:12.067902] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.658 [2024-10-01 15:49:12.067911] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.658 [2024-10-01 15:49:12.067915] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.658 [2024-10-01 15:49:12.067919] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c3080) on tqpair=0x1556060 00:31:32.658 ===================================================== 00:31:32.658 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:31:32.658 ===================================================== 00:31:32.658 Controller Capabilities/Features 00:31:32.658 ================================ 00:31:32.658 Vendor ID: 0000 00:31:32.658 Subsystem Vendor ID: 0000 00:31:32.658 Serial Number: .................... 00:31:32.658 Model Number: ........................................ 
00:31:32.658 Firmware Version: 25.01 00:31:32.658 Recommended Arb Burst: 0 00:31:32.658 IEEE OUI Identifier: 00 00 00 00:31:32.658 Multi-path I/O 00:31:32.658 May have multiple subsystem ports: No 00:31:32.658 May have multiple controllers: No 00:31:32.658 Associated with SR-IOV VF: No 00:31:32.658 Max Data Transfer Size: 131072 00:31:32.658 Max Number of Namespaces: 0 00:31:32.658 Max Number of I/O Queues: 1024 00:31:32.658 NVMe Specification Version (VS): 1.3 00:31:32.658 NVMe Specification Version (Identify): 1.3 00:31:32.658 Maximum Queue Entries: 128 00:31:32.658 Contiguous Queues Required: Yes 00:31:32.658 Arbitration Mechanisms Supported 00:31:32.658 Weighted Round Robin: Not Supported 00:31:32.658 Vendor Specific: Not Supported 00:31:32.658 Reset Timeout: 15000 ms 00:31:32.658 Doorbell Stride: 4 bytes 00:31:32.658 NVM Subsystem Reset: Not Supported 00:31:32.658 Command Sets Supported 00:31:32.658 NVM Command Set: Supported 00:31:32.658 Boot Partition: Not Supported 00:31:32.658 Memory Page Size Minimum: 4096 bytes 00:31:32.658 Memory Page Size Maximum: 4096 bytes 00:31:32.658 Persistent Memory Region: Not Supported 00:31:32.658 Optional Asynchronous Events Supported 00:31:32.658 Namespace Attribute Notices: Not Supported 00:31:32.658 Firmware Activation Notices: Not Supported 00:31:32.658 ANA Change Notices: Not Supported 00:31:32.658 PLE Aggregate Log Change Notices: Not Supported 00:31:32.658 LBA Status Info Alert Notices: Not Supported 00:31:32.658 EGE Aggregate Log Change Notices: Not Supported 00:31:32.658 Normal NVM Subsystem Shutdown event: Not Supported 00:31:32.658 Zone Descriptor Change Notices: Not Supported 00:31:32.658 Discovery Log Change Notices: Supported 00:31:32.658 Controller Attributes 00:31:32.658 128-bit Host Identifier: Not Supported 00:31:32.658 Non-Operational Permissive Mode: Not Supported 00:31:32.658 NVM Sets: Not Supported 00:31:32.658 Read Recovery Levels: Not Supported 00:31:32.658 Endurance Groups: Not Supported 00:31:32.658 
Predictable Latency Mode: Not Supported 00:31:32.658 Traffic Based Keep Alive: Not Supported 00:31:32.658 Namespace Granularity: Not Supported 00:31:32.658 SQ Associations: Not Supported 00:31:32.658 UUID List: Not Supported 00:31:32.658 Multi-Domain Subsystem: Not Supported 00:31:32.658 Fixed Capacity Management: Not Supported 00:31:32.658 Variable Capacity Management: Not Supported 00:31:32.658 Delete Endurance Group: Not Supported 00:31:32.658 Delete NVM Set: Not Supported 00:31:32.658 Extended LBA Formats Supported: Not Supported 00:31:32.658 Flexible Data Placement Supported: Not Supported 00:31:32.658 00:31:32.658 Controller Memory Buffer Support 00:31:32.658 ================================ 00:31:32.658 Supported: No 00:31:32.658 00:31:32.658 Persistent Memory Region Support 00:31:32.658 ================================ 00:31:32.658 Supported: No 00:31:32.658 00:31:32.658 Admin Command Set Attributes 00:31:32.658 ============================ 00:31:32.658 Security Send/Receive: Not Supported 00:31:32.658 Format NVM: Not Supported 00:31:32.658 Firmware Activate/Download: Not Supported 00:31:32.658 Namespace Management: Not Supported 00:31:32.658 Device Self-Test: Not Supported 00:31:32.658 Directives: Not Supported 00:31:32.658 NVMe-MI: Not Supported 00:31:32.658 Virtualization Management: Not Supported 00:31:32.658 Doorbell Buffer Config: Not Supported 00:31:32.658 Get LBA Status Capability: Not Supported 00:31:32.658 Command & Feature Lockdown Capability: Not Supported 00:31:32.659 Abort Command Limit: 1 00:31:32.659 Async Event Request Limit: 4 00:31:32.659 Number of Firmware Slots: N/A 00:31:32.659 Firmware Slot 1 Read-Only: N/A 00:31:32.659 Firmware Activation Without Reset: N/A 00:31:32.659 Multiple Update Detection Support: N/A 00:31:32.659 Firmware Update Granularity: No Information Provided 00:31:32.659 Per-Namespace SMART Log: No 00:31:32.659 Asymmetric Namespace Access Log Page: Not Supported 00:31:32.659 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:31:32.659 Command Effects Log Page: Not Supported 00:31:32.659 Get Log Page Extended Data: Supported 00:31:32.659 Telemetry Log Pages: Not Supported 00:31:32.659 Persistent Event Log Pages: Not Supported 00:31:32.659 Supported Log Pages Log Page: May Support 00:31:32.659 Commands Supported & Effects Log Page: Not Supported 00:31:32.659 Feature Identifiers & Effects Log Page: May Support 00:31:32.659 NVMe-MI Commands & Effects Log Page: May Support 00:31:32.659 Data Area 4 for Telemetry Log: Not Supported 00:31:32.659 Error Log Page Entries Supported: 128 00:31:32.659 Keep Alive: Not Supported 00:31:32.659 00:31:32.659 NVM Command Set Attributes 00:31:32.659 ========================== 00:31:32.659 Submission Queue Entry Size 00:31:32.659 Max: 1 00:31:32.659 Min: 1 00:31:32.659 Completion Queue Entry Size 00:31:32.659 Max: 1 00:31:32.659 Min: 1 00:31:32.659 Number of Namespaces: 0 00:31:32.659 Compare Command: Not Supported 00:31:32.659 Write Uncorrectable Command: Not Supported 00:31:32.659 Dataset Management Command: Not Supported 00:31:32.659 Write Zeroes Command: Not Supported 00:31:32.659 Set Features Save Field: Not Supported 00:31:32.659 Reservations: Not Supported 00:31:32.659 Timestamp: Not Supported 00:31:32.659 Copy: Not Supported 00:31:32.659 Volatile Write Cache: Not Present 00:31:32.659 Atomic Write Unit (Normal): 1 00:31:32.659 Atomic Write Unit (PFail): 1 00:31:32.659 Atomic Compare & Write Unit: 1 00:31:32.659 Fused Compare & Write: Supported 00:31:32.659 Scatter-Gather List 00:31:32.659 SGL Command Set: Supported 00:31:32.659 SGL Keyed: Supported 00:31:32.659 SGL Bit Bucket Descriptor: Not Supported 00:31:32.659 SGL Metadata Pointer: Not Supported 00:31:32.659 Oversized SGL: Not Supported 00:31:32.659 SGL Metadata Address: Not Supported 00:31:32.659 SGL Offset: Supported 00:31:32.659 Transport SGL Data Block: Not Supported 00:31:32.659 Replay Protected Memory Block: Not Supported 00:31:32.659 00:31:32.659 
Firmware Slot Information 00:31:32.659 ========================= 00:31:32.659 Active slot: 0 00:31:32.659 00:31:32.659 00:31:32.659 Error Log 00:31:32.659 ========= 00:31:32.659 00:31:32.659 Active Namespaces 00:31:32.659 ================= 00:31:32.659 Discovery Log Page 00:31:32.659 ================== 00:31:32.659 Generation Counter: 2 00:31:32.659 Number of Records: 2 00:31:32.659 Record Format: 0 00:31:32.659 00:31:32.659 Discovery Log Entry 0 00:31:32.659 ---------------------- 00:31:32.659 Transport Type: 3 (TCP) 00:31:32.659 Address Family: 1 (IPv4) 00:31:32.659 Subsystem Type: 3 (Current Discovery Subsystem) 00:31:32.659 Entry Flags: 00:31:32.659 Duplicate Returned Information: 1 00:31:32.659 Explicit Persistent Connection Support for Discovery: 1 00:31:32.659 Transport Requirements: 00:31:32.659 Secure Channel: Not Required 00:31:32.659 Port ID: 0 (0x0000) 00:31:32.659 Controller ID: 65535 (0xffff) 00:31:32.659 Admin Max SQ Size: 128 00:31:32.659 Transport Service Identifier: 4420 00:31:32.659 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:31:32.659 Transport Address: 10.0.0.2 00:31:32.659 Discovery Log Entry 1 00:31:32.659 ---------------------- 00:31:32.659 Transport Type: 3 (TCP) 00:31:32.659 Address Family: 1 (IPv4) 00:31:32.659 Subsystem Type: 2 (NVM Subsystem) 00:31:32.659 Entry Flags: 00:31:32.659 Duplicate Returned Information: 0 00:31:32.659 Explicit Persistent Connection Support for Discovery: 0 00:31:32.659 Transport Requirements: 00:31:32.659 Secure Channel: Not Required 00:31:32.659 Port ID: 0 (0x0000) 00:31:32.659 Controller ID: 65535 (0xffff) 00:31:32.659 Admin Max SQ Size: 128 00:31:32.659 Transport Service Identifier: 4420 00:31:32.659 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:31:32.659 Transport Address: 10.0.0.2 [2024-10-01 15:49:12.068015] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:31:32.659 [2024-10-01 15:49:12.068027] 
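The discovery log dump above was fetched with three GET LOG PAGE commands (cdw10 values 00ff0070, 02ff0070, and 00010070, transferring 1024, 3072, and 8 bytes), and its entries print raw enum codes (Transport Type 3, Address Family 1, Subsystem Type 2/3). A sketch of how those dword values and codes decode, with the field layout taken from the NVMe base and NVMe-oF specs; the helper name and lookup tables are ours, not SPDK's:

```python
# Get Log Page cdw10: bits [7:0] = log page ID, bits [31:16] = NUMDL,
# the low 16 bits of (number of dwords - 1). 0x70 is the discovery log.
DISCOVERY_LOG_PAGE = 0x70

# Discovery log entry enum codes, per the NVMe-oF spec, matching the
# values printed in the dump above.
TRTYPE = {1: "RDMA", 2: "FC", 3: "TCP"}
ADRFAM = {1: "IPv4", 2: "IPv6"}
SUBTYPE = {2: "NVM Subsystem", 3: "Current Discovery Subsystem"}

def get_log_page_cdw10(lid: int, nbytes: int) -> int:
    """Build cdw10 for a Get Log Page transferring nbytes bytes."""
    numd = nbytes // 4  # transfer size in dwords
    return ((numd - 1) & 0xFFFF) << 16 | (lid & 0xFF)

# The three fetches seen in the trace: the 1024-byte header read, the
# 3072-byte remainder, and the final 8-byte generation-counter check.
for nbytes, expect in [(1024, 0x00FF0070), (3072, 0x02FF0070), (8, 0x00010070)]:
    assert get_log_page_cdw10(DISCOVERY_LOG_PAGE, nbytes) == expect

# Discovery Log Entry 1 above: trtype 3, adrfam 1, subtype 2.
print(TRTYPE[3], ADRFAM[1], SUBTYPE[2])
```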
nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c2a80) on tqpair=0x1556060 00:31:32.659 [2024-10-01 15:49:12.068034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.659 [2024-10-01 15:49:12.068040] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c2c00) on tqpair=0x1556060 00:31:32.659 [2024-10-01 15:49:12.068045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.659 [2024-10-01 15:49:12.068050] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c2d80) on tqpair=0x1556060 00:31:32.659 [2024-10-01 15:49:12.068055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.659 [2024-10-01 15:49:12.068060] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c2f00) on tqpair=0x1556060 00:31:32.659 [2024-10-01 15:49:12.068065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.659 [2024-10-01 15:49:12.068075] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.659 [2024-10-01 15:49:12.068078] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.659 [2024-10-01 15:49:12.068082] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1556060) 00:31:32.659 [2024-10-01 15:49:12.068090] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.659 [2024-10-01 15:49:12.068105] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c2f00, cid 3, qid 0 00:31:32.659 [2024-10-01 15:49:12.068325] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.659 [2024-10-01 15:49:12.068332] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.659 [2024-10-01 15:49:12.068335] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.659 [2024-10-01 15:49:12.068339] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c2f00) on tqpair=0x1556060 00:31:32.659 [2024-10-01 15:49:12.068347] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.659 [2024-10-01 15:49:12.068351] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.659 [2024-10-01 15:49:12.068354] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1556060) 00:31:32.659 [2024-10-01 15:49:12.068361] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.659 [2024-10-01 15:49:12.068375] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c2f00, cid 3, qid 0 00:31:32.659 [2024-10-01 15:49:12.068598] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.659 [2024-10-01 15:49:12.068605] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.659 [2024-10-01 15:49:12.068608] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.660 [2024-10-01 15:49:12.068612] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c2f00) on tqpair=0x1556060 00:31:32.660 [2024-10-01 15:49:12.068617] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:31:32.660 [2024-10-01 15:49:12.068626] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:31:32.660 [2024-10-01 15:49:12.068635] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.660 [2024-10-01 15:49:12.068639] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.660 [2024-10-01 
15:49:12.068643] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1556060) 00:31:32.660 [2024-10-01 15:49:12.068650] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.660 [2024-10-01 15:49:12.068660] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c2f00, cid 3, qid 0 00:31:32.660 [2024-10-01 15:49:12.068877] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.660 [2024-10-01 15:49:12.068883] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.660 [2024-10-01 15:49:12.068887] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.660 [2024-10-01 15:49:12.068891] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c2f00) on tqpair=0x1556060 00:31:32.660 [2024-10-01 15:49:12.068906] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.660 [2024-10-01 15:49:12.068910] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.660 [2024-10-01 15:49:12.068914] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1556060) 00:31:32.660 [2024-10-01 15:49:12.068921] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.660 [2024-10-01 15:49:12.068931] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c2f00, cid 3, qid 0 00:31:32.660 [2024-10-01 15:49:12.069181] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.660 [2024-10-01 15:49:12.069188] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.660 [2024-10-01 15:49:12.069191] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.660 [2024-10-01 15:49:12.069195] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c2f00) on tqpair=0x1556060 
00:31:32.660 [2024-10-01 15:49:12.069205] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.660 [2024-10-01 15:49:12.069209] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.660 [2024-10-01 15:49:12.069213] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1556060) 00:31:32.660 [2024-10-01 15:49:12.069222] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.660 [2024-10-01 15:49:12.069233] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c2f00, cid 3, qid 0 00:31:32.660 [2024-10-01 15:49:12.069432] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.660 [2024-10-01 15:49:12.069438] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.660 [2024-10-01 15:49:12.069442] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.660 [2024-10-01 15:49:12.069445] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c2f00) on tqpair=0x1556060 00:31:32.660 [2024-10-01 15:49:12.069455] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.660 [2024-10-01 15:49:12.069459] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.660 [2024-10-01 15:49:12.069463] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1556060) 00:31:32.660 [2024-10-01 15:49:12.069469] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.660 [2024-10-01 15:49:12.069479] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c2f00, cid 3, qid 0 00:31:32.660 [2024-10-01 15:49:12.069652] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.660 [2024-10-01 15:49:12.069658] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.660 
[2024-10-01 15:49:12.069662] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.660 [2024-10-01 15:49:12.069666] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c2f00) on tqpair=0x1556060 00:31:32.660 [2024-10-01 15:49:12.069676] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.660 [2024-10-01 15:49:12.069680] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.660 [2024-10-01 15:49:12.069683] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1556060) 00:31:32.660 [2024-10-01 15:49:12.069690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.660 [2024-10-01 15:49:12.069700] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c2f00, cid 3, qid 0 00:31:32.660 [2024-10-01 15:49:12.069935] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.660 [2024-10-01 15:49:12.069942] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.660 [2024-10-01 15:49:12.069945] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.660 [2024-10-01 15:49:12.069949] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c2f00) on tqpair=0x1556060 00:31:32.660 [2024-10-01 15:49:12.069959] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.660 [2024-10-01 15:49:12.069963] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.660 [2024-10-01 15:49:12.069966] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1556060) 00:31:32.660 [2024-10-01 15:49:12.069973] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.660 [2024-10-01 15:49:12.069984] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c2f00, cid 3, qid 
0 00:31:32.660 [2024-10-01 15:49:12.070189] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.660 [2024-10-01 15:49:12.070195] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.660 [2024-10-01 15:49:12.070198] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.660 [2024-10-01 15:49:12.070202] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c2f00) on tqpair=0x1556060 00:31:32.660 [2024-10-01 15:49:12.070212] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.660 [2024-10-01 15:49:12.070216] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.660 [2024-10-01 15:49:12.070220] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1556060) 00:31:32.660 [2024-10-01 15:49:12.070226] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.660 [2024-10-01 15:49:12.070241] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c2f00, cid 3, qid 0 00:31:32.660 [2024-10-01 15:49:12.070491] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.660 [2024-10-01 15:49:12.070497] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.660 [2024-10-01 15:49:12.070501] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.660 [2024-10-01 15:49:12.070504] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c2f00) on tqpair=0x1556060 00:31:32.660 [2024-10-01 15:49:12.070514] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.660 [2024-10-01 15:49:12.070518] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.660 [2024-10-01 15:49:12.070522] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1556060) 00:31:32.660 [2024-10-01 15:49:12.070528] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.660 [2024-10-01 15:49:12.070538] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c2f00, cid 3, qid 0 00:31:32.660 [2024-10-01 15:49:12.070756] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.660 [2024-10-01 15:49:12.070762] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.660 [2024-10-01 15:49:12.070766] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.660 [2024-10-01 15:49:12.070770] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c2f00) on tqpair=0x1556060 00:31:32.660 [2024-10-01 15:49:12.070779] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.660 [2024-10-01 15:49:12.070783] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.660 [2024-10-01 15:49:12.070787] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1556060) 00:31:32.660 [2024-10-01 15:49:12.070794] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.660 [2024-10-01 15:49:12.070804] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c2f00, cid 3, qid 0 00:31:32.660 [2024-10-01 15:49:12.071045] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.660 [2024-10-01 15:49:12.071051] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.660 [2024-10-01 15:49:12.071055] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.660 [2024-10-01 15:49:12.071059] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c2f00) on tqpair=0x1556060 00:31:32.660 [2024-10-01 15:49:12.071069] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.660 [2024-10-01 15:49:12.071073] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.660 [2024-10-01 15:49:12.071076] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1556060) 00:31:32.661 [2024-10-01 15:49:12.071083] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.661 [2024-10-01 15:49:12.071093] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c2f00, cid 3, qid 0 00:31:32.661 [2024-10-01 15:49:12.071347] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.661 [2024-10-01 15:49:12.071354] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.661 [2024-10-01 15:49:12.071357] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.661 [2024-10-01 15:49:12.071361] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c2f00) on tqpair=0x1556060 00:31:32.661 [2024-10-01 15:49:12.071371] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.661 [2024-10-01 15:49:12.071375] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.661 [2024-10-01 15:49:12.071378] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1556060) 00:31:32.661 [2024-10-01 15:49:12.071385] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.661 [2024-10-01 15:49:12.071395] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c2f00, cid 3, qid 0 00:31:32.661 [2024-10-01 15:49:12.071650] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.661 [2024-10-01 15:49:12.071657] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.661 [2024-10-01 15:49:12.071660] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.661 [2024-10-01 15:49:12.071664] 
nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c2f00) on tqpair=0x1556060 00:31:32.661 [2024-10-01 15:49:12.071674] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.661 [2024-10-01 15:49:12.071678] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.661 [2024-10-01 15:49:12.071681] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1556060) 00:31:32.661 [2024-10-01 15:49:12.071688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.661 [2024-10-01 15:49:12.071698] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c2f00, cid 3, qid 0 00:31:32.661 [2024-10-01 15:49:12.075901] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.661 [2024-10-01 15:49:12.075909] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.661 [2024-10-01 15:49:12.075913] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.661 [2024-10-01 15:49:12.075917] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c2f00) on tqpair=0x1556060 00:31:32.661 [2024-10-01 15:49:12.075927] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.661 [2024-10-01 15:49:12.075931] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.661 [2024-10-01 15:49:12.075934] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1556060) 00:31:32.661 [2024-10-01 15:49:12.075941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.661 [2024-10-01 15:49:12.075953] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c2f00, cid 3, qid 0 00:31:32.661 [2024-10-01 15:49:12.076184] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.661 [2024-10-01 
15:49:12.076191] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.661 [2024-10-01 15:49:12.076194] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.661 [2024-10-01 15:49:12.076198] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c2f00) on tqpair=0x1556060 00:31:32.661 [2024-10-01 15:49:12.076206] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:31:32.661 00:31:32.661 15:49:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:31:32.929 [2024-10-01 15:49:12.122605] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:31:32.929 [2024-10-01 15:49:12.122650] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3299754 ] 00:31:32.929 [2024-10-01 15:49:12.137505] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:31:32.929 [2024-10-01 15:49:12.160950] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:31:32.929 [2024-10-01 15:49:12.161003] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:31:32.929 [2024-10-01 15:49:12.161008] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:31:32.929 [2024-10-01 15:49:12.161029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:31:32.929 [2024-10-01 15:49:12.161038] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:31:32.929 [2024-10-01 15:49:12.161794] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:31:32.929 [2024-10-01 15:49:12.161832] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xbc9060 0 00:31:32.929 [2024-10-01 15:49:12.175914] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:31:32.929 [2024-10-01 15:49:12.175931] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:31:32.929 [2024-10-01 15:49:12.175935] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:31:32.929 [2024-10-01 15:49:12.175939] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:31:32.929 [2024-10-01 15:49:12.175971] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.929 [2024-10-01 15:49:12.175977] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.929 [2024-10-01 15:49:12.175981] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbc9060) 00:31:32.929 [2024-10-01 15:49:12.175996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:31:32.929 [2024-10-01 15:49:12.176018] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc35a80, cid 0, qid 0 00:31:32.929 [2024-10-01 15:49:12.183906] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.929 [2024-10-01 15:49:12.183919] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.929 [2024-10-01 15:49:12.183923] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.929 [2024-10-01 15:49:12.183927] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc35a80) on tqpair=0xbc9060 00:31:32.929 [2024-10-01 15:49:12.183940] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:31:32.929 [2024-10-01 15:49:12.183947] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:31:32.929 [2024-10-01 15:49:12.183953] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:31:32.929 [2024-10-01 15:49:12.183967] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.929 [2024-10-01 15:49:12.183971] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.929 [2024-10-01 15:49:12.183975] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbc9060) 00:31:32.929 [2024-10-01 15:49:12.183983] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.929 [2024-10-01 15:49:12.183999] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc35a80, cid 0, qid 0 00:31:32.929 [2024-10-01 15:49:12.184233] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.929 [2024-10-01 15:49:12.184240] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.929 [2024-10-01 15:49:12.184244] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.929 [2024-10-01 15:49:12.184248] 
nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc35a80) on tqpair=0xbc9060 00:31:32.929 [2024-10-01 15:49:12.184253] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:31:32.929 [2024-10-01 15:49:12.184262] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:31:32.929 [2024-10-01 15:49:12.184269] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.929 [2024-10-01 15:49:12.184273] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.929 [2024-10-01 15:49:12.184276] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbc9060) 00:31:32.929 [2024-10-01 15:49:12.184283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.929 [2024-10-01 15:49:12.184294] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc35a80, cid 0, qid 0 00:31:32.929 [2024-10-01 15:49:12.184550] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.929 [2024-10-01 15:49:12.184557] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.929 [2024-10-01 15:49:12.184561] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.929 [2024-10-01 15:49:12.184565] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc35a80) on tqpair=0xbc9060 00:31:32.929 [2024-10-01 15:49:12.184570] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:31:32.929 [2024-10-01 15:49:12.184578] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:31:32.929 [2024-10-01 15:49:12.184585] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.929 
[2024-10-01 15:49:12.184589] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.929 [2024-10-01 15:49:12.184592] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbc9060) 00:31:32.929 [2024-10-01 15:49:12.184599] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.929 [2024-10-01 15:49:12.184610] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc35a80, cid 0, qid 0 00:31:32.929 [2024-10-01 15:49:12.184689] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.929 [2024-10-01 15:49:12.184695] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.929 [2024-10-01 15:49:12.184699] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.929 [2024-10-01 15:49:12.184702] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc35a80) on tqpair=0xbc9060 00:31:32.929 [2024-10-01 15:49:12.184708] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:31:32.929 [2024-10-01 15:49:12.184717] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.929 [2024-10-01 15:49:12.184721] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.929 [2024-10-01 15:49:12.184725] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbc9060) 00:31:32.930 [2024-10-01 15:49:12.184731] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.930 [2024-10-01 15:49:12.184743] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc35a80, cid 0, qid 0 00:31:32.930 [2024-10-01 15:49:12.184817] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.930 [2024-10-01 15:49:12.184823] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.930 [2024-10-01 15:49:12.184826] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.930 [2024-10-01 15:49:12.184830] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc35a80) on tqpair=0xbc9060 00:31:32.930 [2024-10-01 15:49:12.184835] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:31:32.930 [2024-10-01 15:49:12.184840] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:31:32.930 [2024-10-01 15:49:12.184848] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:31:32.930 [2024-10-01 15:49:12.184954] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:31:32.930 [2024-10-01 15:49:12.184959] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:31:32.930 [2024-10-01 15:49:12.184967] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.930 [2024-10-01 15:49:12.184971] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.930 [2024-10-01 15:49:12.184975] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbc9060) 00:31:32.930 [2024-10-01 15:49:12.184985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.930 [2024-10-01 15:49:12.184998] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc35a80, cid 0, qid 0 00:31:32.930 [2024-10-01 15:49:12.185242] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.930 [2024-10-01 15:49:12.185249] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.930 [2024-10-01 15:49:12.185253] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.930 [2024-10-01 15:49:12.185257] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc35a80) on tqpair=0xbc9060 00:31:32.930 [2024-10-01 15:49:12.185262] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:31:32.930 [2024-10-01 15:49:12.185273] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.930 [2024-10-01 15:49:12.185280] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.930 [2024-10-01 15:49:12.185284] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbc9060) 00:31:32.930 [2024-10-01 15:49:12.185292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.930 [2024-10-01 15:49:12.185304] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc35a80, cid 0, qid 0 00:31:32.930 [2024-10-01 15:49:12.185471] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.930 [2024-10-01 15:49:12.185478] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.930 [2024-10-01 15:49:12.185482] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.930 [2024-10-01 15:49:12.185485] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc35a80) on tqpair=0xbc9060 00:31:32.930 [2024-10-01 15:49:12.185490] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:31:32.930 [2024-10-01 15:49:12.185495] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:31:32.930 [2024-10-01 
15:49:12.185502] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:31:32.930 [2024-10-01 15:49:12.185510] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:31:32.930 [2024-10-01 15:49:12.185519] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.930 [2024-10-01 15:49:12.185523] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbc9060) 00:31:32.930 [2024-10-01 15:49:12.185530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.930 [2024-10-01 15:49:12.185540] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc35a80, cid 0, qid 0 00:31:32.930 [2024-10-01 15:49:12.185780] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:32.930 [2024-10-01 15:49:12.185786] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:32.930 [2024-10-01 15:49:12.185790] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:32.930 [2024-10-01 15:49:12.185795] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbc9060): datao=0, datal=4096, cccid=0 00:31:32.930 [2024-10-01 15:49:12.185799] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc35a80) on tqpair(0xbc9060): expected_datao=0, payload_size=4096 00:31:32.930 [2024-10-01 15:49:12.185804] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.930 [2024-10-01 15:49:12.185812] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:32.930 [2024-10-01 15:49:12.185816] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:32.930 [2024-10-01 15:49:12.185957] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.930 
[2024-10-01 15:49:12.185964] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.930 [2024-10-01 15:49:12.185970] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.930 [2024-10-01 15:49:12.185975] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc35a80) on tqpair=0xbc9060 00:31:32.930 [2024-10-01 15:49:12.185990] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:31:32.930 [2024-10-01 15:49:12.185998] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:31:32.930 [2024-10-01 15:49:12.186004] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:31:32.930 [2024-10-01 15:49:12.186009] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:31:32.930 [2024-10-01 15:49:12.186014] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:31:32.930 [2024-10-01 15:49:12.186018] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:31:32.930 [2024-10-01 15:49:12.186027] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:31:32.930 [2024-10-01 15:49:12.186033] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.930 [2024-10-01 15:49:12.186039] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.930 [2024-10-01 15:49:12.186043] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbc9060) 00:31:32.930 [2024-10-01 15:49:12.186050] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:32.930 
[2024-10-01 15:49:12.186061] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc35a80, cid 0, qid 0 00:31:32.930 [2024-10-01 15:49:12.186239] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.930 [2024-10-01 15:49:12.186247] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.930 [2024-10-01 15:49:12.186252] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.930 [2024-10-01 15:49:12.186257] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc35a80) on tqpair=0xbc9060 00:31:32.930 [2024-10-01 15:49:12.186264] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.930 [2024-10-01 15:49:12.186268] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.930 [2024-10-01 15:49:12.186272] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbc9060) 00:31:32.930 [2024-10-01 15:49:12.186278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:32.930 [2024-10-01 15:49:12.186288] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.930 [2024-10-01 15:49:12.186292] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.930 [2024-10-01 15:49:12.186296] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xbc9060) 00:31:32.930 [2024-10-01 15:49:12.186302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:32.930 [2024-10-01 15:49:12.186308] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.930 [2024-10-01 15:49:12.186312] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.930 [2024-10-01 15:49:12.186315] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xbc9060) 00:31:32.930 [2024-10-01 
15:49:12.186324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:32.930 [2024-10-01 15:49:12.186331] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.930 [2024-10-01 15:49:12.186335] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.931 [2024-10-01 15:49:12.186338] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbc9060) 00:31:32.931 [2024-10-01 15:49:12.186344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:32.931 [2024-10-01 15:49:12.186351] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:31:32.931 [2024-10-01 15:49:12.186363] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:31:32.931 [2024-10-01 15:49:12.186370] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.931 [2024-10-01 15:49:12.186376] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbc9060) 00:31:32.931 [2024-10-01 15:49:12.186383] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.931 [2024-10-01 15:49:12.186396] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc35a80, cid 0, qid 0 00:31:32.931 [2024-10-01 15:49:12.186401] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc35c00, cid 1, qid 0 00:31:32.931 [2024-10-01 15:49:12.186406] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc35d80, cid 2, qid 0 00:31:32.931 [2024-10-01 15:49:12.186412] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc35f00, cid 
3, qid 0 00:31:32.931 [2024-10-01 15:49:12.186418] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc36080, cid 4, qid 0 00:31:32.931 [2024-10-01 15:49:12.186682] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.931 [2024-10-01 15:49:12.186690] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.931 [2024-10-01 15:49:12.186693] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.931 [2024-10-01 15:49:12.186697] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc36080) on tqpair=0xbc9060 00:31:32.931 [2024-10-01 15:49:12.186702] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:31:32.931 [2024-10-01 15:49:12.186707] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:31:32.931 [2024-10-01 15:49:12.186716] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:31:32.931 [2024-10-01 15:49:12.186725] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:31:32.931 [2024-10-01 15:49:12.186732] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.931 [2024-10-01 15:49:12.186736] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.931 [2024-10-01 15:49:12.186739] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbc9060) 00:31:32.931 [2024-10-01 15:49:12.186746] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:32.931 [2024-10-01 15:49:12.186756] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc36080, cid 4, qid 0 
00:31:32.931 [2024-10-01 15:49:12.186943] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.931 [2024-10-01 15:49:12.186951] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.931 [2024-10-01 15:49:12.186955] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.931 [2024-10-01 15:49:12.186959] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc36080) on tqpair=0xbc9060 00:31:32.931 [2024-10-01 15:49:12.187029] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:31:32.931 [2024-10-01 15:49:12.187040] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:31:32.931 [2024-10-01 15:49:12.187047] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.931 [2024-10-01 15:49:12.187051] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbc9060) 00:31:32.931 [2024-10-01 15:49:12.187058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.931 [2024-10-01 15:49:12.187071] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc36080, cid 4, qid 0 00:31:32.931 [2024-10-01 15:49:12.187312] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:32.931 [2024-10-01 15:49:12.187319] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:32.931 [2024-10-01 15:49:12.187323] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:32.931 [2024-10-01 15:49:12.187326] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbc9060): datao=0, datal=4096, cccid=4 00:31:32.931 [2024-10-01 15:49:12.187331] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc36080) on 
tqpair(0xbc9060): expected_datao=0, payload_size=4096 00:31:32.931 [2024-10-01 15:49:12.187335] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.931 [2024-10-01 15:49:12.187343] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:32.931 [2024-10-01 15:49:12.187347] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:32.931 [2024-10-01 15:49:12.187499] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.931 [2024-10-01 15:49:12.187506] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.931 [2024-10-01 15:49:12.187509] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.931 [2024-10-01 15:49:12.187513] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc36080) on tqpair=0xbc9060 00:31:32.931 [2024-10-01 15:49:12.187522] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:31:32.931 [2024-10-01 15:49:12.187539] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:31:32.931 [2024-10-01 15:49:12.187549] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:31:32.931 [2024-10-01 15:49:12.187556] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.931 [2024-10-01 15:49:12.187560] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbc9060) 00:31:32.931 [2024-10-01 15:49:12.187566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.931 [2024-10-01 15:49:12.187577] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc36080, cid 4, qid 0 00:31:32.931 [2024-10-01 15:49:12.187768] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 
00:31:32.931 [2024-10-01 15:49:12.187774] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:32.931 [2024-10-01 15:49:12.187778] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:32.931 [2024-10-01 15:49:12.187781] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbc9060): datao=0, datal=4096, cccid=4 00:31:32.931 [2024-10-01 15:49:12.187786] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc36080) on tqpair(0xbc9060): expected_datao=0, payload_size=4096 00:31:32.931 [2024-10-01 15:49:12.187790] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.931 [2024-10-01 15:49:12.187813] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:32.931 [2024-10-01 15:49:12.187817] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:32.931 [2024-10-01 15:49:12.191904] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.931 [2024-10-01 15:49:12.191914] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.931 [2024-10-01 15:49:12.191918] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.931 [2024-10-01 15:49:12.191922] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc36080) on tqpair=0xbc9060 00:31:32.931 [2024-10-01 15:49:12.191937] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:31:32.931 [2024-10-01 15:49:12.191948] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:31:32.931 [2024-10-01 15:49:12.191964] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.931 [2024-10-01 15:49:12.191968] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbc9060) 00:31:32.931 [2024-10-01 15:49:12.191975] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.931 [2024-10-01 15:49:12.191987] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc36080, cid 4, qid 0 00:31:32.931 [2024-10-01 15:49:12.192221] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:32.931 [2024-10-01 15:49:12.192227] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:32.931 [2024-10-01 15:49:12.192231] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:32.931 [2024-10-01 15:49:12.192234] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbc9060): datao=0, datal=4096, cccid=4 00:31:32.931 [2024-10-01 15:49:12.192239] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc36080) on tqpair(0xbc9060): expected_datao=0, payload_size=4096 00:31:32.932 [2024-10-01 15:49:12.192243] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.932 [2024-10-01 15:49:12.192256] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:32.932 [2024-10-01 15:49:12.192260] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:32.932 [2024-10-01 15:49:12.236907] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.932 [2024-10-01 15:49:12.236921] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.932 [2024-10-01 15:49:12.236924] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.932 [2024-10-01 15:49:12.236929] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc36080) on tqpair=0xbc9060 00:31:32.932 [2024-10-01 15:49:12.236939] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:31:32.932 [2024-10-01 15:49:12.236948] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:31:32.932 [2024-10-01 15:49:12.236960] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:31:32.932 [2024-10-01 15:49:12.236967] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:31:32.932 [2024-10-01 15:49:12.236972] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:31:32.932 [2024-10-01 15:49:12.236978] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:31:32.932 [2024-10-01 15:49:12.236983] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:31:32.932 [2024-10-01 15:49:12.236988] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:31:32.932 [2024-10-01 15:49:12.236994] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:31:32.932 [2024-10-01 15:49:12.237012] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.932 [2024-10-01 15:49:12.237016] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbc9060) 00:31:32.932 [2024-10-01 15:49:12.237024] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.932 [2024-10-01 15:49:12.237031] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.932 [2024-10-01 15:49:12.237035] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.932 [2024-10-01 
15:49:12.237039] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xbc9060) 00:31:32.932 [2024-10-01 15:49:12.237045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:31:32.932 [2024-10-01 15:49:12.237064] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc36080, cid 4, qid 0 00:31:32.932 [2024-10-01 15:49:12.237070] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc36200, cid 5, qid 0 00:31:32.932 [2024-10-01 15:49:12.237322] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.932 [2024-10-01 15:49:12.237328] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.932 [2024-10-01 15:49:12.237332] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.932 [2024-10-01 15:49:12.237336] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc36080) on tqpair=0xbc9060 00:31:32.932 [2024-10-01 15:49:12.237343] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.932 [2024-10-01 15:49:12.237349] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.932 [2024-10-01 15:49:12.237352] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.932 [2024-10-01 15:49:12.237356] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc36200) on tqpair=0xbc9060 00:31:32.932 [2024-10-01 15:49:12.237365] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.932 [2024-10-01 15:49:12.237369] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xbc9060) 00:31:32.932 [2024-10-01 15:49:12.237376] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.932 [2024-10-01 15:49:12.237386] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: 
tcp req 0xc36200, cid 5, qid 0 00:31:32.932 [2024-10-01 15:49:12.237607] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.932 [2024-10-01 15:49:12.237613] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.932 [2024-10-01 15:49:12.237617] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.932 [2024-10-01 15:49:12.237621] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc36200) on tqpair=0xbc9060 00:31:32.932 [2024-10-01 15:49:12.237630] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.932 [2024-10-01 15:49:12.237634] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xbc9060) 00:31:32.932 [2024-10-01 15:49:12.237640] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.932 [2024-10-01 15:49:12.237650] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc36200, cid 5, qid 0 00:31:32.932 [2024-10-01 15:49:12.237727] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.932 [2024-10-01 15:49:12.237733] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.932 [2024-10-01 15:49:12.237736] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.932 [2024-10-01 15:49:12.237740] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc36200) on tqpair=0xbc9060 00:31:32.932 [2024-10-01 15:49:12.237750] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.932 [2024-10-01 15:49:12.237753] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xbc9060) 00:31:32.932 [2024-10-01 15:49:12.237760] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.932 [2024-10-01 15:49:12.237772] 
nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc36200, cid 5, qid 0 00:31:32.932 [2024-10-01 15:49:12.237862] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.932 [2024-10-01 15:49:12.237868] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.932 [2024-10-01 15:49:12.237872] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.932 [2024-10-01 15:49:12.237876] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc36200) on tqpair=0xbc9060 00:31:32.932 [2024-10-01 15:49:12.237900] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.932 [2024-10-01 15:49:12.237905] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xbc9060) 00:31:32.932 [2024-10-01 15:49:12.237912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.932 [2024-10-01 15:49:12.237922] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.932 [2024-10-01 15:49:12.237926] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbc9060) 00:31:32.932 [2024-10-01 15:49:12.237932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.932 [2024-10-01 15:49:12.237939] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.932 [2024-10-01 15:49:12.237943] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xbc9060) 00:31:32.932 [2024-10-01 15:49:12.237949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.932 [2024-10-01 15:49:12.237959] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.932 [2024-10-01 15:49:12.237963] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xbc9060) 00:31:32.932 [2024-10-01 15:49:12.237969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.932 [2024-10-01 15:49:12.237980] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc36200, cid 5, qid 0 00:31:32.932 [2024-10-01 15:49:12.237986] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc36080, cid 4, qid 0 00:31:32.932 [2024-10-01 15:49:12.237991] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc36380, cid 6, qid 0 00:31:32.932 [2024-10-01 15:49:12.237997] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc36500, cid 7, qid 0 00:31:32.932 [2024-10-01 15:49:12.238381] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:32.932 [2024-10-01 15:49:12.238388] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:32.932 [2024-10-01 15:49:12.238391] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:32.932 [2024-10-01 15:49:12.238395] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbc9060): datao=0, datal=8192, cccid=5 00:31:32.932 [2024-10-01 15:49:12.238400] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc36200) on tqpair(0xbc9060): expected_datao=0, payload_size=8192 00:31:32.932 [2024-10-01 15:49:12.238405] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.932 [2024-10-01 15:49:12.238467] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:32.932 [2024-10-01 15:49:12.238472] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:32.932 [2024-10-01 15:49:12.238478] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 
7 00:31:32.933 [2024-10-01 15:49:12.238484] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:32.933 [2024-10-01 15:49:12.238487] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:32.933 [2024-10-01 15:49:12.238491] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbc9060): datao=0, datal=512, cccid=4 00:31:32.933 [2024-10-01 15:49:12.238495] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc36080) on tqpair(0xbc9060): expected_datao=0, payload_size=512 00:31:32.933 [2024-10-01 15:49:12.238500] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.933 [2024-10-01 15:49:12.238520] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:32.933 [2024-10-01 15:49:12.238524] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:32.933 [2024-10-01 15:49:12.238530] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:32.933 [2024-10-01 15:49:12.238535] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:32.933 [2024-10-01 15:49:12.238539] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:32.933 [2024-10-01 15:49:12.238542] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbc9060): datao=0, datal=512, cccid=6 00:31:32.933 [2024-10-01 15:49:12.238547] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc36380) on tqpair(0xbc9060): expected_datao=0, payload_size=512 00:31:32.933 [2024-10-01 15:49:12.238554] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.933 [2024-10-01 15:49:12.238560] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:32.933 [2024-10-01 15:49:12.238564] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:32.933 [2024-10-01 15:49:12.238570] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:32.933 [2024-10-01 15:49:12.238575] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:32.933 [2024-10-01 15:49:12.238579] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:32.933 [2024-10-01 15:49:12.238582] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbc9060): datao=0, datal=4096, cccid=7 00:31:32.933 [2024-10-01 15:49:12.238587] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc36500) on tqpair(0xbc9060): expected_datao=0, payload_size=4096 00:31:32.933 [2024-10-01 15:49:12.238591] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.933 [2024-10-01 15:49:12.238598] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:32.933 [2024-10-01 15:49:12.238603] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:32.933 [2024-10-01 15:49:12.238612] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.933 [2024-10-01 15:49:12.238618] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.933 [2024-10-01 15:49:12.238621] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.933 [2024-10-01 15:49:12.238625] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc36200) on tqpair=0xbc9060 00:31:32.933 [2024-10-01 15:49:12.238638] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.933 [2024-10-01 15:49:12.238644] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.933 [2024-10-01 15:49:12.238647] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.933 [2024-10-01 15:49:12.238651] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc36080) on tqpair=0xbc9060 00:31:32.933 [2024-10-01 15:49:12.238663] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.933 [2024-10-01 15:49:12.238669] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.933 [2024-10-01 15:49:12.238673] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.933 [2024-10-01 15:49:12.238676] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc36380) on tqpair=0xbc9060 00:31:32.933 [2024-10-01 15:49:12.238683] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.933 [2024-10-01 15:49:12.238689] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.933 [2024-10-01 15:49:12.238693] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.933 [2024-10-01 15:49:12.238696] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc36500) on tqpair=0xbc9060 00:31:32.933 ===================================================== 00:31:32.933 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:32.933 ===================================================== 00:31:32.933 Controller Capabilities/Features 00:31:32.933 ================================ 00:31:32.933 Vendor ID: 8086 00:31:32.933 Subsystem Vendor ID: 8086 00:31:32.933 Serial Number: SPDK00000000000001 00:31:32.933 Model Number: SPDK bdev Controller 00:31:32.933 Firmware Version: 25.01 00:31:32.933 Recommended Arb Burst: 6 00:31:32.933 IEEE OUI Identifier: e4 d2 5c 00:31:32.933 Multi-path I/O 00:31:32.933 May have multiple subsystem ports: Yes 00:31:32.933 May have multiple controllers: Yes 00:31:32.933 Associated with SR-IOV VF: No 00:31:32.933 Max Data Transfer Size: 131072 00:31:32.933 Max Number of Namespaces: 32 00:31:32.933 Max Number of I/O Queues: 127 00:31:32.933 NVMe Specification Version (VS): 1.3 00:31:32.933 NVMe Specification Version (Identify): 1.3 00:31:32.933 Maximum Queue Entries: 128 00:31:32.933 Contiguous Queues Required: Yes 00:31:32.933 Arbitration Mechanisms Supported 00:31:32.933 Weighted Round Robin: Not Supported 00:31:32.933 Vendor Specific: Not Supported 00:31:32.933 Reset Timeout: 15000 ms 00:31:32.933 Doorbell Stride: 4 bytes 00:31:32.933 NVM Subsystem Reset: Not 
Supported 00:31:32.933 Command Sets Supported 00:31:32.933 NVM Command Set: Supported 00:31:32.933 Boot Partition: Not Supported 00:31:32.933 Memory Page Size Minimum: 4096 bytes 00:31:32.933 Memory Page Size Maximum: 4096 bytes 00:31:32.933 Persistent Memory Region: Not Supported 00:31:32.933 Optional Asynchronous Events Supported 00:31:32.933 Namespace Attribute Notices: Supported 00:31:32.933 Firmware Activation Notices: Not Supported 00:31:32.933 ANA Change Notices: Not Supported 00:31:32.933 PLE Aggregate Log Change Notices: Not Supported 00:31:32.933 LBA Status Info Alert Notices: Not Supported 00:31:32.933 EGE Aggregate Log Change Notices: Not Supported 00:31:32.933 Normal NVM Subsystem Shutdown event: Not Supported 00:31:32.933 Zone Descriptor Change Notices: Not Supported 00:31:32.933 Discovery Log Change Notices: Not Supported 00:31:32.933 Controller Attributes 00:31:32.933 128-bit Host Identifier: Supported 00:31:32.933 Non-Operational Permissive Mode: Not Supported 00:31:32.933 NVM Sets: Not Supported 00:31:32.933 Read Recovery Levels: Not Supported 00:31:32.933 Endurance Groups: Not Supported 00:31:32.933 Predictable Latency Mode: Not Supported 00:31:32.933 Traffic Based Keep ALive: Not Supported 00:31:32.933 Namespace Granularity: Not Supported 00:31:32.933 SQ Associations: Not Supported 00:31:32.933 UUID List: Not Supported 00:31:32.933 Multi-Domain Subsystem: Not Supported 00:31:32.933 Fixed Capacity Management: Not Supported 00:31:32.933 Variable Capacity Management: Not Supported 00:31:32.933 Delete Endurance Group: Not Supported 00:31:32.933 Delete NVM Set: Not Supported 00:31:32.933 Extended LBA Formats Supported: Not Supported 00:31:32.933 Flexible Data Placement Supported: Not Supported 00:31:32.933 00:31:32.933 Controller Memory Buffer Support 00:31:32.933 ================================ 00:31:32.933 Supported: No 00:31:32.933 00:31:32.933 Persistent Memory Region Support 00:31:32.933 ================================ 00:31:32.933 Supported: 
No 00:31:32.933 00:31:32.933 Admin Command Set Attributes 00:31:32.933 ============================ 00:31:32.933 Security Send/Receive: Not Supported 00:31:32.933 Format NVM: Not Supported 00:31:32.933 Firmware Activate/Download: Not Supported 00:31:32.933 Namespace Management: Not Supported 00:31:32.933 Device Self-Test: Not Supported 00:31:32.933 Directives: Not Supported 00:31:32.933 NVMe-MI: Not Supported 00:31:32.933 Virtualization Management: Not Supported 00:31:32.933 Doorbell Buffer Config: Not Supported 00:31:32.933 Get LBA Status Capability: Not Supported 00:31:32.933 Command & Feature Lockdown Capability: Not Supported 00:31:32.934 Abort Command Limit: 4 00:31:32.934 Async Event Request Limit: 4 00:31:32.934 Number of Firmware Slots: N/A 00:31:32.934 Firmware Slot 1 Read-Only: N/A 00:31:32.934 Firmware Activation Without Reset: N/A 00:31:32.934 Multiple Update Detection Support: N/A 00:31:32.934 Firmware Update Granularity: No Information Provided 00:31:32.934 Per-Namespace SMART Log: No 00:31:32.934 Asymmetric Namespace Access Log Page: Not Supported 00:31:32.934 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:31:32.934 Command Effects Log Page: Supported 00:31:32.934 Get Log Page Extended Data: Supported 00:31:32.934 Telemetry Log Pages: Not Supported 00:31:32.934 Persistent Event Log Pages: Not Supported 00:31:32.934 Supported Log Pages Log Page: May Support 00:31:32.934 Commands Supported & Effects Log Page: Not Supported 00:31:32.934 Feature Identifiers & Effects Log Page:May Support 00:31:32.934 NVMe-MI Commands & Effects Log Page: May Support 00:31:32.934 Data Area 4 for Telemetry Log: Not Supported 00:31:32.934 Error Log Page Entries Supported: 128 00:31:32.934 Keep Alive: Supported 00:31:32.934 Keep Alive Granularity: 10000 ms 00:31:32.934 00:31:32.934 NVM Command Set Attributes 00:31:32.934 ========================== 00:31:32.934 Submission Queue Entry Size 00:31:32.934 Max: 64 00:31:32.934 Min: 64 00:31:32.934 Completion Queue Entry Size 
00:31:32.934 Max: 16 00:31:32.934 Min: 16 00:31:32.934 Number of Namespaces: 32 00:31:32.934 Compare Command: Supported 00:31:32.934 Write Uncorrectable Command: Not Supported 00:31:32.934 Dataset Management Command: Supported 00:31:32.934 Write Zeroes Command: Supported 00:31:32.934 Set Features Save Field: Not Supported 00:31:32.934 Reservations: Supported 00:31:32.934 Timestamp: Not Supported 00:31:32.934 Copy: Supported 00:31:32.934 Volatile Write Cache: Present 00:31:32.934 Atomic Write Unit (Normal): 1 00:31:32.934 Atomic Write Unit (PFail): 1 00:31:32.934 Atomic Compare & Write Unit: 1 00:31:32.934 Fused Compare & Write: Supported 00:31:32.934 Scatter-Gather List 00:31:32.934 SGL Command Set: Supported 00:31:32.934 SGL Keyed: Supported 00:31:32.934 SGL Bit Bucket Descriptor: Not Supported 00:31:32.934 SGL Metadata Pointer: Not Supported 00:31:32.934 Oversized SGL: Not Supported 00:31:32.934 SGL Metadata Address: Not Supported 00:31:32.934 SGL Offset: Supported 00:31:32.934 Transport SGL Data Block: Not Supported 00:31:32.934 Replay Protected Memory Block: Not Supported 00:31:32.934 00:31:32.934 Firmware Slot Information 00:31:32.934 ========================= 00:31:32.934 Active slot: 1 00:31:32.934 Slot 1 Firmware Revision: 25.01 00:31:32.934 00:31:32.934 00:31:32.934 Commands Supported and Effects 00:31:32.934 ============================== 00:31:32.934 Admin Commands 00:31:32.934 -------------- 00:31:32.934 Get Log Page (02h): Supported 00:31:32.934 Identify (06h): Supported 00:31:32.934 Abort (08h): Supported 00:31:32.934 Set Features (09h): Supported 00:31:32.934 Get Features (0Ah): Supported 00:31:32.934 Asynchronous Event Request (0Ch): Supported 00:31:32.934 Keep Alive (18h): Supported 00:31:32.934 I/O Commands 00:31:32.934 ------------ 00:31:32.934 Flush (00h): Supported LBA-Change 00:31:32.934 Write (01h): Supported LBA-Change 00:31:32.934 Read (02h): Supported 00:31:32.934 Compare (05h): Supported 00:31:32.934 Write Zeroes (08h): Supported 
LBA-Change 00:31:32.934 Dataset Management (09h): Supported LBA-Change 00:31:32.934 Copy (19h): Supported LBA-Change 00:31:32.934 00:31:32.934 Error Log 00:31:32.934 ========= 00:31:32.934 00:31:32.934 Arbitration 00:31:32.934 =========== 00:31:32.934 Arbitration Burst: 1 00:31:32.934 00:31:32.934 Power Management 00:31:32.934 ================ 00:31:32.934 Number of Power States: 1 00:31:32.934 Current Power State: Power State #0 00:31:32.934 Power State #0: 00:31:32.934 Max Power: 0.00 W 00:31:32.934 Non-Operational State: Operational 00:31:32.934 Entry Latency: Not Reported 00:31:32.934 Exit Latency: Not Reported 00:31:32.934 Relative Read Throughput: 0 00:31:32.934 Relative Read Latency: 0 00:31:32.934 Relative Write Throughput: 0 00:31:32.934 Relative Write Latency: 0 00:31:32.934 Idle Power: Not Reported 00:31:32.934 Active Power: Not Reported 00:31:32.934 Non-Operational Permissive Mode: Not Supported 00:31:32.934 00:31:32.934 Health Information 00:31:32.934 ================== 00:31:32.934 Critical Warnings: 00:31:32.934 Available Spare Space: OK 00:31:32.934 Temperature: OK 00:31:32.934 Device Reliability: OK 00:31:32.934 Read Only: No 00:31:32.934 Volatile Memory Backup: OK 00:31:32.934 Current Temperature: 0 Kelvin (-273 Celsius) 00:31:32.934 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:31:32.934 Available Spare: 0% 00:31:32.934 Available Spare Threshold: 0% 00:31:32.934 Life Percentage Used:[2024-10-01 15:49:12.238800] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.934 [2024-10-01 15:49:12.238806] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xbc9060) 00:31:32.934 [2024-10-01 15:49:12.238812] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.934 [2024-10-01 15:49:12.238824] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc36500, cid 7, qid 0 00:31:32.934 
[2024-10-01 15:49:12.238922] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.934 [2024-10-01 15:49:12.238930] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.934 [2024-10-01 15:49:12.238933] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.934 [2024-10-01 15:49:12.238937] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc36500) on tqpair=0xbc9060 00:31:32.934 [2024-10-01 15:49:12.238972] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:31:32.934 [2024-10-01 15:49:12.238982] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc35a80) on tqpair=0xbc9060 00:31:32.934 [2024-10-01 15:49:12.238989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.934 [2024-10-01 15:49:12.238997] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc35c00) on tqpair=0xbc9060 00:31:32.934 [2024-10-01 15:49:12.239002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.934 [2024-10-01 15:49:12.239007] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc35d80) on tqpair=0xbc9060 00:31:32.935 [2024-10-01 15:49:12.239011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.935 [2024-10-01 15:49:12.239016] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc35f00) on tqpair=0xbc9060 00:31:32.935 [2024-10-01 15:49:12.239021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.935 [2024-10-01 15:49:12.239030] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.935 [2024-10-01 15:49:12.239034] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.935 [2024-10-01 15:49:12.239037] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbc9060) 00:31:32.935 [2024-10-01 15:49:12.239045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.935 [2024-10-01 15:49:12.239058] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc35f00, cid 3, qid 0 00:31:32.935 [2024-10-01 15:49:12.239285] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.935 [2024-10-01 15:49:12.239291] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.935 [2024-10-01 15:49:12.239294] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.935 [2024-10-01 15:49:12.239298] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc35f00) on tqpair=0xbc9060 00:31:32.935 [2024-10-01 15:49:12.239306] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.935 [2024-10-01 15:49:12.239309] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.935 [2024-10-01 15:49:12.239313] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbc9060) 00:31:32.935 [2024-10-01 15:49:12.239320] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.935 [2024-10-01 15:49:12.239334] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc35f00, cid 3, qid 0 00:31:32.935 [2024-10-01 15:49:12.239541] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.935 [2024-10-01 15:49:12.239548] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.935 [2024-10-01 15:49:12.239551] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.935 [2024-10-01 15:49:12.239555] 
nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc35f00) on tqpair=0xbc9060 00:31:32.935 [2024-10-01 15:49:12.239559] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:31:32.935 [2024-10-01 15:49:12.239564] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:31:32.935 [2024-10-01 15:49:12.239575] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.935 [2024-10-01 15:49:12.239578] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.935 [2024-10-01 15:49:12.239582] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbc9060) 00:31:32.935 [2024-10-01 15:49:12.239589] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.935 [2024-10-01 15:49:12.239599] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc35f00, cid 3, qid 0 00:31:32.935 [2024-10-01 15:49:12.239680] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.935 [2024-10-01 15:49:12.239686] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.935 [2024-10-01 15:49:12.239690] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.935 [2024-10-01 15:49:12.239696] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc35f00) on tqpair=0xbc9060 00:31:32.935 [2024-10-01 15:49:12.239706] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.935 [2024-10-01 15:49:12.239710] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.935 [2024-10-01 15:49:12.239714] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbc9060) 00:31:32.935 [2024-10-01 15:49:12.239721] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.935 [2024-10-01 15:49:12.239731] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc35f00, cid 3, qid 0 00:31:32.935 [2024-10-01 15:49:12.239829] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.935 [2024-10-01 15:49:12.239835] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.935 [2024-10-01 15:49:12.239839] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.935 [2024-10-01 15:49:12.239843] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc35f00) on tqpair=0xbc9060 00:31:32.935 [2024-10-01 15:49:12.239853] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.935 [2024-10-01 15:49:12.239857] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.935 [2024-10-01 15:49:12.239860] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbc9060) 00:31:32.935 [2024-10-01 15:49:12.239867] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.935 [2024-10-01 15:49:12.239877] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc35f00, cid 3, qid 0 00:31:32.935 [2024-10-01 15:49:12.239952] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.935 [2024-10-01 15:49:12.239959] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.935 [2024-10-01 15:49:12.239962] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.935 [2024-10-01 15:49:12.239966] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc35f00) on tqpair=0xbc9060 00:31:32.935 [2024-10-01 15:49:12.239976] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.935 [2024-10-01 15:49:12.239980] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.935 [2024-10-01 15:49:12.239983] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbc9060) 00:31:32.935 [2024-10-01 15:49:12.239990] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.935 [2024-10-01 15:49:12.240001] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc35f00, cid 3, qid 0 00:31:32.935 [2024-10-01 15:49:12.240230] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.935 [2024-10-01 15:49:12.240237] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.935 [2024-10-01 15:49:12.240240] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.935 [2024-10-01 15:49:12.240244] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc35f00) on tqpair=0xbc9060 00:31:32.935 [2024-10-01 15:49:12.240254] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.935 [2024-10-01 15:49:12.240258] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.935 [2024-10-01 15:49:12.240261] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbc9060) 00:31:32.935 [2024-10-01 15:49:12.240268] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.935 [2024-10-01 15:49:12.240278] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc35f00, cid 3, qid 0 00:31:32.935 [2024-10-01 15:49:12.240454] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.935 [2024-10-01 15:49:12.240460] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.935 [2024-10-01 15:49:12.240464] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.935 [2024-10-01 15:49:12.240468] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc35f00) on tqpair=0xbc9060 00:31:32.935 [2024-10-01 15:49:12.240480] 
nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.935 [2024-10-01 15:49:12.240484] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.935 [2024-10-01 15:49:12.240488] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbc9060) 00:31:32.935 [2024-10-01 15:49:12.240495] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.935 [2024-10-01 15:49:12.240505] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc35f00, cid 3, qid 0 00:31:32.935 [2024-10-01 15:49:12.240580] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.935 [2024-10-01 15:49:12.240586] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.935 [2024-10-01 15:49:12.240590] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.935 [2024-10-01 15:49:12.240593] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc35f00) on tqpair=0xbc9060 00:31:32.936 [2024-10-01 15:49:12.240603] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.936 [2024-10-01 15:49:12.240607] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.936 [2024-10-01 15:49:12.240611] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbc9060) 00:31:32.936 [2024-10-01 15:49:12.240617] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.936 [2024-10-01 15:49:12.240627] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc35f00, cid 3, qid 0 00:31:32.936 [2024-10-01 15:49:12.240694] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.936 [2024-10-01 15:49:12.240702] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.936 [2024-10-01 15:49:12.240705] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.936 [2024-10-01 15:49:12.240709] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc35f00) on tqpair=0xbc9060 00:31:32.936 [2024-10-01 15:49:12.240719] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.936 [2024-10-01 15:49:12.240723] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.936 [2024-10-01 15:49:12.240727] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbc9060) 00:31:32.936 [2024-10-01 15:49:12.240734] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.936 [2024-10-01 15:49:12.240744] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc35f00, cid 3, qid 0 00:31:32.936 [2024-10-01 15:49:12.240804] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.936 [2024-10-01 15:49:12.240810] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.936 [2024-10-01 15:49:12.240813] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.936 [2024-10-01 15:49:12.240817] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc35f00) on tqpair=0xbc9060 00:31:32.936 [2024-10-01 15:49:12.240827] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.936 [2024-10-01 15:49:12.240830] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.936 [2024-10-01 15:49:12.240834] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbc9060) 00:31:32.936 [2024-10-01 15:49:12.240841] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.936 [2024-10-01 15:49:12.240851] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc35f00, cid 3, qid 0 00:31:32.936 [2024-10-01 
15:49:12.244905] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.936 [2024-10-01 15:49:12.244914] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.936 [2024-10-01 15:49:12.244918] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.936 [2024-10-01 15:49:12.244922] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc35f00) on tqpair=0xbc9060 00:31:32.936 [2024-10-01 15:49:12.244932] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.936 [2024-10-01 15:49:12.244936] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.936 [2024-10-01 15:49:12.244943] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbc9060) 00:31:32.936 [2024-10-01 15:49:12.244950] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.936 [2024-10-01 15:49:12.244962] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc35f00, cid 3, qid 0 00:31:32.936 [2024-10-01 15:49:12.245236] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.936 [2024-10-01 15:49:12.245242] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.936 [2024-10-01 15:49:12.245246] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.936 [2024-10-01 15:49:12.245250] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc35f00) on tqpair=0xbc9060 00:31:32.936 [2024-10-01 15:49:12.245258] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:31:32.936 0% 00:31:32.936 Data Units Read: 0 00:31:32.936 Data Units Written: 0 00:31:32.936 Host Read Commands: 0 00:31:32.936 Host Write Commands: 0 00:31:32.936 Controller Busy Time: 0 minutes 00:31:32.936 Power Cycles: 0 00:31:32.936 Power On Hours: 0 hours 00:31:32.936 Unsafe 
Shutdowns: 0 00:31:32.936 Unrecoverable Media Errors: 0 00:31:32.936 Lifetime Error Log Entries: 0 00:31:32.936 Warning Temperature Time: 0 minutes 00:31:32.936 Critical Temperature Time: 0 minutes 00:31:32.936 00:31:32.936 Number of Queues 00:31:32.936 ================ 00:31:32.936 Number of I/O Submission Queues: 127 00:31:32.936 Number of I/O Completion Queues: 127 00:31:32.936 00:31:32.936 Active Namespaces 00:31:32.936 ================= 00:31:32.936 Namespace ID:1 00:31:32.936 Error Recovery Timeout: Unlimited 00:31:32.936 Command Set Identifier: NVM (00h) 00:31:32.936 Deallocate: Supported 00:31:32.936 Deallocated/Unwritten Error: Not Supported 00:31:32.936 Deallocated Read Value: Unknown 00:31:32.936 Deallocate in Write Zeroes: Not Supported 00:31:32.936 Deallocated Guard Field: 0xFFFF 00:31:32.936 Flush: Supported 00:31:32.936 Reservation: Supported 00:31:32.936 Namespace Sharing Capabilities: Multiple Controllers 00:31:32.936 Size (in LBAs): 131072 (0GiB) 00:31:32.936 Capacity (in LBAs): 131072 (0GiB) 00:31:32.936 Utilization (in LBAs): 131072 (0GiB) 00:31:32.936 NGUID: ABCDEF0123456789ABCDEF0123456789 00:31:32.936 EUI64: ABCDEF0123456789 00:31:32.936 UUID: d669c4cc-0e8a-41fa-a505-ea2628e47058 00:31:32.936 Thin Provisioning: Not Supported 00:31:32.936 Per-NS Atomic Units: Yes 00:31:32.936 Atomic Boundary Size (Normal): 0 00:31:32.936 Atomic Boundary Size (PFail): 0 00:31:32.936 Atomic Boundary Offset: 0 00:31:32.936 Maximum Single Source Range Length: 65535 00:31:32.936 Maximum Copy Length: 65535 00:31:32.936 Maximum Source Range Count: 1 00:31:32.936 NGUID/EUI64 Never Reused: No 00:31:32.936 Namespace Write Protected: No 00:31:32.936 Number of LBA Formats: 1 00:31:32.936 Current LBA Format: LBA Format #00 00:31:32.936 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:32.936 00:31:32.936 15:49:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:31:32.936 15:49:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:32.936 15:49:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.936 15:49:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:32.936 15:49:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.936 15:49:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:31:32.936 15:49:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:31:32.936 15:49:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@512 -- # nvmfcleanup 00:31:32.936 15:49:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:31:32.936 15:49:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:32.936 15:49:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:31:32.936 15:49:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:32.936 15:49:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:32.936 rmmod nvme_tcp 00:31:32.936 rmmod nvme_fabrics 00:31:32.936 rmmod nvme_keyring 00:31:32.936 15:49:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:32.936 15:49:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:31:32.936 15:49:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:31:32.936 15:49:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@513 -- # '[' -n 3299720 ']' 00:31:32.936 15:49:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # killprocess 3299720 00:31:32.936 15:49:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 3299720 ']' 00:31:32.936 15:49:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 3299720 00:31:32.936 15:49:12 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:31:32.936 15:49:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:32.936 15:49:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3299720 00:31:33.199 15:49:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:33.199 15:49:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:33.199 15:49:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3299720' 00:31:33.199 killing process with pid 3299720 00:31:33.199 15:49:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 3299720 00:31:33.199 15:49:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 3299720 00:31:33.199 15:49:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:31:33.199 15:49:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:31:33.199 15:49:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:31:33.199 15:49:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:31:33.199 15:49:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-save 00:31:33.199 15:49:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:31:33.199 15:49:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-restore 00:31:33.199 15:49:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:33.199 15:49:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:33.199 15:49:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:33.199 
15:49:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:33.199 15:49:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:35.746 15:49:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:35.746 00:31:35.746 real 0m11.198s 00:31:35.746 user 0m6.140s 00:31:35.746 sys 0m6.188s 00:31:35.746 15:49:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:35.746 15:49:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:35.746 ************************************ 00:31:35.746 END TEST nvmf_identify 00:31:35.746 ************************************ 00:31:35.746 15:49:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:31:35.746 15:49:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:35.746 15:49:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:35.746 15:49:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.746 ************************************ 00:31:35.746 START TEST nvmf_perf 00:31:35.746 ************************************ 00:31:35.746 15:49:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:31:35.746 * Looking for test storage... 
00:31:35.746 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:35.746 15:49:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:35.746 15:49:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lcov --version 00:31:35.746 15:49:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:35.746 15:49:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:35.746 15:49:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:35.746 15:49:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:35.746 15:49:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:35.746 15:49:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:31:35.746 15:49:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:31:35.746 15:49:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:31:35.746 15:49:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:31:35.746 15:49:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:31:35.746 15:49:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:31:35.746 15:49:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:31:35.746 15:49:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:35.746 15:49:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:31:35.746 15:49:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:31:35.746 15:49:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:35.746 15:49:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:35.746 15:49:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:31:35.746 15:49:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:31:35.746 15:49:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:35.746 15:49:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:31:35.746 15:49:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:31:35.747 15:49:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:31:35.747 15:49:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:31:35.747 15:49:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:35.747 15:49:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:31:35.747 15:49:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:31:35.747 15:49:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:35.747 15:49:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:35.747 15:49:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:31:35.747 15:49:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:35.747 15:49:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:35.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:35.747 --rc genhtml_branch_coverage=1 00:31:35.747 --rc genhtml_function_coverage=1 00:31:35.747 --rc genhtml_legend=1 00:31:35.747 --rc geninfo_all_blocks=1 00:31:35.747 --rc geninfo_unexecuted_blocks=1 00:31:35.747 00:31:35.747 ' 00:31:35.747 15:49:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:35.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:31:35.747 --rc genhtml_branch_coverage=1 00:31:35.747 --rc genhtml_function_coverage=1 00:31:35.747 --rc genhtml_legend=1 00:31:35.747 --rc geninfo_all_blocks=1 00:31:35.747 --rc geninfo_unexecuted_blocks=1 00:31:35.747 00:31:35.747 ' 00:31:35.747 15:49:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:35.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:35.747 --rc genhtml_branch_coverage=1 00:31:35.747 --rc genhtml_function_coverage=1 00:31:35.747 --rc genhtml_legend=1 00:31:35.747 --rc geninfo_all_blocks=1 00:31:35.747 --rc geninfo_unexecuted_blocks=1 00:31:35.747 00:31:35.747 ' 00:31:35.747 15:49:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:35.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:35.747 --rc genhtml_branch_coverage=1 00:31:35.747 --rc genhtml_function_coverage=1 00:31:35.747 --rc genhtml_legend=1 00:31:35.747 --rc geninfo_all_blocks=1 00:31:35.747 --rc geninfo_unexecuted_blocks=1 00:31:35.747 00:31:35.747 ' 00:31:35.747 15:49:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:35.747 15:49:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:31:35.747 15:49:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:35.747 15:49:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:35.747 15:49:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:35.747 15:49:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:35.747 15:49:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:35.747 15:49:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:35.747 15:49:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:31:35.747 15:49:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:35.747 15:49:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:35.747 15:49:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:35.747 15:49:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:35.747 15:49:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:35.747 15:49:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:35.747 15:49:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:35.747 15:49:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:35.747 15:49:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:35.747 15:49:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:35.747 15:49:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:31:35.747 15:49:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:35.747 15:49:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:35.747 15:49:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:35.747 15:49:15 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.747 15:49:15 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.747 15:49:15 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.747 15:49:15 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:31:35.747 15:49:15 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.747 15:49:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:31:35.747 15:49:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:35.747 15:49:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:35.747 15:49:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:35.747 15:49:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:35.747 15:49:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:35.747 15:49:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:35.747 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:35.747 15:49:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:35.747 15:49:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:35.747 15:49:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:35.747 15:49:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:35.747 15:49:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:35.747 15:49:15 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:35.747 15:49:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:31:35.747 15:49:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:31:35.747 15:49:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:35.747 15:49:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # prepare_net_devs 00:31:35.747 15:49:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:31:35.747 15:49:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:31:35.747 15:49:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:35.747 15:49:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:35.747 15:49:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:35.747 15:49:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:31:35.747 15:49:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:31:35.747 15:49:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:31:35.747 15:49:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:43.891 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:43.891 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:31:43.891 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:43.891 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:43.891 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:43.891 15:49:22 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:43.891 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:43.891 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:31:43.891 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:43.891 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:31:43.891 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:31:43.891 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:31:43.891 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:31:43.891 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:31:43.891 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:31:43.891 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:43.891 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:43.891 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:43.891 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:43.891 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:43.891 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:43.891 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:43.891 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:43.891 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:43.891 
15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:43.891 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:43.891 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:31:43.891 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:31:43.891 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:31:43.891 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:31:43.891 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:31:43.891 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:31:43.891 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:31:43.891 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:43.891 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:43.891 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:31:43.891 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:31:43.891 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:43.891 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:43.891 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:31:43.891 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:43.892 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:31:43.892 
15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:43.892 Found net devices under 0000:31:00.0: cvl_0_0 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:43.892 15:49:22 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:43.892 Found net devices under 0000:31:00.1: cvl_0_1 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # is_hw=yes 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:31:43.892 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:43.892 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.673 ms 00:31:43.892 00:31:43.892 --- 10.0.0.2 ping statistics --- 00:31:43.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:43.892 rtt min/avg/max/mdev = 0.673/0.673/0.673/0.000 ms 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:43.892 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:43.892 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:31:43.892 00:31:43.892 --- 10.0.0.1 ping statistics --- 00:31:43.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:43.892 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # return 0 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # 
xtrace_disable 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # nvmfpid=3304136 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # waitforlisten 3304136 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 3304136 ']' 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:43.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:43.892 15:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:43.892 [2024-10-01 15:49:22.900426] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:31:43.892 [2024-10-01 15:49:22.900494] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:43.892 [2024-10-01 15:49:22.943101] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:31:43.892 [2024-10-01 15:49:22.993758] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:43.892 [2024-10-01 15:49:23.061621] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:43.892 [2024-10-01 15:49:23.061697] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:43.892 [2024-10-01 15:49:23.061719] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:43.892 [2024-10-01 15:49:23.061730] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:43.892 [2024-10-01 15:49:23.061740] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:43.892 [2024-10-01 15:49:23.061940] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:31:43.892 [2024-10-01 15:49:23.062052] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:31:43.892 [2024-10-01 15:49:23.062213] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:31:43.892 [2024-10-01 15:49:23.062216] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:44.465 15:49:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:44.465 15:49:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:31:44.465 15:49:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:31:44.465 15:49:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:44.465 15:49:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:44.465 15:49:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:44.465 15:49:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:44.465 15:49:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:31:45.037 15:49:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:31:45.037 15:49:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:31:45.297 15:49:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:31:45.297 15:49:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:45.297 15:49:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:31:45.297 15:49:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:31:45.297 15:49:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:31:45.297 15:49:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:31:45.297 15:49:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:31:45.557 [2024-10-01 15:49:24.832615] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:45.557 15:49:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:45.817 15:49:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:31:45.817 15:49:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:45.817 15:49:25 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:31:45.817 15:49:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:31:46.079 15:49:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:46.339 [2024-10-01 15:49:25.559162] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:46.339 15:49:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:46.339 15:49:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:31:46.339 15:49:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:31:46.339 15:49:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:31:46.339 15:49:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:31:47.723 Initializing NVMe Controllers 00:31:47.723 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:31:47.723 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:31:47.723 Initialization complete. Launching workers. 
00:31:47.723 ======================================================== 00:31:47.723 Latency(us) 00:31:47.723 Device Information : IOPS MiB/s Average min max 00:31:47.723 PCIE (0000:65:00.0) NSID 1 from core 0: 107687.19 420.65 296.79 11.39 4870.88 00:31:47.723 ======================================================== 00:31:47.723 Total : 107687.19 420.65 296.79 11.39 4870.88 00:31:47.723 00:31:47.723 15:49:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:49.105 Initializing NVMe Controllers 00:31:49.105 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:49.105 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:49.105 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:49.105 Initialization complete. Launching workers. 
00:31:49.105 ======================================================== 00:31:49.105 Latency(us) 00:31:49.105 Device Information : IOPS MiB/s Average min max 00:31:49.105 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 58.96 0.23 17134.43 248.11 45956.18 00:31:49.105 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 60.96 0.24 17188.61 7060.10 48887.03 00:31:49.105 ======================================================== 00:31:49.105 Total : 119.92 0.47 17161.97 248.11 48887.03 00:31:49.105 00:31:49.105 15:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:50.488 Initializing NVMe Controllers 00:31:50.488 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:50.488 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:50.488 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:50.488 Initialization complete. Launching workers. 
00:31:50.488 ======================================================== 00:31:50.488 Latency(us) 00:31:50.488 Device Information : IOPS MiB/s Average min max 00:31:50.488 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11911.00 46.53 2689.74 418.51 6191.05 00:31:50.488 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3826.00 14.95 8419.38 6903.62 15886.98 00:31:50.488 ======================================================== 00:31:50.488 Total : 15737.00 61.47 4082.74 418.51 15886.98 00:31:50.488 00:31:50.488 15:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:31:50.488 15:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:31:50.488 15:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:53.031 Initializing NVMe Controllers 00:31:53.031 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:53.031 Controller IO queue size 128, less than required. 00:31:53.031 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:53.031 Controller IO queue size 128, less than required. 00:31:53.031 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:53.031 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:53.031 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:53.031 Initialization complete. Launching workers. 
00:31:53.031 ======================================================== 00:31:53.031 Latency(us) 00:31:53.031 Device Information : IOPS MiB/s Average min max 00:31:53.031 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1816.32 454.08 71151.89 40508.90 125030.65 00:31:53.031 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 616.92 154.23 216016.23 56980.12 308865.13 00:31:53.031 ======================================================== 00:31:53.031 Total : 2433.24 608.31 107880.57 40508.90 308865.13 00:31:53.031 00:31:53.031 15:49:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:31:53.031 No valid NVMe controllers or AIO or URING devices found 00:31:53.031 Initializing NVMe Controllers 00:31:53.031 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:53.031 Controller IO queue size 128, less than required. 00:31:53.031 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:53.031 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:31:53.031 Controller IO queue size 128, less than required. 00:31:53.031 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:53.031 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:31:53.031 WARNING: Some requested NVMe devices were skipped 00:31:53.031 15:49:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:31:55.576 Initializing NVMe Controllers 00:31:55.576 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:55.576 Controller IO queue size 128, less than required. 00:31:55.576 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:55.577 Controller IO queue size 128, less than required. 00:31:55.577 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:55.577 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:55.577 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:55.577 Initialization complete. Launching workers. 
00:31:55.577 00:31:55.577 ==================== 00:31:55.577 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:31:55.577 TCP transport: 00:31:55.577 polls: 31625 00:31:55.577 idle_polls: 15163 00:31:55.577 sock_completions: 16462 00:31:55.577 nvme_completions: 8911 00:31:55.577 submitted_requests: 13436 00:31:55.577 queued_requests: 1 00:31:55.577 00:31:55.577 ==================== 00:31:55.577 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:31:55.577 TCP transport: 00:31:55.577 polls: 37374 00:31:55.577 idle_polls: 25712 00:31:55.577 sock_completions: 11662 00:31:55.577 nvme_completions: 6247 00:31:55.577 submitted_requests: 9400 00:31:55.577 queued_requests: 1 00:31:55.577 ======================================================== 00:31:55.577 Latency(us) 00:31:55.577 Device Information : IOPS MiB/s Average min max 00:31:55.577 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2224.92 556.23 58335.98 32947.60 101613.01 00:31:55.577 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1559.69 389.92 82736.78 29926.63 127734.98 00:31:55.577 ======================================================== 00:31:55.577 Total : 3784.61 946.15 68391.89 29926.63 127734.98 00:31:55.577 00:31:55.577 15:49:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:31:55.577 15:49:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:55.838 15:49:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:31:55.838 15:49:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:65:00.0 ']' 00:31:55.838 15:49:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:31:57.224 15:49:36 nvmf_tcp.nvmf_host.nvmf_perf 
-- host/perf.sh@72 -- # ls_guid=9e725fd4-e6e5-4055-a87b-433aa22fa7f2 00:31:57.224 15:49:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 9e725fd4-e6e5-4055-a87b-433aa22fa7f2 00:31:57.224 15:49:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=9e725fd4-e6e5-4055-a87b-433aa22fa7f2 00:31:57.224 15:49:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:31:57.224 15:49:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:31:57.224 15:49:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:31:57.224 15:49:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:57.224 15:49:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:31:57.224 { 00:31:57.224 "uuid": "9e725fd4-e6e5-4055-a87b-433aa22fa7f2", 00:31:57.224 "name": "lvs_0", 00:31:57.224 "base_bdev": "Nvme0n1", 00:31:57.224 "total_data_clusters": 457407, 00:31:57.224 "free_clusters": 457407, 00:31:57.224 "block_size": 512, 00:31:57.224 "cluster_size": 4194304 00:31:57.224 } 00:31:57.224 ]' 00:31:57.224 15:49:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="9e725fd4-e6e5-4055-a87b-433aa22fa7f2") .free_clusters' 00:31:57.224 15:49:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=457407 00:31:57.224 15:49:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="9e725fd4-e6e5-4055-a87b-433aa22fa7f2") .cluster_size' 00:31:57.224 15:49:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:31:57.224 15:49:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=1829628 00:31:57.224 15:49:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 1829628 
00:31:57.224 1829628 00:31:57.224 15:49:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 1829628 -gt 20480 ']' 00:31:57.224 15:49:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:31:57.224 15:49:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9e725fd4-e6e5-4055-a87b-433aa22fa7f2 lbd_0 20480 00:31:57.485 15:49:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=13b18c0a-048d-4123-af74-acb45de66e7d 00:31:57.485 15:49:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 13b18c0a-048d-4123-af74-acb45de66e7d lvs_n_0 00:31:59.399 15:49:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=d4de7ba1-2c4b-4cd1-b0a5-27cea4e9502a 00:31:59.399 15:49:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb d4de7ba1-2c4b-4cd1-b0a5-27cea4e9502a 00:31:59.399 15:49:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=d4de7ba1-2c4b-4cd1-b0a5-27cea4e9502a 00:31:59.399 15:49:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:31:59.399 15:49:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:31:59.399 15:49:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:31:59.399 15:49:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:59.399 15:49:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:31:59.399 { 00:31:59.399 "uuid": "9e725fd4-e6e5-4055-a87b-433aa22fa7f2", 00:31:59.399 "name": "lvs_0", 00:31:59.399 "base_bdev": "Nvme0n1", 00:31:59.399 "total_data_clusters": 457407, 00:31:59.399 "free_clusters": 452287, 00:31:59.399 "block_size": 512, 00:31:59.399 
"cluster_size": 4194304 00:31:59.399 }, 00:31:59.399 { 00:31:59.399 "uuid": "d4de7ba1-2c4b-4cd1-b0a5-27cea4e9502a", 00:31:59.400 "name": "lvs_n_0", 00:31:59.400 "base_bdev": "13b18c0a-048d-4123-af74-acb45de66e7d", 00:31:59.400 "total_data_clusters": 5114, 00:31:59.400 "free_clusters": 5114, 00:31:59.400 "block_size": 512, 00:31:59.400 "cluster_size": 4194304 00:31:59.400 } 00:31:59.400 ]' 00:31:59.400 15:49:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="d4de7ba1-2c4b-4cd1-b0a5-27cea4e9502a") .free_clusters' 00:31:59.400 15:49:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:31:59.400 15:49:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="d4de7ba1-2c4b-4cd1-b0a5-27cea4e9502a") .cluster_size' 00:31:59.400 15:49:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:31:59.400 15:49:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:31:59.400 15:49:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 20456 00:31:59.400 20456 00:31:59.400 15:49:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:31:59.400 15:49:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d4de7ba1-2c4b-4cd1-b0a5-27cea4e9502a lbd_nest_0 20456 00:31:59.400 15:49:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=c12474ab-0143-40d2-be63-96bd22785226 00:31:59.400 15:49:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:59.662 15:49:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:31:59.662 15:49:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 c12474ab-0143-40d2-be63-96bd22785226 00:31:59.925 15:49:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:59.925 15:49:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:31:59.925 15:49:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:31:59.925 15:49:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:59.925 15:49:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:59.925 15:49:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:12.165 Initializing NVMe Controllers 00:32:12.165 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:12.165 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:12.165 Initialization complete. Launching workers. 
00:32:12.165 ======================================================== 00:32:12.165 Latency(us) 00:32:12.165 Device Information : IOPS MiB/s Average min max 00:32:12.165 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 44.20 0.02 22690.86 239.91 47888.91 00:32:12.165 ======================================================== 00:32:12.165 Total : 44.20 0.02 22690.86 239.91 47888.91 00:32:12.165 00:32:12.165 15:49:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:12.165 15:49:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:22.269 Initializing NVMe Controllers 00:32:22.269 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:22.269 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:22.269 Initialization complete. Launching workers. 
00:32:22.269 ======================================================== 00:32:22.269 Latency(us) 00:32:22.269 Device Information : IOPS MiB/s Average min max 00:32:22.269 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 63.00 7.88 15944.78 4987.79 55869.38 00:32:22.269 ======================================================== 00:32:22.269 Total : 63.00 7.88 15944.78 4987.79 55869.38 00:32:22.269 00:32:22.269 15:49:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:32:22.269 15:49:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:22.269 15:49:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:32.264 Initializing NVMe Controllers 00:32:32.264 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:32.264 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:32.264 Initialization complete. Launching workers. 
00:32:32.264 ======================================================== 00:32:32.264 Latency(us) 00:32:32.264 Device Information : IOPS MiB/s Average min max 00:32:32.264 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8871.80 4.33 3607.07 303.40 9369.54 00:32:32.264 ======================================================== 00:32:32.264 Total : 8871.80 4.33 3607.07 303.40 9369.54 00:32:32.264 00:32:32.264 15:50:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:32.264 15:50:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:42.259 Initializing NVMe Controllers 00:32:42.259 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:42.259 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:42.259 Initialization complete. Launching workers. 
00:32:42.259 ======================================================== 00:32:42.259 Latency(us) 00:32:42.259 Device Information : IOPS MiB/s Average min max 00:32:42.259 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4244.90 530.61 7542.85 419.15 17814.17 00:32:42.259 ======================================================== 00:32:42.259 Total : 4244.90 530.61 7542.85 419.15 17814.17 00:32:42.259 00:32:42.259 15:50:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:32:42.259 15:50:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:42.259 15:50:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:52.251 Initializing NVMe Controllers 00:32:52.251 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:52.251 Controller IO queue size 128, less than required. 00:32:52.251 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:52.251 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:52.251 Initialization complete. Launching workers. 
00:32:52.251 ======================================================== 00:32:52.251 Latency(us) 00:32:52.251 Device Information : IOPS MiB/s Average min max 00:32:52.251 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15745.80 7.69 8132.81 1347.18 22618.77 00:32:52.251 ======================================================== 00:32:52.251 Total : 15745.80 7.69 8132.81 1347.18 22618.77 00:32:52.251 00:32:52.251 15:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:52.251 15:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:02.249 Initializing NVMe Controllers 00:33:02.249 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:02.249 Controller IO queue size 128, less than required. 00:33:02.249 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:02.249 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:02.249 Initialization complete. Launching workers. 
00:33:02.249 ======================================================== 00:33:02.249 Latency(us) 00:33:02.249 Device Information : IOPS MiB/s Average min max 00:33:02.249 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1199.30 149.91 106982.13 15286.91 233311.17 00:33:02.249 ======================================================== 00:33:02.249 Total : 1199.30 149.91 106982.13 15286.91 233311.17 00:33:02.249 00:33:02.249 15:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:02.249 15:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c12474ab-0143-40d2-be63-96bd22785226 00:33:04.162 15:50:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:33:04.162 15:50:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 13b18c0a-048d-4123-af74-acb45de66e7d 00:33:04.162 15:50:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:33:04.423 15:50:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:33:04.423 15:50:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:33:04.423 15:50:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # nvmfcleanup 00:33:04.423 15:50:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:33:04.423 15:50:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:04.423 15:50:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:33:04.423 15:50:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i 
in {1..20} 00:33:04.423 15:50:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:04.423 rmmod nvme_tcp 00:33:04.423 rmmod nvme_fabrics 00:33:04.423 rmmod nvme_keyring 00:33:04.423 15:50:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:04.423 15:50:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:33:04.423 15:50:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:33:04.423 15:50:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@513 -- # '[' -n 3304136 ']' 00:33:04.423 15:50:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # killprocess 3304136 00:33:04.423 15:50:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 3304136 ']' 00:33:04.423 15:50:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 3304136 00:33:04.423 15:50:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:33:04.423 15:50:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:04.423 15:50:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3304136 00:33:04.423 15:50:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:04.423 15:50:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:04.423 15:50:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3304136' 00:33:04.423 killing process with pid 3304136 00:33:04.423 15:50:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 3304136 00:33:04.423 15:50:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 3304136 00:33:06.970 15:50:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:33:06.970 15:50:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@519 -- # 
[[ tcp == \t\c\p ]] 00:33:06.970 15:50:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:33:06.970 15:50:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:33:06.970 15:50:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-save 00:33:06.970 15:50:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:33:06.970 15:50:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-restore 00:33:06.970 15:50:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:06.970 15:50:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:06.970 15:50:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:06.970 15:50:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:06.970 15:50:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:08.885 15:50:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:08.885 00:33:08.885 real 1m33.121s 00:33:08.885 user 5m26.697s 00:33:08.885 sys 0m16.530s 00:33:08.885 15:50:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:08.885 15:50:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:33:08.885 ************************************ 00:33:08.885 END TEST nvmf_perf 00:33:08.885 ************************************ 00:33:08.885 15:50:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:33:08.885 15:50:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:08.885 15:50:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:08.885 15:50:47 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:33:08.885 ************************************ 00:33:08.885 START TEST nvmf_fio_host 00:33:08.885 ************************************ 00:33:08.885 15:50:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:33:08.885 * Looking for test storage... 00:33:08.885 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:08.885 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lcov --version 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- 
# export 'LCOV_OPTS= 00:33:08.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:08.886 --rc genhtml_branch_coverage=1 00:33:08.886 --rc genhtml_function_coverage=1 00:33:08.886 --rc genhtml_legend=1 00:33:08.886 --rc geninfo_all_blocks=1 00:33:08.886 --rc geninfo_unexecuted_blocks=1 00:33:08.886 00:33:08.886 ' 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:08.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:08.886 --rc genhtml_branch_coverage=1 00:33:08.886 --rc genhtml_function_coverage=1 00:33:08.886 --rc genhtml_legend=1 00:33:08.886 --rc geninfo_all_blocks=1 00:33:08.886 --rc geninfo_unexecuted_blocks=1 00:33:08.886 00:33:08.886 ' 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:08.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:08.886 --rc genhtml_branch_coverage=1 00:33:08.886 --rc genhtml_function_coverage=1 00:33:08.886 --rc genhtml_legend=1 00:33:08.886 --rc geninfo_all_blocks=1 00:33:08.886 --rc geninfo_unexecuted_blocks=1 00:33:08.886 00:33:08.886 ' 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:08.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:08.886 --rc genhtml_branch_coverage=1 00:33:08.886 --rc genhtml_function_coverage=1 00:33:08.886 --rc genhtml_legend=1 00:33:08.886 --rc geninfo_all_blocks=1 00:33:08.886 --rc geninfo_unexecuted_blocks=1 00:33:08.886 00:33:08.886 ' 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:08.886 15:50:48 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.886 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.887 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.887 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:33:08.887 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.887 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:33:08.887 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:08.887 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:08.887 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:08.887 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:08.887 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:08.887 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:08.887 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:08.887 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:08.887 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:08.887 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:08.887 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:08.887 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:33:08.887 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:33:08.887 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:08.887 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # prepare_net_devs 00:33:08.887 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@434 -- # local -g is_hw=no 00:33:08.887 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # remove_spdk_ns 00:33:08.887 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:08.887 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:08.887 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:08.887 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:33:08.887 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:33:08.887 15:50:48 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:33:08.887 15:50:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.030 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:17.030 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:33:17.030 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:17.030 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:17.030 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:17.030 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:17.030 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:17.030 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:33:17.030 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:17.030 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:33:17.030 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:33:17.030 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:33:17.030 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:33:17.030 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:33:17.030 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:33:17.030 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:17.030 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:17.030 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:17.030 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:17.030 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:17.030 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:17.030 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:17.030 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:17.030 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:17.030 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:17.030 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:17.030 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:33:17.030 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:33:17.030 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:33:17.030 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:33:17.030 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:33:17.030 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:33:17.030 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:33:17.030 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:17.030 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:17.030 15:50:55 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:33:17.030 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:33:17.030 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:17.030 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:17.030 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:33:17.030 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:33:17.030 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:17.030 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:17.030 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:33:17.030 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:33:17.030 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:17.030 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:17.030 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:33:17.030 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:33:17.030 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:33:17.030 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:33:17.030 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:33:17.030 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:17.030 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:33:17.030 15:50:55 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:17.030 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ up == up ]] 00:33:17.030 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:17.030 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:17.030 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:17.030 Found net devices under 0000:31:00.0: cvl_0_0 00:33:17.030 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ up == up ]] 00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:17.031 Found net devices under 0000:31:00.1: cvl_0_1 00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # is_hw=yes 
00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:17.031 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:17.031 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.664 ms 00:33:17.031 00:33:17.031 --- 10.0.0.2 ping statistics --- 00:33:17.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:17.031 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms 00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:17.031 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:17.031 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:33:17.031 00:33:17.031 --- 10.0.0.1 ping statistics --- 00:33:17.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:17.031 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # return 0 00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3323970 00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3323970 00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 3323970 ']' 00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:17.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:17.031 15:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.031 [2024-10-01 15:50:56.023773] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:33:17.031 [2024-10-01 15:50:56.023845] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:17.031 [2024-10-01 15:50:56.066052] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:17.031 [2024-10-01 15:50:56.116797] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:17.031 [2024-10-01 15:50:56.165994] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:33:17.031 [2024-10-01 15:50:56.166049] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:17.031 [2024-10-01 15:50:56.166057] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:17.031 [2024-10-01 15:50:56.166069] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:17.031 [2024-10-01 15:50:56.166076] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:17.031 [2024-10-01 15:50:56.166223] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:33:17.031 [2024-10-01 15:50:56.166379] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:33:17.031 [2024-10-01 15:50:56.166539] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:33:17.031 [2024-10-01 15:50:56.166539] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:33:17.604 15:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:17.604 15:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:33:17.604 15:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:17.604 [2024-10-01 15:50:57.010736] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:17.604 15:50:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:33:17.604 15:50:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:17.604 15:50:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.865 15:50:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:33:17.865 Malloc1 
00:33:17.865 15:50:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:18.125 15:50:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:33:18.387 15:50:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:18.648 [2024-10-01 15:50:57.871600] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:18.648 15:50:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:18.909 15:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:33:18.909 15:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:18.909 15:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:18.909 15:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:18.909 15:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:18.909 
15:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:18.909 15:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:18.909 15:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:33:18.909 15:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:18.909 15:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:18.909 15:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:18.909 15:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:33:18.909 15:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:18.909 15:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:18.909 15:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:18.909 15:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:18.909 15:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:18.910 15:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:33:18.910 15:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:18.910 15:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:18.910 15:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:18.910 15:50:58 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:18.910 15:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:19.171 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:19.171 fio-3.35 00:33:19.171 Starting 1 thread 00:33:21.738 00:33:21.738 test: (groupid=0, jobs=1): err= 0: pid=3324779: Tue Oct 1 15:51:00 2024 00:33:21.738 read: IOPS=13.1k, BW=51.0MiB/s (53.5MB/s)(102MiB/2004msec) 00:33:21.738 slat (usec): min=2, max=286, avg= 2.16, stdev= 2.50 00:33:21.738 clat (usec): min=3767, max=9016, avg=5380.27, stdev=815.97 00:33:21.738 lat (usec): min=3812, max=9018, avg=5382.43, stdev=816.03 00:33:21.738 clat percentiles (usec): 00:33:21.738 | 1.00th=[ 4359], 5.00th=[ 4621], 10.00th=[ 4752], 20.00th=[ 4883], 00:33:21.738 | 30.00th=[ 5014], 40.00th=[ 5080], 50.00th=[ 5211], 60.00th=[ 5276], 00:33:21.738 | 70.00th=[ 5407], 80.00th=[ 5538], 90.00th=[ 6128], 95.00th=[ 7570], 00:33:21.738 | 99.00th=[ 8225], 99.50th=[ 8455], 99.90th=[ 8848], 99.95th=[ 8848], 00:33:21.738 | 99.99th=[ 8979] 00:33:21.738 bw ( KiB/s): min=42568, max=55792, per=99.95%, avg=52214.00, stdev=6435.94, samples=4 00:33:21.738 iops : min=10642, max=13948, avg=13053.50, stdev=1608.99, samples=4 00:33:21.738 write: IOPS=13.1k, BW=51.0MiB/s (53.5MB/s)(102MiB/2004msec); 0 zone resets 00:33:21.738 slat (usec): min=2, max=270, avg= 2.25, stdev= 1.92 00:33:21.738 clat (usec): min=2959, max=8285, avg=4353.37, stdev=675.56 00:33:21.738 lat (usec): min=2976, max=8287, avg=4355.62, stdev=675.67 00:33:21.738 clat percentiles (usec): 00:33:21.738 | 1.00th=[ 3523], 5.00th=[ 3720], 10.00th=[ 3818], 20.00th=[ 3949], 00:33:21.739 | 30.00th=[ 
4015], 40.00th=[ 4113], 50.00th=[ 4178], 60.00th=[ 4293], 00:33:21.739 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 5080], 95.00th=[ 6194], 00:33:21.739 | 99.00th=[ 6652], 99.50th=[ 6849], 99.90th=[ 7177], 99.95th=[ 7242], 00:33:21.739 | 99.99th=[ 8160] 00:33:21.739 bw ( KiB/s): min=42952, max=55816, per=99.97%, avg=52226.00, stdev=6208.39, samples=4 00:33:21.739 iops : min=10738, max=13954, avg=13056.50, stdev=1552.10, samples=4 00:33:21.739 lat (msec) : 4=13.06%, 10=86.94% 00:33:21.739 cpu : usr=73.49%, sys=25.11%, ctx=23, majf=0, minf=20 00:33:21.739 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:33:21.739 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.739 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:21.739 issued rwts: total=26173,26174,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:21.739 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:21.739 00:33:21.739 Run status group 0 (all jobs): 00:33:21.739 READ: bw=51.0MiB/s (53.5MB/s), 51.0MiB/s-51.0MiB/s (53.5MB/s-53.5MB/s), io=102MiB (107MB), run=2004-2004msec 00:33:21.739 WRITE: bw=51.0MiB/s (53.5MB/s), 51.0MiB/s-51.0MiB/s (53.5MB/s-53.5MB/s), io=102MiB (107MB), run=2004-2004msec 00:33:21.739 15:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:21.739 15:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:21.739 15:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:21.739 15:51:00 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:21.739 15:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:21.739 15:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:21.739 15:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:33:21.739 15:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:21.739 15:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:21.739 15:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:21.739 15:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:33:21.739 15:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:21.739 15:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:21.739 15:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:21.739 15:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:21.739 15:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:21.739 15:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:33:21.739 15:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:21.739 15:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:21.739 15:51:00 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:21.739 15:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:21.739 15:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:21.999 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:33:21.999 fio-3.35 00:33:21.999 Starting 1 thread 00:33:24.546 00:33:24.546 test: (groupid=0, jobs=1): err= 0: pid=3325403: Tue Oct 1 15:51:03 2024 00:33:24.546 read: IOPS=9498, BW=148MiB/s (156MB/s)(297MiB/2004msec) 00:33:24.546 slat (usec): min=3, max=112, avg= 3.59, stdev= 1.58 00:33:24.546 clat (usec): min=980, max=14064, avg=8132.99, stdev=1886.37 00:33:24.546 lat (usec): min=984, max=14067, avg=8136.59, stdev=1886.49 00:33:24.546 clat percentiles (usec): 00:33:24.546 | 1.00th=[ 4359], 5.00th=[ 5211], 10.00th=[ 5669], 20.00th=[ 6390], 00:33:24.546 | 30.00th=[ 6980], 40.00th=[ 7570], 50.00th=[ 8160], 60.00th=[ 8586], 00:33:24.546 | 70.00th=[ 9241], 80.00th=[ 9765], 90.00th=[10683], 95.00th=[11207], 00:33:24.546 | 99.00th=[12518], 99.50th=[12911], 99.90th=[13566], 99.95th=[13829], 00:33:24.546 | 99.99th=[14091] 00:33:24.546 bw ( KiB/s): min=70432, max=82171, per=50.00%, avg=75990.75, stdev=4850.03, samples=4 00:33:24.546 iops : min= 4402, max= 5135, avg=4749.25, stdev=302.83, samples=4 00:33:24.546 write: IOPS=5481, BW=85.6MiB/s (89.8MB/s)(156MiB/1816msec); 0 zone resets 00:33:24.546 slat (usec): min=39, max=326, avg=40.88, stdev= 6.77 00:33:24.546 clat (usec): min=2937, max=14145, avg=9167.52, stdev=1364.17 00:33:24.546 lat (usec): min=2977, max=14185, avg=9208.40, stdev=1365.55 00:33:24.546 clat percentiles 
(usec): 00:33:24.546 | 1.00th=[ 6194], 5.00th=[ 7177], 10.00th=[ 7504], 20.00th=[ 8029], 00:33:24.546 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[ 9503], 00:33:24.546 | 70.00th=[ 9765], 80.00th=[10290], 90.00th=[10945], 95.00th=[11600], 00:33:24.546 | 99.00th=[12649], 99.50th=[13304], 99.90th=[13698], 99.95th=[13829], 00:33:24.546 | 99.99th=[14091] 00:33:24.546 bw ( KiB/s): min=73344, max=85652, per=90.17%, avg=79077.00, stdev=5189.28, samples=4 00:33:24.546 iops : min= 4584, max= 5353, avg=4942.25, stdev=324.22, samples=4 00:33:24.546 lat (usec) : 1000=0.01% 00:33:24.546 lat (msec) : 4=0.47%, 10=79.61%, 20=19.92% 00:33:24.546 cpu : usr=83.82%, sys=14.63%, ctx=18, majf=0, minf=34 00:33:24.546 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:33:24.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:24.546 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:24.546 issued rwts: total=19034,9954,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:24.546 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:24.546 00:33:24.546 Run status group 0 (all jobs): 00:33:24.546 READ: bw=148MiB/s (156MB/s), 148MiB/s-148MiB/s (156MB/s-156MB/s), io=297MiB (312MB), run=2004-2004msec 00:33:24.546 WRITE: bw=85.6MiB/s (89.8MB/s), 85.6MiB/s-85.6MiB/s (89.8MB/s-89.8MB/s), io=156MiB (163MB), run=1816-1816msec 00:33:24.546 15:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:24.546 15:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:33:24.546 15:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:33:24.546 15:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:33:24.546 15:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # bdfs=() 
00:33:24.546 15:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # local bdfs 00:33:24.546 15:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:24.546 15:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:24.546 15:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:33:24.546 15:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:33:24.547 15:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:33:24.547 15:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 -i 10.0.0.2 00:33:24.807 Nvme0n1 00:33:25.068 15:51:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:33:25.639 15:51:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=5920b6f5-30e9-426b-80da-974268a8ee63 00:33:25.639 15:51:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 5920b6f5-30e9-426b-80da-974268a8ee63 00:33:25.639 15:51:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=5920b6f5-30e9-426b-80da-974268a8ee63 00:33:25.639 15:51:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:33:25.639 15:51:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:33:25.639 15:51:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:33:25.639 15:51:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:25.639 15:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:33:25.639 { 00:33:25.639 "uuid": "5920b6f5-30e9-426b-80da-974268a8ee63", 00:33:25.639 "name": "lvs_0", 00:33:25.639 "base_bdev": "Nvme0n1", 00:33:25.639 "total_data_clusters": 1787, 00:33:25.639 "free_clusters": 1787, 00:33:25.639 "block_size": 512, 00:33:25.639 "cluster_size": 1073741824 00:33:25.639 } 00:33:25.639 ]' 00:33:25.639 15:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="5920b6f5-30e9-426b-80da-974268a8ee63") .free_clusters' 00:33:25.639 15:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=1787 00:33:25.639 15:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="5920b6f5-30e9-426b-80da-974268a8ee63") .cluster_size' 00:33:25.900 15:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:33:25.900 15:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=1829888 00:33:25.900 15:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 1829888 00:33:25.900 1829888 00:33:25.900 15:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1829888 00:33:25.900 563d87d7-8383-4ccf-ba7b-e04c0550da0d 00:33:25.900 15:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:33:26.159 15:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 
lvs_0/lbd_0 00:33:26.419 15:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:26.419 15:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:26.419 15:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:26.419 15:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:26.419 15:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:26.419 15:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:26.419 15:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:26.419 15:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:33:26.419 15:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:26.419 15:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:26.419 15:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:26.700 15:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:33:26.700 
15:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:26.700 15:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:26.700 15:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:26.700 15:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:26.700 15:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:26.700 15:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:33:26.700 15:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:26.700 15:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:26.700 15:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:26.700 15:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:26.700 15:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:26.961 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:26.961 fio-3.35 00:33:26.961 Starting 1 thread 00:33:29.503 00:33:29.503 test: (groupid=0, jobs=1): err= 0: pid=3326631: Tue Oct 1 15:51:08 2024 00:33:29.503 read: IOPS=10.4k, BW=40.6MiB/s (42.5MB/s)(81.3MiB/2005msec) 00:33:29.503 slat (usec): min=2, max=127, avg= 2.22, stdev= 1.22 00:33:29.503 clat (usec): min=2490, max=10758, avg=6802.45, stdev=507.97 00:33:29.503 lat (usec): 
min=2507, max=10760, avg=6804.67, stdev=507.91 00:33:29.503 clat percentiles (usec): 00:33:29.503 | 1.00th=[ 5669], 5.00th=[ 5997], 10.00th=[ 6194], 20.00th=[ 6390], 00:33:29.503 | 30.00th=[ 6521], 40.00th=[ 6652], 50.00th=[ 6783], 60.00th=[ 6915], 00:33:29.503 | 70.00th=[ 7046], 80.00th=[ 7177], 90.00th=[ 7439], 95.00th=[ 7570], 00:33:29.503 | 99.00th=[ 7898], 99.50th=[ 8094], 99.90th=[10028], 99.95th=[10421], 00:33:29.503 | 99.99th=[10683] 00:33:29.503 bw ( KiB/s): min=40448, max=42112, per=99.86%, avg=41486.00, stdev=722.44, samples=4 00:33:29.503 iops : min=10112, max=10528, avg=10371.50, stdev=180.61, samples=4 00:33:29.503 write: IOPS=10.4k, BW=40.6MiB/s (42.6MB/s)(81.4MiB/2005msec); 0 zone resets 00:33:29.503 slat (nsec): min=2101, max=111871, avg=2282.20, stdev=808.41 00:33:29.503 clat (usec): min=1083, max=9466, avg=5442.01, stdev=434.52 00:33:29.503 lat (usec): min=1091, max=9469, avg=5444.29, stdev=434.50 00:33:29.503 clat percentiles (usec): 00:33:29.503 | 1.00th=[ 4424], 5.00th=[ 4752], 10.00th=[ 4948], 20.00th=[ 5080], 00:33:29.503 | 30.00th=[ 5211], 40.00th=[ 5342], 50.00th=[ 5473], 60.00th=[ 5538], 00:33:29.503 | 70.00th=[ 5669], 80.00th=[ 5800], 90.00th=[ 5932], 95.00th=[ 6128], 00:33:29.503 | 99.00th=[ 6390], 99.50th=[ 6521], 99.90th=[ 7832], 99.95th=[ 8586], 00:33:29.503 | 99.99th=[ 9372] 00:33:29.503 bw ( KiB/s): min=40976, max=42024, per=100.00%, avg=41566.00, stdev=435.36, samples=4 00:33:29.503 iops : min=10244, max=10506, avg=10391.50, stdev=108.84, samples=4 00:33:29.503 lat (msec) : 2=0.02%, 4=0.11%, 10=99.82%, 20=0.05% 00:33:29.503 cpu : usr=71.61%, sys=27.35%, ctx=52, majf=0, minf=29 00:33:29.503 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:33:29.503 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.503 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:29.503 issued rwts: total=20823,20830,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:29.503 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:33:29.503 00:33:29.503 Run status group 0 (all jobs): 00:33:29.504 READ: bw=40.6MiB/s (42.5MB/s), 40.6MiB/s-40.6MiB/s (42.5MB/s-42.5MB/s), io=81.3MiB (85.3MB), run=2005-2005msec 00:33:29.504 WRITE: bw=40.6MiB/s (42.6MB/s), 40.6MiB/s-40.6MiB/s (42.6MB/s-42.6MB/s), io=81.4MiB (85.3MB), run=2005-2005msec 00:33:29.504 15:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:29.504 15:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:33:30.476 15:51:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=5dbe8d16-6149-4a5c-9269-6ee4f0bbf443 00:33:30.476 15:51:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 5dbe8d16-6149-4a5c-9269-6ee4f0bbf443 00:33:30.476 15:51:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=5dbe8d16-6149-4a5c-9269-6ee4f0bbf443 00:33:30.476 15:51:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:33:30.476 15:51:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:33:30.476 15:51:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:33:30.476 15:51:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:30.477 15:51:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:33:30.477 { 00:33:30.477 "uuid": "5920b6f5-30e9-426b-80da-974268a8ee63", 00:33:30.477 "name": "lvs_0", 00:33:30.477 "base_bdev": "Nvme0n1", 00:33:30.477 "total_data_clusters": 1787, 00:33:30.477 "free_clusters": 0, 00:33:30.477 
"block_size": 512, 00:33:30.477 "cluster_size": 1073741824 00:33:30.477 }, 00:33:30.477 { 00:33:30.477 "uuid": "5dbe8d16-6149-4a5c-9269-6ee4f0bbf443", 00:33:30.477 "name": "lvs_n_0", 00:33:30.477 "base_bdev": "563d87d7-8383-4ccf-ba7b-e04c0550da0d", 00:33:30.477 "total_data_clusters": 457025, 00:33:30.477 "free_clusters": 457025, 00:33:30.477 "block_size": 512, 00:33:30.477 "cluster_size": 4194304 00:33:30.477 } 00:33:30.477 ]' 00:33:30.477 15:51:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="5dbe8d16-6149-4a5c-9269-6ee4f0bbf443") .free_clusters' 00:33:30.477 15:51:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=457025 00:33:30.477 15:51:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="5dbe8d16-6149-4a5c-9269-6ee4f0bbf443") .cluster_size' 00:33:30.477 15:51:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:33:30.477 15:51:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=1828100 00:33:30.477 15:51:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 1828100 00:33:30.477 1828100 00:33:30.477 15:51:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1828100 00:33:31.417 0f5dd58c-df79-468b-a210-8598158de39f 00:33:31.417 15:51:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:33:31.417 15:51:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:33:31.678 15:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:33:31.938 15:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:31.938 15:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:31.938 15:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:31.938 15:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:31.938 15:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:31.938 15:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:31.938 15:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:33:31.938 15:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:31.938 15:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:31.938 15:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:31.938 15:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:33:31.938 15:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print 
$3}' 00:33:31.938 15:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:31.938 15:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:31.938 15:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:31.938 15:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:31.938 15:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:33:31.938 15:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:31.938 15:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:31.938 15:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:31.938 15:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:31.938 15:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:32.197 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:32.197 fio-3.35 00:33:32.197 Starting 1 thread 00:33:34.741 00:33:34.741 test: (groupid=0, jobs=1): err= 0: pid=3328253: Tue Oct 1 15:51:13 2024 00:33:34.741 read: IOPS=9215, BW=36.0MiB/s (37.7MB/s)(72.2MiB/2005msec) 00:33:34.741 slat (usec): min=2, max=109, avg= 2.21, stdev= 1.08 00:33:34.741 clat (usec): min=2400, max=12435, avg=7667.57, stdev=588.30 00:33:34.741 lat (usec): min=2415, max=12437, avg=7669.78, stdev=588.23 00:33:34.741 clat percentiles (usec): 
00:33:34.741 | 1.00th=[ 6325], 5.00th=[ 6718], 10.00th=[ 6980], 20.00th=[ 7177], 00:33:34.741 | 30.00th=[ 7373], 40.00th=[ 7504], 50.00th=[ 7701], 60.00th=[ 7832], 00:33:34.741 | 70.00th=[ 7963], 80.00th=[ 8160], 90.00th=[ 8356], 95.00th=[ 8586], 00:33:34.741 | 99.00th=[ 8979], 99.50th=[ 9110], 99.90th=[11731], 99.95th=[12125], 00:33:34.741 | 99.99th=[12387] 00:33:34.741 bw ( KiB/s): min=35560, max=37416, per=99.84%, avg=36804.00, stdev=845.95, samples=4 00:33:34.741 iops : min= 8890, max= 9354, avg=9201.00, stdev=211.49, samples=4 00:33:34.741 write: IOPS=9220, BW=36.0MiB/s (37.8MB/s)(72.2MiB/2005msec); 0 zone resets 00:33:34.741 slat (nsec): min=2090, max=95815, avg=2274.07, stdev=762.47 00:33:34.741 clat (usec): min=1093, max=11093, avg=6102.56, stdev=501.89 00:33:34.741 lat (usec): min=1101, max=11095, avg=6104.83, stdev=501.87 00:33:34.741 clat percentiles (usec): 00:33:34.741 | 1.00th=[ 4948], 5.00th=[ 5342], 10.00th=[ 5473], 20.00th=[ 5735], 00:33:34.741 | 30.00th=[ 5866], 40.00th=[ 5997], 50.00th=[ 6128], 60.00th=[ 6194], 00:33:34.741 | 70.00th=[ 6325], 80.00th=[ 6521], 90.00th=[ 6718], 95.00th=[ 6849], 00:33:34.741 | 99.00th=[ 7177], 99.50th=[ 7308], 99.90th=[ 8455], 99.95th=[10028], 00:33:34.741 | 99.99th=[11076] 00:33:34.741 bw ( KiB/s): min=36328, max=37184, per=99.97%, avg=36874.00, stdev=389.37, samples=4 00:33:34.741 iops : min= 9082, max= 9296, avg=9218.50, stdev=97.34, samples=4 00:33:34.741 lat (msec) : 2=0.01%, 4=0.10%, 10=99.76%, 20=0.12% 00:33:34.741 cpu : usr=71.31%, sys=27.84%, ctx=42, majf=0, minf=29 00:33:34.741 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:33:34.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:34.741 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:34.741 issued rwts: total=18478,18488,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:34.742 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:34.742 00:33:34.742 Run status 
group 0 (all jobs): 00:33:34.742 READ: bw=36.0MiB/s (37.7MB/s), 36.0MiB/s-36.0MiB/s (37.7MB/s-37.7MB/s), io=72.2MiB (75.7MB), run=2005-2005msec 00:33:34.742 WRITE: bw=36.0MiB/s (37.8MB/s), 36.0MiB/s-36.0MiB/s (37.8MB/s-37.8MB/s), io=72.2MiB (75.7MB), run=2005-2005msec 00:33:34.742 15:51:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:33:34.742 15:51:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:33:34.742 15:51:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:33:36.655 15:51:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:33:36.916 15:51:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:33:37.488 15:51:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:33:37.488 15:51:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:33:40.033 15:51:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:33:40.033 15:51:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:33:40.033 15:51:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:33:40.033 15:51:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@512 -- # nvmfcleanup 00:33:40.033 15:51:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:33:40.033 15:51:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:40.033 15:51:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:33:40.033 15:51:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:40.033 15:51:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:40.033 rmmod nvme_tcp 00:33:40.033 rmmod nvme_fabrics 00:33:40.033 rmmod nvme_keyring 00:33:40.033 15:51:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:40.033 15:51:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:33:40.033 15:51:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:33:40.033 15:51:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@513 -- # '[' -n 3323970 ']' 00:33:40.033 15:51:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # killprocess 3323970 00:33:40.033 15:51:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 3323970 ']' 00:33:40.033 15:51:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 3323970 00:33:40.033 15:51:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:33:40.033 15:51:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:40.033 15:51:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3323970 00:33:40.033 15:51:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:40.033 15:51:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:40.033 15:51:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3323970' 00:33:40.033 killing process with pid 3323970 00:33:40.033 15:51:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 
3323970 00:33:40.033 15:51:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 3323970 00:33:40.033 15:51:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:33:40.033 15:51:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:33:40.033 15:51:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:33:40.033 15:51:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:33:40.033 15:51:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:33:40.033 15:51:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # iptables-save 00:33:40.033 15:51:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # iptables-restore 00:33:40.033 15:51:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:40.033 15:51:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:40.033 15:51:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:40.033 15:51:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:40.033 15:51:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:41.946 15:51:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:41.946 00:33:41.946 real 0m33.310s 00:33:41.946 user 2m36.152s 00:33:41.946 sys 0m10.373s 00:33:41.946 15:51:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:41.946 15:51:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.946 ************************************ 00:33:41.946 END TEST nvmf_fio_host 00:33:41.946 ************************************ 00:33:41.946 15:51:21 nvmf_tcp.nvmf_host -- 
nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:41.946 15:51:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:41.946 15:51:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:41.946 15:51:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.946 ************************************ 00:33:41.946 START TEST nvmf_failover 00:33:41.946 ************************************ 00:33:41.946 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:42.208 * Looking for test storage... 00:33:42.208 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lcov --version 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 
-- # read -ra ver2 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:42.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:42.208 --rc genhtml_branch_coverage=1 00:33:42.208 --rc genhtml_function_coverage=1 00:33:42.208 --rc genhtml_legend=1 00:33:42.208 --rc geninfo_all_blocks=1 00:33:42.208 --rc geninfo_unexecuted_blocks=1 00:33:42.208 00:33:42.208 ' 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:42.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:42.208 --rc genhtml_branch_coverage=1 00:33:42.208 --rc genhtml_function_coverage=1 00:33:42.208 --rc genhtml_legend=1 00:33:42.208 --rc geninfo_all_blocks=1 00:33:42.208 --rc geninfo_unexecuted_blocks=1 00:33:42.208 00:33:42.208 ' 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:42.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:42.208 --rc genhtml_branch_coverage=1 00:33:42.208 --rc genhtml_function_coverage=1 00:33:42.208 --rc genhtml_legend=1 00:33:42.208 --rc geninfo_all_blocks=1 00:33:42.208 --rc geninfo_unexecuted_blocks=1 00:33:42.208 00:33:42.208 ' 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:42.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:42.208 --rc genhtml_branch_coverage=1 00:33:42.208 --rc genhtml_function_coverage=1 00:33:42.208 --rc genhtml_legend=1 00:33:42.208 --rc geninfo_all_blocks=1 00:33:42.208 --rc geninfo_unexecuted_blocks=1 00:33:42.208 00:33:42.208 ' 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover 
-- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:42.208 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:42.209 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:42.209 15:51:21 
nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:42.209 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:33:42.209 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:42.209 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:33:42.209 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:42.209 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:42.209 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:42.209 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:42.209 15:51:21 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:42.209 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:42.209 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:42.209 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:42.209 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:42.209 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:42.209 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:42.209 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:42.209 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:42.209 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:42.209 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:33:42.209 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:33:42.209 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:42.209 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # prepare_net_devs 00:33:42.209 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@434 -- # local -g is_hw=no 00:33:42.209 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # remove_spdk_ns 00:33:42.209 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:42.209 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:42.209 15:51:21 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:42.209 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:33:42.209 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:33:42.209 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:33:42.209 15:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:50.356 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:50.356 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:33:50.356 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:50.356 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:50.356 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:50.356 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:50.356 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:50.356 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:33:50.356 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:50.356 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:33:50.356 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:33:50.356 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:33:50.356 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:33:50.356 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:33:50.356 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:33:50.356 15:51:28 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:50.356 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:50.356 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:50.356 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:50.356 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:50.356 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:50.356 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:50.356 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:50.356 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:50.356 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:50.356 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:50.356 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:33:50.356 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:33:50.356 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:33:50.356 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:33:50.356 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:33:50.356 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:33:50.356 15:51:28 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:33:50.356 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:50.356 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:50.356 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:33:50.356 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:33:50.357 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:50.357 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:50.357 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:33:50.357 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:33:50.357 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:50.357 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:50.357 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:33:50.357 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:33:50.357 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:50.357 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:50.357 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:33:50.357 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:33:50.357 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:33:50.357 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:33:50.357 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@406 -- # for pci in 
"${pci_devs[@]}" 00:33:50.357 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:50.357 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:33:50.357 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:50.357 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ up == up ]] 00:33:50.357 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:50.357 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:50.357 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:50.357 Found net devices under 0000:31:00.0: cvl_0_0 00:33:50.357 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:33:50.357 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:33:50.357 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:50.357 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:33:50.357 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:50.357 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ up == up ]] 00:33:50.357 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:50.357 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:50.357 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:50.357 Found net devices under 0000:31:00.1: cvl_0_1 00:33:50.357 15:51:28 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:33:50.357 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:33:50.357 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # is_hw=yes 00:33:50.357 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:33:50.357 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:33:50.357 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:33:50.357 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:50.357 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:50.357 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:50.357 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:50.357 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:50.357 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:50.357 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:50.357 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:50.357 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:50.357 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:50.357 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:50.357 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:50.357 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:50.357 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:50.357 15:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:50.357 15:51:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:50.357 15:51:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:50.357 15:51:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:50.357 15:51:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:50.357 15:51:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:50.357 15:51:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:50.357 15:51:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:50.357 15:51:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:50.357 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:50.357 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.611 ms 00:33:50.357 00:33:50.357 --- 10.0.0.2 ping statistics --- 00:33:50.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:50.357 rtt min/avg/max/mdev = 0.611/0.611/0.611/0.000 ms 00:33:50.357 15:51:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:50.357 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:50.357 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:33:50.357 00:33:50.357 --- 10.0.0.1 ping statistics --- 00:33:50.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:50.357 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:33:50.357 15:51:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:50.357 15:51:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # return 0 00:33:50.357 15:51:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:33:50.357 15:51:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:50.357 15:51:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:33:50.357 15:51:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:33:50.357 15:51:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:50.357 15:51:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:33:50.357 15:51:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:33:50.357 15:51:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:33:50.357 15:51:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:33:50.357 15:51:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:50.357 15:51:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:50.357 15:51:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # nvmfpid=3333970 00:33:50.357 15:51:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # waitforlisten 3333970 00:33:50.357 15:51:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:50.357 15:51:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 3333970 ']' 00:33:50.357 15:51:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:50.357 15:51:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:50.357 15:51:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:50.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:50.357 15:51:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:50.357 15:51:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:50.357 [2024-10-01 15:51:29.410407] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:33:50.357 [2024-10-01 15:51:29.410472] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:50.357 [2024-10-01 15:51:29.451913] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:50.357 [2024-10-01 15:51:29.501452] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:50.357 [2024-10-01 15:51:29.548570] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:50.357 [2024-10-01 15:51:29.548622] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:33:50.357 [2024-10-01 15:51:29.548630] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:50.357 [2024-10-01 15:51:29.548637] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:50.357 [2024-10-01 15:51:29.548643] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:50.357 [2024-10-01 15:51:29.548798] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:33:50.357 [2024-10-01 15:51:29.548996] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:33:50.357 [2024-10-01 15:51:29.549169] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:33:50.928 15:51:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:50.928 15:51:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:33:50.928 15:51:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:33:50.928 15:51:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:50.928 15:51:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:50.928 15:51:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:50.928 15:51:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:51.188 [2024-10-01 15:51:30.441580] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:51.188 15:51:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:33:51.449 Malloc0 00:33:51.449 15:51:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:51.710 15:51:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:51.710 15:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:51.971 [2024-10-01 15:51:31.277432] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:51.971 15:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:52.232 [2024-10-01 15:51:31.474013] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:52.232 15:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:52.232 [2024-10-01 15:51:31.670703] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:33:52.492 15:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:33:52.492 15:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3334337 00:33:52.492 15:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM 
EXIT 00:33:52.492 15:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3334337 /var/tmp/bdevperf.sock 00:33:52.492 15:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 3334337 ']' 00:33:52.492 15:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:52.492 15:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:52.492 15:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:52.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:52.492 15:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:52.492 15:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:52.752 15:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:52.752 15:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:33:52.752 15:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:53.011 NVMe0n1 00:33:53.011 15:51:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:53.272 00:33:53.272 15:51:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 
00:33:53.272 15:51:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3334531 00:33:53.272 15:51:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:33:54.211 15:51:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:54.471 [2024-10-01 15:51:33.800980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc049c0 is same with the state(6) to be set 00:33:54.471 [2024-10-01 15:51:33.801023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc049c0 is same with the state(6) to be set 00:33:54.471 [2024-10-01 15:51:33.801035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc049c0 is same with the state(6) to be set 00:33:54.471 [2024-10-01 15:51:33.801040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc049c0 is same with the state(6) to be set 00:33:54.471 [2024-10-01 15:51:33.801045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc049c0 is same with the state(6) to be set 00:33:54.471 15:51:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:33:57.766 15:51:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:58.026 00:33:58.026 15:51:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:58.026 [2024-10-01 15:51:37.430726] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05810 is same with the 
state(6) to be set 00:33:58.026 [2024-10-01 15:51:37.430761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05810 is same with the state(6) to be set 00:33:58.026 [2024-10-01 15:51:37.430767] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05810 is same with the state(6) to be set 00:33:58.026 [2024-10-01 15:51:37.430772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05810 is same with the state(6) to be set 00:33:58.026 [2024-10-01 15:51:37.430777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05810 is same with the state(6) to be set 00:33:58.026 [2024-10-01 15:51:37.430782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05810 is same with the state(6) to be set 00:33:58.026 [2024-10-01 15:51:37.430786] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05810 is same with the state(6) to be set 00:33:58.026 [2024-10-01 15:51:37.430791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05810 is same with the state(6) to be set 00:33:58.026 15:51:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:34:01.322 15:51:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:01.322 [2024-10-01 15:51:40.618649] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:01.322 15:51:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:34:02.264 15:51:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:34:02.525 [2024-10-01 15:51:41.816583] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc06710 is same with the state(6) to be set 00:34:02.525 [2024-10-01 15:51:41.816733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc06710
is same with the state(6) to be set 00:34:02.525 [2024-10-01 15:51:41.816738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc06710 is same with the state(6) to be set 00:34:02.525 [2024-10-01 15:51:41.816742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc06710 is same with the state(6) to be set 00:34:02.525 [2024-10-01 15:51:41.816748] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc06710 is same with the state(6) to be set 00:34:02.525 [2024-10-01 15:51:41.816753] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc06710 is same with the state(6) to be set 00:34:02.525 15:51:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3334531 00:34:09.116 { 00:34:09.116 "results": [ 00:34:09.116 { 00:34:09.116 "job": "NVMe0n1", 00:34:09.116 "core_mask": "0x1", 00:34:09.116 "workload": "verify", 00:34:09.116 "status": "finished", 00:34:09.116 "verify_range": { 00:34:09.116 "start": 0, 00:34:09.116 "length": 16384 00:34:09.116 }, 00:34:09.116 "queue_depth": 128, 00:34:09.116 "io_size": 4096, 00:34:09.116 "runtime": 15.006577, 00:34:09.116 "iops": 12340.722337945555, 00:34:09.116 "mibps": 48.205946632599826, 00:34:09.116 "io_failed": 9405, 00:34:09.116 "io_timeout": 0, 00:34:09.116 "avg_latency_us": 9849.207283017382, 00:34:09.116 "min_latency_us": 539.3066666666666, 00:34:09.116 "max_latency_us": 17367.04 00:34:09.116 } 00:34:09.116 ], 00:34:09.116 "core_count": 1 00:34:09.116 } 00:34:09.116 15:51:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3334337 00:34:09.116 15:51:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 3334337 ']' 00:34:09.116 15:51:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3334337 00:34:09.116 15:51:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:34:09.116 15:51:47 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:09.116 15:51:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3334337 00:34:09.116 15:51:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:09.116 15:51:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:09.116 15:51:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3334337' 00:34:09.116 killing process with pid 3334337 00:34:09.116 15:51:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3334337 00:34:09.116 15:51:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3334337 00:34:09.116 15:51:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:09.116 [2024-10-01 15:51:31.751092] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:34:09.116 [2024-10-01 15:51:31.751171] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3334337 ] 00:34:09.116 [2024-10-01 15:51:31.785589] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:34:09.116 [2024-10-01 15:51:31.834401] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:09.116 [2024-10-01 15:51:31.866542] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:34:09.116 Running I/O for 15 seconds... 
00:34:09.116 11224.00 IOPS, 43.84 MiB/s [2024-10-01 15:51:33.802413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:34:09.116 [2024-10-01 15:51:33.802449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:09.116 ... (same ASYNC EVENT REQUEST command / ABORTED - SQ DELETION completion pair repeated for qid:0 cid:1 through cid:3)
00:34:09.117 [2024-10-01 15:51:33.802515] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b4dc0 is same with the state(6) to be set
00:34:09.117 [2024-10-01 15:51:33.802571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:97144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:09.117 [2024-10-01 15:51:33.802581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:09.117 ... (identical command/completion pairs repeated for every outstanding I/O on qid:1 — 7 READs at lba 97144-97200 and ~108 WRITEs at lba 97208-98056, each len:8, all completed ABORTED - SQ DELETION (00/08) between 15:51:33.802596 and 15:51:33.804589)
[2024-10-01 15:51:33.804598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:98064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.120 [2024-10-01 15:51:33.804606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.120 [2024-10-01 15:51:33.804615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:98072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.120 [2024-10-01 15:51:33.804622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.120 [2024-10-01 15:51:33.804632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:98080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.120 [2024-10-01 15:51:33.804639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.120 [2024-10-01 15:51:33.804648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:98088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.120 [2024-10-01 15:51:33.804655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.120 [2024-10-01 15:51:33.804665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:98096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.120 [2024-10-01 15:51:33.804673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.120 [2024-10-01 15:51:33.804683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:98104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.120 [2024-10-01 15:51:33.804691] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.120 [2024-10-01 15:51:33.804700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:98112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.120 [2024-10-01 15:51:33.804707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.120 [2024-10-01 15:51:33.804716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:98120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.120 [2024-10-01 15:51:33.804723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.120 [2024-10-01 15:51:33.804733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:98128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.120 [2024-10-01 15:51:33.804740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.120 [2024-10-01 15:51:33.804750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:98136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.120 [2024-10-01 15:51:33.804757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.120 [2024-10-01 15:51:33.804766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:98144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.120 [2024-10-01 15:51:33.804778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.120 [2024-10-01 15:51:33.804788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 
lba:98152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.120 [2024-10-01 15:51:33.804796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.120 [2024-10-01 15:51:33.804817] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:09.120 [2024-10-01 15:51:33.804823] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:09.120 [2024-10-01 15:51:33.804831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98160 len:8 PRP1 0x0 PRP2 0x0 00:34:09.120 [2024-10-01 15:51:33.804839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.120 [2024-10-01 15:51:33.804877] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8d5380 was disconnected and freed. reset controller. 00:34:09.120 [2024-10-01 15:51:33.804887] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:34:09.120 [2024-10-01 15:51:33.804907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:09.120 [2024-10-01 15:51:33.808391] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:09.120 [2024-10-01 15:51:33.808414] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b4dc0 (9): Bad file descriptor 00:34:09.120 [2024-10-01 15:51:33.849515] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:34:09.120 11114.50 IOPS, 43.42 MiB/s 11111.67 IOPS, 43.40 MiB/s 11180.50 IOPS, 43.67 MiB/s [2024-10-01 15:51:37.431100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:47536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.120 [2024-10-01 15:51:37.431132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.120 [2024-10-01 15:51:37.431149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:47544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.120 [2024-10-01 15:51:37.431155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.120 [2024-10-01 15:51:37.431162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:47552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.120 [2024-10-01 15:51:37.431167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.120 [2024-10-01 15:51:37.431174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:47560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.120 [2024-10-01 15:51:37.431179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.120 [2024-10-01 15:51:37.431186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:47568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.120 [2024-10-01 15:51:37.431191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.120 [2024-10-01 15:51:37.431197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:47576 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:34:09.120 [2024-10-01 15:51:37.431203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.120 [2024-10-01 15:51:37.431209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:47584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.120 [2024-10-01 15:51:37.431215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.120 [2024-10-01 15:51:37.431221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:47592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.120 [2024-10-01 15:51:37.431226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.120 [2024-10-01 15:51:37.431232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:47600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.120 [2024-10-01 15:51:37.431238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.120 [2024-10-01 15:51:37.431244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:47608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.120 [2024-10-01 15:51:37.431249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.120 [2024-10-01 15:51:37.431256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:47616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.120 [2024-10-01 15:51:37.431261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.120 [2024-10-01 15:51:37.431268] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:47624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.120 [2024-10-01 15:51:37.431273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.120 [2024-10-01 15:51:37.431280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:47632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.120 [2024-10-01 15:51:37.431285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.120 [2024-10-01 15:51:37.431292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:47640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.120 [2024-10-01 15:51:37.431298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.120 [2024-10-01 15:51:37.431304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:47648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.120 [2024-10-01 15:51:37.431309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.120 [2024-10-01 15:51:37.431316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:47656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.120 [2024-10-01 15:51:37.431321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.120 [2024-10-01 15:51:37.431327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:47664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.120 [2024-10-01 15:51:37.431332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.120 [2024-10-01 15:51:37.431339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:47672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.120 [2024-10-01 15:51:37.431344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.120 [2024-10-01 15:51:37.431350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:47680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.120 [2024-10-01 15:51:37.431355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.120 [2024-10-01 15:51:37.431362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:47688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.120 [2024-10-01 15:51:37.431367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.120 [2024-10-01 15:51:37.431373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:47696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.120 [2024-10-01 15:51:37.431378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.120 [2024-10-01 15:51:37.431385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:47704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.120 [2024-10-01 15:51:37.431390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.121 [2024-10-01 15:51:37.431396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:47712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.121 
[2024-10-01 15:51:37.431401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.121 [2024-10-01 15:51:37.431408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:47720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.121 [2024-10-01 15:51:37.431413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.121 [2024-10-01 15:51:37.431419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:47728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.121 [2024-10-01 15:51:37.431424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.121 [2024-10-01 15:51:37.431430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:47736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.121 [2024-10-01 15:51:37.431436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.121 [2024-10-01 15:51:37.431444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:46840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.121 [2024-10-01 15:51:37.431449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.121 [2024-10-01 15:51:37.431456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:46848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.121 [2024-10-01 15:51:37.431461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.121 [2024-10-01 15:51:37.431468] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:123 nsid:1 lba:46856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.121 [2024-10-01 15:51:37.431473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.121 [2024-10-01 15:51:37.431480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:46864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.121 [2024-10-01 15:51:37.431485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.121 [2024-10-01 15:51:37.431491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:46872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.121 [2024-10-01 15:51:37.431496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.121 [2024-10-01 15:51:37.431502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:46880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.121 [2024-10-01 15:51:37.431508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.121 [2024-10-01 15:51:37.431515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:46888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.121 [2024-10-01 15:51:37.431521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.121 [2024-10-01 15:51:37.431528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:46896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.121 [2024-10-01 15:51:37.431533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:34:09.121 [2024-10-01 15:51:37.431540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:46904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.121 [2024-10-01 15:51:37.431545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.121 [2024-10-01 15:51:37.431552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:46912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.121 [2024-10-01 15:51:37.431557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.121 [2024-10-01 15:51:37.431563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:46920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.121 [2024-10-01 15:51:37.431569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.121 [2024-10-01 15:51:37.431576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:46928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.121 [2024-10-01 15:51:37.431581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.121 [2024-10-01 15:51:37.431588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:46936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.121 [2024-10-01 15:51:37.431594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.121 [2024-10-01 15:51:37.431601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:46944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.121 [2024-10-01 15:51:37.431606] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.121 [2024-10-01 15:51:37.431613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:46952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.121 [2024-10-01 15:51:37.431618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.121 [2024-10-01 15:51:37.431625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:46960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.121 [2024-10-01 15:51:37.431630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.121 [2024-10-01 15:51:37.431637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:46968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.121 [2024-10-01 15:51:37.431642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.121 [2024-10-01 15:51:37.431648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:46976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.121 [2024-10-01 15:51:37.431653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.121 [2024-10-01 15:51:37.431660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:46984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.121 [2024-10-01 15:51:37.431665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.121 [2024-10-01 15:51:37.431672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:99 nsid:1 lba:46992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.121 [2024-10-01 15:51:37.431677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.121 [2024-10-01 15:51:37.431684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:47000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.121 [2024-10-01 15:51:37.431689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.121 [2024-10-01 15:51:37.431696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:47008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.121 [2024-10-01 15:51:37.431701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.121 [2024-10-01 15:51:37.431707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:47016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.121 [2024-10-01 15:51:37.431713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.121 [2024-10-01 15:51:37.431719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:47024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.121 [2024-10-01 15:51:37.431724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.121 [2024-10-01 15:51:37.431732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:47032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.121 [2024-10-01 15:51:37.431737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:34:09.121 [2024-10-01 15:51:37.431744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:47040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.121 [2024-10-01 15:51:37.431750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.121 [2024-10-01 15:51:37.431757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:47048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.122 [2024-10-01 15:51:37.431762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.122 [2024-10-01 15:51:37.431768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:47056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.122 [2024-10-01 15:51:37.431773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.122 [2024-10-01 15:51:37.431779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:47064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.122 [2024-10-01 15:51:37.431785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.122 [2024-10-01 15:51:37.431792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:47072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.122 [2024-10-01 15:51:37.431797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.122 [2024-10-01 15:51:37.431803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:47080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.122 [2024-10-01 15:51:37.431808] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.122 [2024-10-01 15:51:37.431815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:47088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.122 [2024-10-01 15:51:37.431819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.122 [2024-10-01 15:51:37.431826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:47096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.122 [2024-10-01 15:51:37.431832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.122 [2024-10-01 15:51:37.431839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:47104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.122 [2024-10-01 15:51:37.431843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.122 [2024-10-01 15:51:37.431851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:47112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.122 [2024-10-01 15:51:37.431856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.122 [2024-10-01 15:51:37.431864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:47120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.122 [2024-10-01 15:51:37.431869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.122 [2024-10-01 15:51:37.431878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 
lba:47128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.122 [2024-10-01 15:51:37.431884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.122 [2024-10-01 15:51:37.431892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:47136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.122 [2024-10-01 15:51:37.431902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.122 [2024-10-01 15:51:37.431910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:47144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.122 [2024-10-01 15:51:37.431916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.122 [2024-10-01 15:51:37.431923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:47152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.122 [2024-10-01 15:51:37.431928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.122 [2024-10-01 15:51:37.431936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:47160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.122 [2024-10-01 15:51:37.431943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.122 [2024-10-01 15:51:37.431950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:47168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.122 [2024-10-01 15:51:37.431956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.122 
[2024-10-01 15:51:37.431963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:47176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.122 [2024-10-01 15:51:37.431968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.122 [2024-10-01 15:51:37.431978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:47184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.122 [2024-10-01 15:51:37.431984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.122 [2024-10-01 15:51:37.431992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:47192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.122 [2024-10-01 15:51:37.431998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.122 [2024-10-01 15:51:37.432006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:47200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.122 [2024-10-01 15:51:37.432011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.122 [2024-10-01 15:51:37.432019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:47208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.122 [2024-10-01 15:51:37.432025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.122 [2024-10-01 15:51:37.432032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:47216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.122 [2024-10-01 15:51:37.432037] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.122 [2024-10-01 15:51:37.432044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:47224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.122 [2024-10-01 15:51:37.432049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.122 [2024-10-01 15:51:37.432055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:47232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.122 [2024-10-01 15:51:37.432061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.122 [2024-10-01 15:51:37.432068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:47240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.122 [2024-10-01 15:51:37.432074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.122 [2024-10-01 15:51:37.432081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:47248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.122 [2024-10-01 15:51:37.432086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.122 [2024-10-01 15:51:37.432093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:47256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.122 [2024-10-01 15:51:37.432098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.122 [2024-10-01 15:51:37.432104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 
lba:47264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.122 [2024-10-01 15:51:37.432109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.122 [2024-10-01 15:51:37.432116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:47272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.122 [2024-10-01 15:51:37.432121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.122 [2024-10-01 15:51:37.432127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:47280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.122 [2024-10-01 15:51:37.432133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.122 [2024-10-01 15:51:37.432139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:47288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.122 [2024-10-01 15:51:37.432145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.122 [2024-10-01 15:51:37.432152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:47296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.122 [2024-10-01 15:51:37.432157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.122 [2024-10-01 15:51:37.432164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:47304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.122 [2024-10-01 15:51:37.432170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.122 
[2024-10-01 15:51:37.432177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:47312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.122 [2024-10-01 15:51:37.432182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.122 [2024-10-01 15:51:37.432188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:47320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.122 [2024-10-01 15:51:37.432194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.122 [2024-10-01 15:51:37.432201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:47328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.122 [2024-10-01 15:51:37.432207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.122 [2024-10-01 15:51:37.432214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:47336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.122 [2024-10-01 15:51:37.432219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.122 [2024-10-01 15:51:37.432226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:47344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.122 [2024-10-01 15:51:37.432232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.122 [2024-10-01 15:51:37.432238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:47352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.122 [2024-10-01 15:51:37.432243] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.123 [2024-10-01 15:51:37.432250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:47360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.123 [2024-10-01 15:51:37.432255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.123 [2024-10-01 15:51:37.432262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:47368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.123 [2024-10-01 15:51:37.432268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.123 [2024-10-01 15:51:37.432274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:47376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.123 [2024-10-01 15:51:37.432280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.123 [2024-10-01 15:51:37.432286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:47384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.123 [2024-10-01 15:51:37.432291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.123 [2024-10-01 15:51:37.432298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:47392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.123 [2024-10-01 15:51:37.432303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.123 [2024-10-01 15:51:37.432309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 
lba:47400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.123 [2024-10-01 15:51:37.432315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.123 [2024-10-01 15:51:37.432322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:47744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.123 [2024-10-01 15:51:37.432328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.123 [2024-10-01 15:51:37.432334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:47752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.123 [2024-10-01 15:51:37.432340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.123 [2024-10-01 15:51:37.432346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:47760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.123 [2024-10-01 15:51:37.432351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.123 [2024-10-01 15:51:37.432358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:47768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.123 [2024-10-01 15:51:37.432364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.123 [2024-10-01 15:51:37.432370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:47776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.123 [2024-10-01 15:51:37.432377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.123 
[2024-10-01 15:51:37.432383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:47784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.123 [2024-10-01 15:51:37.432388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.123 [2024-10-01 15:51:37.432395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:47792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.123 [2024-10-01 15:51:37.432400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.123 [2024-10-01 15:51:37.432406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:47408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.123 [2024-10-01 15:51:37.432411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.123 [2024-10-01 15:51:37.432418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:47416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.123 [2024-10-01 15:51:37.432424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.123 [2024-10-01 15:51:37.432430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:47424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.123 [2024-10-01 15:51:37.432439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.123 [2024-10-01 15:51:37.432446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:47432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.123 [2024-10-01 15:51:37.432451] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.123 [2024-10-01 15:51:37.432457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:47440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.123 [2024-10-01 15:51:37.432462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.123 [2024-10-01 15:51:37.432469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:47448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.123 [2024-10-01 15:51:37.432475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.123 [2024-10-01 15:51:37.432482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:47456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.123 [2024-10-01 15:51:37.432487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.123 [2024-10-01 15:51:37.432494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:47464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.123 [2024-10-01 15:51:37.432499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.123 [2024-10-01 15:51:37.432505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:47472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.123 [2024-10-01 15:51:37.432510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.123 [2024-10-01 15:51:37.432517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 
lba:47480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.123 [2024-10-01 15:51:37.432523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.123 [2024-10-01 15:51:37.432530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:47488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.123 [2024-10-01 15:51:37.432537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.123 [2024-10-01 15:51:37.432543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:47496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.123 [2024-10-01 15:51:37.432548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.123 [2024-10-01 15:51:37.432554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.123 [2024-10-01 15:51:37.432559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.123 [2024-10-01 15:51:37.432566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:47512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.123 [2024-10-01 15:51:37.432572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.123 [2024-10-01 15:51:37.432579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:47520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.123 [2024-10-01 15:51:37.432585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.123 
[2024-10-01 15:51:37.432592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:47528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.123 [2024-10-01 15:51:37.432596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.123 [2024-10-01 15:51:37.432603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:47800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.123 [2024-10-01 15:51:37.432608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.123 [2024-10-01 15:51:37.432614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:47808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.123 [2024-10-01 15:51:37.432619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.123 [2024-10-01 15:51:37.432626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:47816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.123 [2024-10-01 15:51:37.432631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.123 [2024-10-01 15:51:37.432638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:47824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.123 [2024-10-01 15:51:37.432643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.123 [2024-10-01 15:51:37.432649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:47832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.123 [2024-10-01 15:51:37.432654] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.123 [2024-10-01 15:51:37.432661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:47840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.123 [2024-10-01 15:51:37.432665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.123 [2024-10-01 15:51:37.432672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:47848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.123 [2024-10-01 15:51:37.432677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.123 [2024-10-01 15:51:37.432694] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:09.123 [2024-10-01 15:51:37.432699] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:09.123 [2024-10-01 15:51:37.432704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47856 len:8 PRP1 0x0 PRP2 0x0 00:34:09.123 [2024-10-01 15:51:37.432710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.123 [2024-10-01 15:51:37.432740] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8d7530 was disconnected and freed. reset controller. 
00:34:09.123 [2024-10-01 15:51:37.432749] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:34:09.123 [2024-10-01 15:51:37.432765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:09.124 [2024-10-01 15:51:37.432770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.124 [2024-10-01 15:51:37.432777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:09.124 [2024-10-01 15:51:37.432782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.124 [2024-10-01 15:51:37.432789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:09.124 [2024-10-01 15:51:37.432795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.124 [2024-10-01 15:51:37.432801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:09.124 [2024-10-01 15:51:37.432806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.124 [2024-10-01 15:51:37.432811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:34:09.124 [2024-10-01 15:51:37.435267] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:09.124 [2024-10-01 15:51:37.435287] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b4dc0 (9): Bad file descriptor 00:34:09.124 [2024-10-01 15:51:37.467421] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:34:09.124 11435.20 IOPS, 44.67 MiB/s 11692.33 IOPS, 45.67 MiB/s 11856.29 IOPS, 46.31 MiB/s 11973.38 IOPS, 46.77 MiB/s 12082.89 IOPS, 47.20 MiB/s [2024-10-01 15:51:41.818269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:115000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.124 [2024-10-01 15:51:41.818300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.124 [2024-10-01 15:51:41.818313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:115008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.124 [2024-10-01 15:51:41.818320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.124 [2024-10-01 15:51:41.818328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:115016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.124 [2024-10-01 15:51:41.818333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.124 [2024-10-01 15:51:41.818340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:115024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.124 [2024-10-01 15:51:41.818346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.124 
[2024-10-01 15:51:41.818356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:115032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.124 [2024-10-01 15:51:41.818362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.124 [2024-10-01 15:51:41.818369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:115040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.124 [2024-10-01 15:51:41.818374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.124 [2024-10-01 15:51:41.818381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:115048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.124 [2024-10-01 15:51:41.818386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.124 [2024-10-01 15:51:41.818393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:115056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.124 [2024-10-01 15:51:41.818398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.124 [2024-10-01 15:51:41.818404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:115064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.124 [2024-10-01 15:51:41.818410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.124 [2024-10-01 15:51:41.818417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:115072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.124 [2024-10-01 15:51:41.818423] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.124 [2024-10-01 15:51:41.818430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:115080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.124 [2024-10-01 15:51:41.818435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.124 [2024-10-01 15:51:41.818441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:115088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.124 [2024-10-01 15:51:41.818446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.124 [2024-10-01 15:51:41.818453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:115096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.124 [2024-10-01 15:51:41.818458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.124 [2024-10-01 15:51:41.818465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:115104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.124 [2024-10-01 15:51:41.818470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.124 [2024-10-01 15:51:41.818477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:115112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.124 [2024-10-01 15:51:41.818482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.124 [2024-10-01 15:51:41.818488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 
lba:115120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.124 [2024-10-01 15:51:41.818493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.124 [2024-10-01 15:51:41.818500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:115128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.124 [2024-10-01 15:51:41.818506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.124 [2024-10-01 15:51:41.818513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:115136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.124 [2024-10-01 15:51:41.818518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.124 [2024-10-01 15:51:41.818525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:115144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.124 [2024-10-01 15:51:41.818530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.124 [2024-10-01 15:51:41.818536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:115152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.124 [2024-10-01 15:51:41.818542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.124 [2024-10-01 15:51:41.818548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:115160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.124 [2024-10-01 15:51:41.818553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:34:09.124 [2024-10-01 15:51:41.818559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:115168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.124 [2024-10-01 15:51:41.818565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.124 [2024-10-01 15:51:41.818571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:115176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.124 [2024-10-01 15:51:41.818577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.124 [2024-10-01 15:51:41.818584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:115184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.124 [2024-10-01 15:51:41.818589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.124 [2024-10-01 15:51:41.818595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:115192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.124 [2024-10-01 15:51:41.818601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.124 [2024-10-01 15:51:41.818607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:115200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.124 [2024-10-01 15:51:41.818613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.124 [2024-10-01 15:51:41.818619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:115208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.124 [2024-10-01 15:51:41.818625] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.124 [2024-10-01 15:51:41.818631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:115216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.124 [2024-10-01 15:51:41.818636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.124 [2024-10-01 15:51:41.818642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:115224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.124 [2024-10-01 15:51:41.818647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.124 [2024-10-01 15:51:41.818659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:115232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.124 [2024-10-01 15:51:41.818664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.124 [2024-10-01 15:51:41.818671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:115240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.124 [2024-10-01 15:51:41.818677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.124 [2024-10-01 15:51:41.818683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:115248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.124 [2024-10-01 15:51:41.818688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.124 [2024-10-01 15:51:41.818695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 
lba:115256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.125 [2024-10-01 15:51:41.818700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.125 [2024-10-01 15:51:41.818708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:115264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.125 [2024-10-01 15:51:41.818713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.125 [2024-10-01 15:51:41.818720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:115272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.125 [2024-10-01 15:51:41.818726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.125 [2024-10-01 15:51:41.818732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:115280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.125 [2024-10-01 15:51:41.818737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.125 [2024-10-01 15:51:41.818744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:115288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.125 [2024-10-01 15:51:41.818749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.125 [2024-10-01 15:51:41.818756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:115296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.125 [2024-10-01 15:51:41.818761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.125 
[2024-10-01 15:51:41.818768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:115304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.125 [2024-10-01 15:51:41.818773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.125 [2024-10-01 15:51:41.818781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:115312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.125 [2024-10-01 15:51:41.818786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.125 [2024-10-01 15:51:41.818793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.125 [2024-10-01 15:51:41.818800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.125 [2024-10-01 15:51:41.818809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:115328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.125 [2024-10-01 15:51:41.818816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.125 [2024-10-01 15:51:41.818823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:115336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.125 [2024-10-01 15:51:41.818828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.125 [2024-10-01 15:51:41.818834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:115344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.125 [2024-10-01 15:51:41.818840] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.125 [2024-10-01 15:51:41.818846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:115352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.125 [2024-10-01 15:51:41.818852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.125 [2024-10-01 15:51:41.818860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:115360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.125 [2024-10-01 15:51:41.818866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.125 [2024-10-01 15:51:41.818873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:115368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.125 [2024-10-01 15:51:41.818879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.125 [2024-10-01 15:51:41.818885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:115376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.125 [2024-10-01 15:51:41.818896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.125 [2024-10-01 15:51:41.818903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:115384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.125 [2024-10-01 15:51:41.818909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.125 [2024-10-01 15:51:41.818916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 
lba:115392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.125 [2024-10-01 15:51:41.818921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.125 [2024-10-01 15:51:41.818929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:115400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.125 [2024-10-01 15:51:41.818935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.125 [2024-10-01 15:51:41.818943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:115408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.125 [2024-10-01 15:51:41.818948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.125 [2024-10-01 15:51:41.818955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:115416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.125 [2024-10-01 15:51:41.818960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.125 [2024-10-01 15:51:41.818967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:115424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.125 [2024-10-01 15:51:41.818972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.125 [2024-10-01 15:51:41.818978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:115432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.125 [2024-10-01 15:51:41.818984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.125 
[2024-10-01 15:51:41.818991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:115440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.125 [2024-10-01 15:51:41.818996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.125 [2024-10-01 15:51:41.819003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:115448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.125 [2024-10-01 15:51:41.819008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.125 [2024-10-01 15:51:41.819014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:115456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.125 [2024-10-01 15:51:41.819020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.125 [2024-10-01 15:51:41.819026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:115464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.125 [2024-10-01 15:51:41.819031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.125 [2024-10-01 15:51:41.819037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:115472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.125 [2024-10-01 15:51:41.819042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.125 [2024-10-01 15:51:41.819048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:115480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.125 [2024-10-01 15:51:41.819053] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.125 [2024-10-01 15:51:41.819060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:115488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.125 [2024-10-01 15:51:41.819065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.125 [2024-10-01 15:51:41.819072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:115496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.125 [2024-10-01 15:51:41.819077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.125 [2024-10-01 15:51:41.819084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:115504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.125 [2024-10-01 15:51:41.819089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.125 [2024-10-01 15:51:41.819096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:115512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.125 [2024-10-01 15:51:41.819101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.125 [2024-10-01 15:51:41.819108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:115520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.125 [2024-10-01 15:51:41.819113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.125 [2024-10-01 15:51:41.819119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 
lba:115528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.125 [2024-10-01 15:51:41.819125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.126 [2024-10-01 15:51:41.819134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:115536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.126 [2024-10-01 15:51:41.819138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.126 [2024-10-01 15:51:41.819145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:115544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.126 [2024-10-01 15:51:41.819150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.126 [2024-10-01 15:51:41.819157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:115552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.126 [2024-10-01 15:51:41.819162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.126 [2024-10-01 15:51:41.819169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:115560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.126 [2024-10-01 15:51:41.819174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.126 [2024-10-01 15:51:41.819180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:115568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.126 [2024-10-01 15:51:41.819186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.126 
[2024-10-01 15:51:41.819192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:115576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.126 [2024-10-01 15:51:41.819198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.126 [2024-10-01 15:51:41.819204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:115584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.126 [2024-10-01 15:51:41.819209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.126 [2024-10-01 15:51:41.819216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:115592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.126 [2024-10-01 15:51:41.819221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.126 [2024-10-01 15:51:41.819227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:115600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.126 [2024-10-01 15:51:41.819232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.126 [2024-10-01 15:51:41.819239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:115608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.126 [2024-10-01 15:51:41.819245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.126 [2024-10-01 15:51:41.819251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:115616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.126 [2024-10-01 15:51:41.819256] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.126 [2024-10-01 15:51:41.819263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:115624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.126 [2024-10-01 15:51:41.819268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.126 [2024-10-01 15:51:41.819274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:115632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.126 [2024-10-01 15:51:41.819280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.126 [2024-10-01 15:51:41.819286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:115640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.126 [2024-10-01 15:51:41.819291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.126 [2024-10-01 15:51:41.819298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:115648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.126 [2024-10-01 15:51:41.819303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.126 [2024-10-01 15:51:41.819310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:115656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.126 [2024-10-01 15:51:41.819315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.126 [2024-10-01 15:51:41.819321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 
lba:115664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.126 [2024-10-01 15:51:41.819327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.126 [2024-10-01 15:51:41.819333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:115672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.126 [2024-10-01 15:51:41.819338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.126 [2024-10-01 15:51:41.819348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:115680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.126 [2024-10-01 15:51:41.819353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.126 [2024-10-01 15:51:41.819359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:115688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.126 [2024-10-01 15:51:41.819365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.126 [2024-10-01 15:51:41.819371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:115696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.126 [2024-10-01 15:51:41.819376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.126 [2024-10-01 15:51:41.819382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:115704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.126 [2024-10-01 15:51:41.819387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.126 
[2024-10-01 15:51:41.819393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:115712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.126 [2024-10-01 15:51:41.819398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.126 [2024-10-01 15:51:41.819405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:115720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.126 [2024-10-01 15:51:41.819411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.126 [2024-10-01 15:51:41.819429] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:09.126 [2024-10-01 15:51:41.819435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115728 len:8 PRP1 0x0 PRP2 0x0 00:34:09.126 [2024-10-01 15:51:41.819441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.126 [2024-10-01 15:51:41.819449] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:09.126 [2024-10-01 15:51:41.819454] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:09.126 [2024-10-01 15:51:41.819459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115736 len:8 PRP1 0x0 PRP2 0x0 00:34:09.126 [2024-10-01 15:51:41.819464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.126 [2024-10-01 15:51:41.819470] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:09.126 [2024-10-01 15:51:41.819474] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:34:09.126 [2024-10-01 15:51:41.819478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115744 len:8 PRP1 0x0 PRP2 0x0 00:34:09.126 [2024-10-01 15:51:41.819484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.126 [2024-10-01 15:51:41.819490] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:09.126 [2024-10-01 15:51:41.819494] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:09.126 [2024-10-01 15:51:41.819498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115752 len:8 PRP1 0x0 PRP2 0x0 00:34:09.126 [2024-10-01 15:51:41.819504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.126 [2024-10-01 15:51:41.819510] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:09.126 [2024-10-01 15:51:41.819514] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:09.126 [2024-10-01 15:51:41.819518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115760 len:8 PRP1 0x0 PRP2 0x0 00:34:09.126 [2024-10-01 15:51:41.819524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.126 [2024-10-01 15:51:41.819529] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:09.126 [2024-10-01 15:51:41.819533] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:09.126 [2024-10-01 15:51:41.819538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115768 len:8 PRP1 0x0 PRP2 0x0 00:34:09.126 [2024-10-01 15:51:41.819543] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.126 [2024-10-01 15:51:41.819548] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:09.126 [2024-10-01 15:51:41.819552] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:09.126 [2024-10-01 15:51:41.819556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115776 len:8 PRP1 0x0 PRP2 0x0 00:34:09.126 [2024-10-01 15:51:41.819562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.126 [2024-10-01 15:51:41.819567] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:09.126 [2024-10-01 15:51:41.819571] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:09.126 [2024-10-01 15:51:41.819576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115784 len:8 PRP1 0x0 PRP2 0x0 00:34:09.126 [2024-10-01 15:51:41.819581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.126 [2024-10-01 15:51:41.819586] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:09.126 [2024-10-01 15:51:41.819590] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:09.126 [2024-10-01 15:51:41.819595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115792 len:8 PRP1 0x0 PRP2 0x0 00:34:09.127 [2024-10-01 15:51:41.819601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.127 [2024-10-01 15:51:41.819606] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:09.127 [2024-10-01 15:51:41.819609] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:09.127 [2024-10-01 15:51:41.819614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115800 len:8 PRP1 0x0 PRP2 0x0 00:34:09.127 [2024-10-01 15:51:41.819619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.127 [2024-10-01 15:51:41.819625] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:09.127 [2024-10-01 15:51:41.819629] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:09.127 [2024-10-01 15:51:41.819635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115808 len:8 PRP1 0x0 PRP2 0x0 00:34:09.127 [2024-10-01 15:51:41.819640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.127 [2024-10-01 15:51:41.819646] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:09.127 [2024-10-01 15:51:41.819649] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:09.127 [2024-10-01 15:51:41.819654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115816 len:8 PRP1 0x0 PRP2 0x0 00:34:09.127 [2024-10-01 15:51:41.819659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.127 [2024-10-01 15:51:41.819664] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:09.127 [2024-10-01 15:51:41.819668] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:09.127 [2024-10-01 15:51:41.819673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115824 len:8 PRP1 0x0 PRP2 
0x0 00:34:09.127 [2024-10-01 15:51:41.819678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.127 [2024-10-01 15:51:41.819684] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:09.127 [2024-10-01 15:51:41.819688] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:09.127 [2024-10-01 15:51:41.819692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115832 len:8 PRP1 0x0 PRP2 0x0 00:34:09.127 [2024-10-01 15:51:41.819696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.127 [2024-10-01 15:51:41.819702] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:09.127 [2024-10-01 15:51:41.819706] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:09.127 [2024-10-01 15:51:41.819710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115840 len:8 PRP1 0x0 PRP2 0x0 00:34:09.127 [2024-10-01 15:51:41.819715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.127 [2024-10-01 15:51:41.819720] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:09.127 [2024-10-01 15:51:41.819724] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:09.127 [2024-10-01 15:51:41.819729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115848 len:8 PRP1 0x0 PRP2 0x0 00:34:09.127 [2024-10-01 15:51:41.819734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.127 [2024-10-01 15:51:41.819739] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:09.127 [2024-10-01 15:51:41.819744] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:09.127 [2024-10-01 15:51:41.819748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115856 len:8 PRP1 0x0 PRP2 0x0 00:34:09.127 [2024-10-01 15:51:41.819753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.127 [2024-10-01 15:51:41.819758] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:09.127 [2024-10-01 15:51:41.819762] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:09.127 [2024-10-01 15:51:41.819767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115864 len:8 PRP1 0x0 PRP2 0x0 00:34:09.127 [2024-10-01 15:51:41.819773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.127 [2024-10-01 15:51:41.819778] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:09.127 [2024-10-01 15:51:41.819783] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:09.127 [2024-10-01 15:51:41.819787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115872 len:8 PRP1 0x0 PRP2 0x0 00:34:09.127 [2024-10-01 15:51:41.819791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.127 [2024-10-01 15:51:41.819798] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:09.127 [2024-10-01 15:51:41.819802] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:09.127 [2024-10-01 15:51:41.819806] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115880 len:8 PRP1 0x0 PRP2 0x0 00:34:09.127 [2024-10-01 15:51:41.819812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.127 [2024-10-01 15:51:41.819817] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:09.127 [2024-10-01 15:51:41.819822] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:09.127 [2024-10-01 15:51:41.819826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115888 len:8 PRP1 0x0 PRP2 0x0 00:34:09.127 [2024-10-01 15:51:41.819831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.127 [2024-10-01 15:51:41.819837] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:09.127 [2024-10-01 15:51:41.819841] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:09.127 [2024-10-01 15:51:41.819846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115896 len:8 PRP1 0x0 PRP2 0x0 00:34:09.127 [2024-10-01 15:51:41.819851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.127 [2024-10-01 15:51:41.819856] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:09.127 [2024-10-01 15:51:41.819860] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:09.127 [2024-10-01 15:51:41.819865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115904 len:8 PRP1 0x0 PRP2 0x0 00:34:09.127 [2024-10-01 15:51:41.819870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.127 [2024-10-01 15:51:41.819876] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:09.127 [2024-10-01 15:51:41.819880] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:09.127 [2024-10-01 15:51:41.819884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115912 len:8 PRP1 0x0 PRP2 0x0 00:34:09.127 [2024-10-01 15:51:41.819890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.127 [2024-10-01 15:51:41.819900] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:09.127 [2024-10-01 15:51:41.819904] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:09.127 [2024-10-01 15:51:41.819908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115920 len:8 PRP1 0x0 PRP2 0x0 00:34:09.127 [2024-10-01 15:51:41.819913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.127 [2024-10-01 15:51:41.819918] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:09.127 [2024-10-01 15:51:41.819922] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:09.127 [2024-10-01 15:51:41.819926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115928 len:8 PRP1 0x0 PRP2 0x0 00:34:09.127 [2024-10-01 15:51:41.819931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.127 [2024-10-01 15:51:41.819936] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:09.127 [2024-10-01 15:51:41.819941] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:09.127 [2024-10-01 15:51:41.819945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115936 len:8 PRP1 0x0 PRP2 0x0 00:34:09.127 [2024-10-01 15:51:41.819950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.127 [2024-10-01 15:51:41.819956] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:09.127 [2024-10-01 15:51:41.819960] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:09.127 [2024-10-01 15:51:41.819964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115944 len:8 PRP1 0x0 PRP2 0x0 00:34:09.127 [2024-10-01 15:51:41.819969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.127 [2024-10-01 15:51:41.819974] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:09.127 [2024-10-01 15:51:41.819978] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:09.127 [2024-10-01 15:51:41.819982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115952 len:8 PRP1 0x0 PRP2 0x0 00:34:09.127 [2024-10-01 15:51:41.819987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.127 [2024-10-01 15:51:41.830552] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:09.127 [2024-10-01 15:51:41.830574] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:09.127 [2024-10-01 15:51:41.830582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115960 len:8 PRP1 0x0 PRP2 0x0 
00:34:09.127 [2024-10-01 15:51:41.830591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.127 [2024-10-01 15:51:41.830596] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:09.127 [2024-10-01 15:51:41.830600] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:09.127 [2024-10-01 15:51:41.830605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115968 len:8 PRP1 0x0 PRP2 0x0 00:34:09.127 [2024-10-01 15:51:41.830610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.127 [2024-10-01 15:51:41.830616] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:09.128 [2024-10-01 15:51:41.830620] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:09.128 [2024-10-01 15:51:41.830624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115976 len:8 PRP1 0x0 PRP2 0x0 00:34:09.128 [2024-10-01 15:51:41.830634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.128 [2024-10-01 15:51:41.830640] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:09.128 [2024-10-01 15:51:41.830643] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:09.128 [2024-10-01 15:51:41.830648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115984 len:8 PRP1 0x0 PRP2 0x0 00:34:09.128 [2024-10-01 15:51:41.830654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.128 [2024-10-01 15:51:41.830659] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:09.128 [2024-10-01 15:51:41.830663] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:09.128 [2024-10-01 15:51:41.830668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115992 len:8 PRP1 0x0 PRP2 0x0 00:34:09.128 [2024-10-01 15:51:41.830673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.128 [2024-10-01 15:51:41.830678] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:09.128 [2024-10-01 15:51:41.830682] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:09.128 [2024-10-01 15:51:41.830686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:116000 len:8 PRP1 0x0 PRP2 0x0 00:34:09.128 [2024-10-01 15:51:41.830692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.128 [2024-10-01 15:51:41.830698] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:09.128 [2024-10-01 15:51:41.830702] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:09.128 [2024-10-01 15:51:41.830707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:116008 len:8 PRP1 0x0 PRP2 0x0 00:34:09.128 [2024-10-01 15:51:41.830712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.128 [2024-10-01 15:51:41.830718] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:09.128 [2024-10-01 15:51:41.830722] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:09.128 [2024-10-01 15:51:41.830727] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:116016 len:8 PRP1 0x0 PRP2 0x0 00:34:09.128 [2024-10-01 15:51:41.830732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.128 [2024-10-01 15:51:41.830766] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8d7f50 was disconnected and freed. reset controller. 00:34:09.128 [2024-10-01 15:51:41.830775] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:34:09.128 [2024-10-01 15:51:41.830799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:09.128 [2024-10-01 15:51:41.830806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.128 [2024-10-01 15:51:41.830813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:09.128 [2024-10-01 15:51:41.830819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.128 [2024-10-01 15:51:41.830825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:09.128 [2024-10-01 15:51:41.830830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.128 [2024-10-01 15:51:41.830837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:09.128 [2024-10-01 15:51:41.830843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.128 
[2024-10-01 15:51:41.830849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:09.128 [2024-10-01 15:51:41.830872] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b4dc0 (9): Bad file descriptor 00:34:09.128 [2024-10-01 15:51:41.833302] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:09.128 [2024-10-01 15:51:41.951252] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:34:09.128 12004.80 IOPS, 46.89 MiB/s 12093.55 IOPS, 47.24 MiB/s 12174.92 IOPS, 47.56 MiB/s 12228.08 IOPS, 47.77 MiB/s 12293.71 IOPS, 48.02 MiB/s 12341.73 IOPS, 48.21 MiB/s 00:34:09.128 Latency(us) 00:34:09.128 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:09.128 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:09.128 Verification LBA range: start 0x0 length 0x4000 00:34:09.128 NVMe0n1 : 15.01 12340.72 48.21 626.73 0.00 9849.21 539.31 17367.04 00:34:09.128 =================================================================================================================== 00:34:09.128 Total : 12340.72 48.21 626.73 0.00 9849.21 539.31 17367.04 00:34:09.128 Received shutdown signal, test time was about 15.000000 seconds 00:34:09.128 00:34:09.128 Latency(us) 00:34:09.128 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:09.128 =================================================================================================================== 00:34:09.128 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:09.128 15:51:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:34:09.128 15:51:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:34:09.128 15:51:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:34:09.128 15:51:47 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3337362 00:34:09.128 15:51:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3337362 /var/tmp/bdevperf.sock 00:34:09.128 15:51:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:34:09.128 15:51:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 3337362 ']' 00:34:09.128 15:51:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:09.128 15:51:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:09.128 15:51:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:09.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:34:09.128 15:51:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:09.128 15:51:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:09.389 15:51:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:09.389 15:51:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:34:09.389 15:51:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:09.650 [2024-10-01 15:51:48.990137] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:09.650 15:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:34:09.910 [2024-10-01 15:51:49.174641] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:34:09.910 15:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:10.231 NVMe0n1 00:34:10.231 15:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:10.587 00:34:10.587 15:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 00:34:10.875 00:34:10.875 15:51:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:10.875 15:51:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:34:10.875 15:51:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:11.164 15:51:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:34:14.462 15:51:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:14.462 15:51:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:34:14.462 15:51:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3338428 00:34:14.462 15:51:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:14.462 15:51:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3338428 00:34:15.404 { 00:34:15.404 "results": [ 00:34:15.404 { 00:34:15.404 "job": "NVMe0n1", 00:34:15.404 "core_mask": "0x1", 00:34:15.404 "workload": "verify", 00:34:15.404 "status": "finished", 00:34:15.404 "verify_range": { 00:34:15.404 "start": 0, 00:34:15.404 "length": 16384 00:34:15.404 }, 00:34:15.404 "queue_depth": 128, 00:34:15.404 "io_size": 4096, 00:34:15.404 "runtime": 1.004429, 00:34:15.404 "iops": 12844.113421655487, 00:34:15.404 "mibps": 50.17231805334175, 00:34:15.404 "io_failed": 0, 00:34:15.404 "io_timeout": 0, 00:34:15.404 "avg_latency_us": 9920.396537736093, 00:34:15.404 
"min_latency_us": 1078.6133333333332, 00:34:15.404 "max_latency_us": 11195.733333333334 00:34:15.404 } 00:34:15.404 ], 00:34:15.404 "core_count": 1 00:34:15.404 } 00:34:15.404 15:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:15.404 [2024-10-01 15:51:48.030901] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:34:15.404 [2024-10-01 15:51:48.030961] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3337362 ] 00:34:15.404 [2024-10-01 15:51:48.061724] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:34:15.404 [2024-10-01 15:51:48.109489] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:15.404 [2024-10-01 15:51:48.136131] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:34:15.404 [2024-10-01 15:51:50.456524] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:34:15.404 [2024-10-01 15:51:50.456572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:15.404 [2024-10-01 15:51:50.456582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:15.404 [2024-10-01 15:51:50.456589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:15.404 [2024-10-01 15:51:50.456595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:15.404 
[2024-10-01 15:51:50.456601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:15.404 [2024-10-01 15:51:50.456606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:15.404 [2024-10-01 15:51:50.456612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:15.404 [2024-10-01 15:51:50.456617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:15.404 [2024-10-01 15:51:50.456623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.404 [2024-10-01 15:51:50.456647] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.404 [2024-10-01 15:51:50.456660] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc75dc0 (9): Bad file descriptor 00:34:15.404 [2024-10-01 15:51:50.467885] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:34:15.404 Running I/O for 1 seconds... 
00:34:15.404 12750.00 IOPS, 49.80 MiB/s 00:34:15.404 Latency(us) 00:34:15.404 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:15.404 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:15.404 Verification LBA range: start 0x0 length 0x4000 00:34:15.404 NVMe0n1 : 1.00 12844.11 50.17 0.00 0.00 9920.40 1078.61 11195.73 00:34:15.404 =================================================================================================================== 00:34:15.404 Total : 12844.11 50.17 0.00 0.00 9920.40 1078.61 11195.73 00:34:15.404 15:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:15.404 15:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:34:15.665 15:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:15.926 15:51:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:15.926 15:51:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:34:15.926 15:51:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:16.186 15:51:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:34:19.485 15:51:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_controllers 00:34:19.485 15:51:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:34:19.485 15:51:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3337362 00:34:19.485 15:51:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 3337362 ']' 00:34:19.485 15:51:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3337362 00:34:19.485 15:51:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:34:19.485 15:51:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:19.485 15:51:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3337362 00:34:19.485 15:51:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:19.485 15:51:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:19.485 15:51:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3337362' 00:34:19.485 killing process with pid 3337362 00:34:19.485 15:51:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3337362 00:34:19.485 15:51:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3337362 00:34:19.485 15:51:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:34:19.485 15:51:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:19.745 15:51:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:34:19.745 15:51:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:19.745 15:51:59 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:34:19.745 15:51:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # nvmfcleanup 00:34:19.745 15:51:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:34:19.745 15:51:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:19.745 15:51:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:34:19.745 15:51:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:19.745 15:51:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:19.745 rmmod nvme_tcp 00:34:19.745 rmmod nvme_fabrics 00:34:19.745 rmmod nvme_keyring 00:34:19.745 15:51:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:19.745 15:51:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:34:19.745 15:51:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:34:19.745 15:51:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@513 -- # '[' -n 3333970 ']' 00:34:19.745 15:51:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # killprocess 3333970 00:34:19.745 15:51:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 3333970 ']' 00:34:19.745 15:51:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3333970 00:34:19.745 15:51:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:34:19.745 15:51:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:19.745 15:51:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3333970 00:34:20.005 15:51:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:20.005 15:51:59 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:34:20.005 15:51:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3333970' 00:34:20.005 killing process with pid 3333970 00:34:20.005 15:51:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3333970 00:34:20.005 15:51:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3333970 00:34:20.005 15:51:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:34:20.005 15:51:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:34:20.005 15:51:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:34:20.005 15:51:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:34:20.005 15:51:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # iptables-save 00:34:20.005 15:51:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:34:20.005 15:51:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # iptables-restore 00:34:20.005 15:51:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:20.005 15:51:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:20.005 15:51:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:20.005 15:51:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:20.005 15:51:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:22.553 00:34:22.553 real 0m40.060s 00:34:22.553 user 2m1.746s 00:34:22.553 sys 0m8.980s 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:22.553 ************************************ 00:34:22.553 END TEST nvmf_failover 00:34:22.553 ************************************ 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.553 ************************************ 00:34:22.553 START TEST nvmf_host_discovery 00:34:22.553 ************************************ 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:34:22.553 * Looking for test storage... 
00:34:22.553 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:34:22.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.553 --rc genhtml_branch_coverage=1 00:34:22.553 --rc genhtml_function_coverage=1 00:34:22.553 --rc 
genhtml_legend=1 00:34:22.553 --rc geninfo_all_blocks=1 00:34:22.553 --rc geninfo_unexecuted_blocks=1 00:34:22.553 00:34:22.553 ' 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:34:22.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.553 --rc genhtml_branch_coverage=1 00:34:22.553 --rc genhtml_function_coverage=1 00:34:22.553 --rc genhtml_legend=1 00:34:22.553 --rc geninfo_all_blocks=1 00:34:22.553 --rc geninfo_unexecuted_blocks=1 00:34:22.553 00:34:22.553 ' 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:34:22.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.553 --rc genhtml_branch_coverage=1 00:34:22.553 --rc genhtml_function_coverage=1 00:34:22.553 --rc genhtml_legend=1 00:34:22.553 --rc geninfo_all_blocks=1 00:34:22.553 --rc geninfo_unexecuted_blocks=1 00:34:22.553 00:34:22.553 ' 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:34:22.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.553 --rc genhtml_branch_coverage=1 00:34:22.553 --rc genhtml_function_coverage=1 00:34:22.553 --rc genhtml_legend=1 00:34:22.553 --rc geninfo_all_blocks=1 00:34:22.553 --rc geninfo_unexecuted_blocks=1 00:34:22.553 00:34:22.553 ' 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:22.553 15:52:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:22.553 15:52:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:22.553 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:22.554 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.554 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.554 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.554 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:34:22.554 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.554 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:34:22.554 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:22.554 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:22.554 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:22.554 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:22.554 15:52:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:22.554 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:22.554 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:22.554 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:22.554 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:22.554 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:22.554 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:34:22.554 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:34:22.554 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:34:22.554 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:34:22.554 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:34:22.554 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:34:22.554 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:34:22.554 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:34:22.554 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:22.554 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # prepare_net_devs 00:34:22.554 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@434 -- # local -g is_hw=no 00:34:22.554 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # remove_spdk_ns 
00:34:22.554 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:22.554 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:22.554 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:22.554 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:34:22.554 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:34:22.554 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:34:22.554 15:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:30.697 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:30.697 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:34:30.697 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:30.697 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:30.697 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:30.697 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:30.697 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:30.697 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:34:30.697 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:30.697 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:34:30.697 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:34:30.697 
15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:34:30.697 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:34:30.698 15:52:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:30.698 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:30.698 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ up == up ]] 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:30.698 Found net devices under 0000:31:00.0: cvl_0_0 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ up == up ]] 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:30.698 Found net devices under 0000:31:00.1: cvl_0_1 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # is_hw=yes 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:30.698 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:30.698 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.657 ms 00:34:30.698 00:34:30.698 --- 10.0.0.2 ping statistics --- 00:34:30.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:30.698 rtt min/avg/max/mdev = 0.657/0.657/0.657/0.000 ms 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:30.698 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:30.698 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:34:30.698 00:34:30.698 --- 10.0.0.1 ping statistics --- 00:34:30.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:30.698 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # return 0 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:34:30.698 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:30.699 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:34:30.699 15:52:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:34:30.699 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:34:30.699 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:34:30.699 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:30.699 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:30.699 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # nvmfpid=3343786 00:34:30.699 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # waitforlisten 3343786 00:34:30.699 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:34:30.699 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 3343786 ']' 00:34:30.699 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:30.699 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:30.699 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:30.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:30.699 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:30.699 15:52:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:30.699 [2024-10-01 15:52:09.517415] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
00:34:30.699 [2024-10-01 15:52:09.517482] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:30.699 [2024-10-01 15:52:09.559809] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:34:30.699 [2024-10-01 15:52:09.609382] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:30.699 [2024-10-01 15:52:09.655707] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:30.699 [2024-10-01 15:52:09.655756] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:30.699 [2024-10-01 15:52:09.655765] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:30.699 [2024-10-01 15:52:09.655773] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:30.699 [2024-10-01 15:52:09.655779] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:30.699 [2024-10-01 15:52:09.655807] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:34:30.961 15:52:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:30.961 15:52:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:34:30.961 15:52:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:34:30.961 15:52:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:30.961 15:52:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:30.961 15:52:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:30.961 15:52:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:30.961 15:52:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.961 15:52:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:30.961 [2024-10-01 15:52:10.386853] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:30.961 15:52:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.961 15:52:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:34:30.961 15:52:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.961 15:52:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:30.961 [2024-10-01 15:52:10.399143] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:34:30.961 15:52:10 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.961 15:52:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:34:30.962 15:52:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.962 15:52:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:30.962 null0 00:34:30.962 15:52:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.223 15:52:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:34:31.223 15:52:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.223 15:52:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:31.223 null1 00:34:31.223 15:52:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.223 15:52:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:34:31.223 15:52:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.223 15:52:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:31.223 15:52:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.223 15:52:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3343999 00:34:31.223 15:52:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3343999 /tmp/host.sock 00:34:31.223 15:52:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:34:31.223 15:52:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@831 -- # '[' -z 3343999 ']' 00:34:31.223 15:52:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:34:31.223 15:52:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:31.223 15:52:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:34:31.223 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:34:31.223 15:52:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:31.223 15:52:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:31.223 [2024-10-01 15:52:10.494949] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:34:31.223 [2024-10-01 15:52:10.495016] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3343999 ] 00:34:31.223 [2024-10-01 15:52:10.529887] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
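For orientation, the target-side and host-side RPCs traced above reduce to a short command outline. This is a non-runnable sketch reconstructed from the log (it presumes a live SPDK `nvmf_tgt` on the default socket plus a second host instance on `/tmp/host.sock`; addresses, ports, and NQNs are copied from the trace):

```
# Target side: TCP transport, discovery listener, two null bdevs
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
rpc.py bdev_null_create null0 1000 512
rpc.py bdev_null_create null1 1000 512

# Host side: start discovery against the target, then poll state
rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
rpc.py -s /tmp/host.sock bdev_nvme_get_controllers
rpc.py -s /tmp/host.sock bdev_get_bdevs
```

The assertions in the trace then compare the controller and bdev names (`nvme0`, `nvme0n1`, `nvme0n2`) against expectations as subsystems, namespaces, and listeners are added.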
00:34:31.223 [2024-10-01 15:52:10.580151] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:31.223 [2024-10-01 15:52:10.629707] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:34:32.164 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:32.164 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:34:32.164 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:32.164 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:34:32.164 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.164 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:32.164 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.164 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:34:32.164 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.164 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:32.165 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.165 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:34:32.165 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:34:32.165 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:32.165 
15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:32.165 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.165 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:32.165 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:32.165 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:32.165 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.165 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:34:32.165 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:34:32.165 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:32.165 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:32.165 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.165 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:32.165 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:32.165 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:32.165 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.165 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:34:32.165 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:34:32.165 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.165 15:52:11 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:32.165 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.165 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:34:32.165 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:32.165 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:32.165 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.165 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:32.165 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:32.165 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:32.165 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.165 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:34:32.165 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:34:32.165 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:32.165 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:32.165 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.165 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:32.165 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:32.165 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:32.165 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.165 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:34:32.165 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:34:32.165 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.165 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:32.165 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.165 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:34:32.165 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:32.165 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:32.165 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:32.165 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.165 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:32.165 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:32.165 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.165 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:34:32.165 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:34:32.165 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:32.165 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.165 15:52:11 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:32.165 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:32.165 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:32.165 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:32.425 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.425 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:34:32.425 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:32.425 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.425 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:32.425 [2024-10-01 15:52:11.670426] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:32.425 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.425 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:34:32.425 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:32.425 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:32.425 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.425 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:32.425 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:32.425 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # xargs 00:34:32.425 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.425 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:34:32.425 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:34:32.425 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:32.425 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:32.425 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.425 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:32.425 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:32.425 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:32.425 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.425 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:34:32.425 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:34:32.425 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:32.425 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:32.425 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:32.425 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:32.425 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@916 -- # (( max-- )) 00:34:32.425 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:32.425 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:34:32.425 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:34:32.425 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:32.425 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.425 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:32.425 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.426 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:32.426 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:34:32.426 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:34:32.426 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:32.426 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:34:32.426 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.426 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:32.426 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.426 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # 
waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:32.426 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:32.426 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:32.426 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:32.426 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:32.426 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:34:32.426 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:32.426 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:32.426 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.426 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:32.426 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:32.426 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:32.426 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.426 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:34:32.426 15:52:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:34:32.996 [2024-10-01 15:52:12.386095] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:32.996 [2024-10-01 15:52:12.386134] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:32.996 
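The repeated `(( max-- ))` / `eval` / `sleep 1` lines in the trace come from the harness's polling helper. A minimal self-contained sketch of that pattern follows (the real helper lives in `autotest_common.sh`; the `bump` function here is a hypothetical stand-in for an RPC-backed check like `get_subsystem_names`):

```shell
#!/usr/bin/env bash
# Hedged sketch of the waitforcondition polling loop seen in the trace:
# retry a condition up to a fixed number of times, one second apart.
waitforcondition() {
    local cond=$1
    local max=10
    while (( max-- )); do
        # eval lets the caller pass a compound condition string,
        # e.g. 'get_notification_count && ((notification_count == expected_count))'
        if eval "$cond"; then
            return 0
        fi
        sleep 1
    done
    return 1
}

# Illustrative use: a stand-in check that succeeds on the third attempt.
count=0
bump() { count=$((count + 1)); (( count == 3 )); }
waitforcondition bump && echo "condition met after $count tries"
# → condition met after 3 tries
```

In the log, the condition strings wrap RPC queries (`bdev_nvme_get_controllers`, `bdev_get_bdevs`) so the test tolerates the delay between adding a listener or namespace on the target and the host's discovery poller observing it.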
[2024-10-01 15:52:12.386150] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:33.256 [2024-10-01 15:52:12.473392] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:34:33.256 [2024-10-01 15:52:12.537305] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:33.256 [2024-10-01 15:52:12.537338] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:33.516 15:52:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:33.516 15:52:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:33.516 15:52:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:34:33.516 15:52:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:33.516 15:52:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:33.516 15:52:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.516 15:52:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:33.516 15:52:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:33.516 15:52:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:33.516 15:52:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.516 15:52:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:33.516 15:52:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # return 0 00:34:33.516 15:52:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:34:33.516 15:52:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:34:33.516 15:52:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:33.516 15:52:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:33.517 15:52:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:34:33.517 15:52:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:34:33.517 15:52:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:33.517 15:52:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:33.517 15:52:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.517 15:52:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:33.517 15:52:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:33.517 15:52:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:33.517 15:52:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.777 15:52:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:34:33.777 15:52:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:33.777 15:52:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 
00:34:33.777 15:52:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:34:33.777 15:52:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:33.777 15:52:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:33.777 15:52:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:34:33.777 15:52:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:34:33.777 15:52:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:33.777 15:52:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.777 15:52:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:33.777 15:52:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:33.777 15:52:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:33.777 15:52:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:33.777 15:52:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.777 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:34:33.777 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:33.777 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:34:33.777 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:34:33.777 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:33.777 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:33.777 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:33.777 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:33.777 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:33.778 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:34:33.778 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:34:33.778 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:33.778 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.778 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:33.778 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.778 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:34:33.778 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:34:33.778 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:34:33.778 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:33.778 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:34:33.778 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.778 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:33.778 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.778 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:33.778 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:33.778 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:33.778 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:33.778 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:33.778 
15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:34:33.778 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:33.778 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:33.778 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.778 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:33.778 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:33.778 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:34.040 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.040 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:34.040 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:34.040 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:34:34.040 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:34:34.040 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:34.040 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:34.040 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:34.040 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:34.040 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:34.040 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:34:34.040 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:34:34.040 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:34.040 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.040 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:34.040 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.040 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:34:34.040 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:34.040 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:34:34.040 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:34.040 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:34:34.040 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.040 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:34.040 [2024-10-01 15:52:13.434881] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:34.040 [2024-10-01 15:52:13.435312] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:34:34.040 [2024-10-01 15:52:13.435341] 
bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:34.040 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.040 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:34.040 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:34.040 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:34.040 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:34.040 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:34.040 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:34:34.040 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:34.040 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:34.040 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.040 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:34.040 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:34.040 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:34.040 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.040 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:34.040 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # return 0 00:34:34.040 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:34.040 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:34.040 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:34.040 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:34.040 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:34.300 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:34:34.300 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:34.300 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:34.300 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.300 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:34.300 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:34.300 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:34.300 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.300 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:34.300 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:34.300 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ 
"$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:34:34.300 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:34:34.300 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:34.301 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:34.301 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:34:34.301 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:34:34.301 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:34.301 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:34.301 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.301 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:34.301 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:34.301 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:34.301 [2024-10-01 15:52:13.563151] bdev_nvme.c:7086:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:34:34.301 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.301 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:34:34.301 15:52:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # 
sleep 1 00:34:34.561 [2024-10-01 15:52:13.868661] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:34.561 [2024-10-01 15:52:13.868680] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:34.561 [2024-10-01 15:52:13.868686] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:35.501 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:35.501 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:34:35.501 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:34:35.501 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:35.501 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:35.501 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.501 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:35.501 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:35.501 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:35.501 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.501 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:34:35.501 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:35.501 15:52:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:34:35.501 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:35.501 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:35.501 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:35.501 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:35.501 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:35.501 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:35.501 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:34:35.501 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:35.501 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:35.501 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.501 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:35.501 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.501 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:35.501 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:35.501 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:34:35.501 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:35.501 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:35.501 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.501 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:35.501 [2024-10-01 15:52:14.707038] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:34:35.501 [2024-10-01 15:52:14.707061] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:35.501 [2024-10-01 15:52:14.711276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:35.501 [2024-10-01 15:52:14.711290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.501 [2024-10-01 15:52:14.711297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:34:35.501 [2024-10-01 15:52:14.711303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.501 [2024-10-01 15:52:14.711308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:35.501 [2024-10-01 15:52:14.711314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.501 [2024-10-01 15:52:14.711319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:35.501 [2024-10-01 15:52:14.711324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.501 [2024-10-01 15:52:14.711329] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c78990 is same with the state(6) to be set 00:34:35.501 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.501 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:35.501 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:35.501 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:35.501 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:35.501 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:35.501 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:34:35.501 15:52:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:35.501 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:35.501 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.501 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:35.501 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:35.501 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:35.501 [2024-10-01 15:52:14.721292] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c78990 (9): Bad file descriptor 00:34:35.501 [2024-10-01 15:52:14.731328] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:35.501 [2024-10-01 15:52:14.731630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.501 [2024-10-01 15:52:14.731641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78990 with addr=10.0.0.2, port=4420 00:34:35.501 [2024-10-01 15:52:14.731647] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c78990 is same with the state(6) to be set 00:34:35.501 [2024-10-01 15:52:14.731656] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c78990 (9): Bad file descriptor 00:34:35.501 [2024-10-01 15:52:14.731664] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:35.501 [2024-10-01 15:52:14.731669] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:35.501 [2024-10-01 15:52:14.731675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed 
state. 00:34:35.501 [2024-10-01 15:52:14.731683] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:35.501 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.501 [2024-10-01 15:52:14.741375] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:35.501 [2024-10-01 15:52:14.741672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.501 [2024-10-01 15:52:14.741681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78990 with addr=10.0.0.2, port=4420 00:34:35.501 [2024-10-01 15:52:14.741686] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c78990 is same with the state(6) to be set 00:34:35.501 [2024-10-01 15:52:14.741694] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c78990 (9): Bad file descriptor 00:34:35.501 [2024-10-01 15:52:14.741701] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:35.501 [2024-10-01 15:52:14.741706] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:35.501 [2024-10-01 15:52:14.741711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:35.501 [2024-10-01 15:52:14.741719] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:35.501 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:35.501 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:35.501 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:35.501 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:35.501 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:35.501 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:35.501 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:35.501 [2024-10-01 15:52:14.751419] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:35.501 [2024-10-01 15:52:14.751774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.501 [2024-10-01 15:52:14.751784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78990 with addr=10.0.0.2, port=4420 00:34:35.501 [2024-10-01 15:52:14.751790] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c78990 is same with the state(6) to be set 00:34:35.501 [2024-10-01 15:52:14.751798] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c78990 (9): Bad file descriptor 00:34:35.501 [2024-10-01 15:52:14.751806] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:35.501 [2024-10-01 15:52:14.751810] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization 
failed 00:34:35.501 [2024-10-01 15:52:14.751815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:35.501 [2024-10-01 15:52:14.751823] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:35.501 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:34:35.501 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:35.501 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:35.502 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.502 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:35.502 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:35.502 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:35.502 [2024-10-01 15:52:14.761465] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:35.502 [2024-10-01 15:52:14.761641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.502 [2024-10-01 15:52:14.761653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78990 with addr=10.0.0.2, port=4420 00:34:35.502 [2024-10-01 15:52:14.761659] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c78990 is same with the state(6) to be set 00:34:35.502 [2024-10-01 15:52:14.761668] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c78990 (9): Bad file descriptor 00:34:35.502 [2024-10-01 15:52:14.761675] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:35.502 [2024-10-01 15:52:14.761680] 
nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:35.502 [2024-10-01 15:52:14.761685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:35.502 [2024-10-01 15:52:14.761693] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:35.502 [2024-10-01 15:52:14.771514] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:35.502 [2024-10-01 15:52:14.771855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.502 [2024-10-01 15:52:14.771864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78990 with addr=10.0.0.2, port=4420 00:34:35.502 [2024-10-01 15:52:14.771870] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c78990 is same with the state(6) to be set 00:34:35.502 [2024-10-01 15:52:14.771878] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c78990 (9): Bad file descriptor 00:34:35.502 [2024-10-01 15:52:14.771886] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:35.502 [2024-10-01 15:52:14.771898] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:35.502 [2024-10-01 15:52:14.771904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:35.502 [2024-10-01 15:52:14.771911] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:35.502 [2024-10-01 15:52:14.781559] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:35.502 [2024-10-01 15:52:14.781767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.502 [2024-10-01 15:52:14.781776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78990 with addr=10.0.0.2, port=4420 00:34:35.502 [2024-10-01 15:52:14.781781] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c78990 is same with the state(6) to be set 00:34:35.502 [2024-10-01 15:52:14.781789] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c78990 (9): Bad file descriptor 00:34:35.502 [2024-10-01 15:52:14.781796] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:35.502 [2024-10-01 15:52:14.781801] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:35.502 [2024-10-01 15:52:14.781806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:35.502 [2024-10-01 15:52:14.781814] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:35.502 [2024-10-01 15:52:14.791601] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:35.502 [2024-10-01 15:52:14.791907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.502 [2024-10-01 15:52:14.791915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78990 with addr=10.0.0.2, port=4420 00:34:35.502 [2024-10-01 15:52:14.791920] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c78990 is same with the state(6) to be set 00:34:35.502 [2024-10-01 15:52:14.791928] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c78990 (9): Bad file descriptor 00:34:35.502 [2024-10-01 15:52:14.791935] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:35.502 [2024-10-01 15:52:14.791939] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:35.502 [2024-10-01 15:52:14.791944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:35.502 [2024-10-01 15:52:14.791951] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:35.502 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.502 [2024-10-01 15:52:14.795558] bdev_nvme.c:6949:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:34:35.502 [2024-10-01 15:52:14.795571] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:35.502 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:35.502 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:35.502 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:34:35.502 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:34:35.502 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:35.502 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:35.502 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:34:35.502 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:34:35.502 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:35.502 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:35.502 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:35.502 15:52:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.502 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:35.502 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:35.502 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.502 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:34:35.502 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:35.502 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:34:35.502 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:35.502 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:35.502 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:35.502 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:35.502 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:35.502 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:35.502 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:34:35.502 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:35.502 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:35.502 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.502 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:35.502 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.502 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:35.502 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:35.502 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:34:35.502 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:35.502 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:34:35.502 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.502 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:35.502 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.502 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:34:35.502 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:34:35.502 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:35.502 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:35.502 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:34:35.502 15:52:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:34:35.502 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:35.502 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:35.502 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.502 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:35.502 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:35.502 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:35.502 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.762 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:34:35.762 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:35.762 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:34:35.762 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:34:35.762 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:35.762 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:35.762 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:34:35.762 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:34:35.762 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:35.762 
15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:35.762 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.762 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:35.762 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:35.762 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:35.762 15:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.762 15:52:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:34:35.762 15:52:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:35.762 15:52:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:34:35.762 15:52:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:34:35.762 15:52:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:35.762 15:52:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:35.762 15:52:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:35.762 15:52:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:35.763 15:52:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:35.763 15:52:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:34:35.763 15:52:15 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:35.763 15:52:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.763 15:52:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:35.763 15:52:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:35.763 15:52:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.763 15:52:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:34:35.763 15:52:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:34:35.763 15:52:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:34:35.763 15:52:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:35.763 15:52:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:35.763 15:52:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.763 15:52:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:36.701 [2024-10-01 15:52:16.127011] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:36.701 [2024-10-01 15:52:16.127025] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:36.701 [2024-10-01 15:52:16.127035] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:36.961 [2024-10-01 15:52:16.215288] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:34:36.961 [2024-10-01 15:52:16.278814] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:36.961 [2024-10-01 15:52:16.278837] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:36.961 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.961 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:36.961 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:34:36.961 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:36.961 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:36.961 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:36.961 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:36.961 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:36.961 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:36.961 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.961 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set 
+x 00:34:36.961 request: 00:34:36.961 { 00:34:36.961 "name": "nvme", 00:34:36.961 "trtype": "tcp", 00:34:36.961 "traddr": "10.0.0.2", 00:34:36.961 "adrfam": "ipv4", 00:34:36.961 "trsvcid": "8009", 00:34:36.961 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:36.961 "wait_for_attach": true, 00:34:36.961 "method": "bdev_nvme_start_discovery", 00:34:36.961 "req_id": 1 00:34:36.961 } 00:34:36.961 Got JSON-RPC error response 00:34:36.961 response: 00:34:36.961 { 00:34:36.961 "code": -17, 00:34:36.961 "message": "File exists" 00:34:36.961 } 00:34:36.961 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:36.961 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:34:36.961 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:36.961 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:36.961 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:36.961 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:34:36.961 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:36.961 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:36.961 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.961 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:36.961 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:36.961 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:36.961 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.961 15:52:16 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:34:36.961 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:34:36.961 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:36.961 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:36.961 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.961 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:36.961 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:36.961 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:36.961 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.961 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:36.961 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:36.961 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:34:36.962 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:36.962 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:36.962 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:36.962 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:36.962 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:36.962 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:36.962 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.962 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:37.222 request: 00:34:37.222 { 00:34:37.222 "name": "nvme_second", 00:34:37.222 "trtype": "tcp", 00:34:37.222 "traddr": "10.0.0.2", 00:34:37.222 "adrfam": "ipv4", 00:34:37.222 "trsvcid": "8009", 00:34:37.222 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:37.222 "wait_for_attach": true, 00:34:37.222 "method": "bdev_nvme_start_discovery", 00:34:37.222 "req_id": 1 00:34:37.222 } 00:34:37.222 Got JSON-RPC error response 00:34:37.222 response: 00:34:37.222 { 00:34:37.222 "code": -17, 00:34:37.222 "message": "File exists" 00:34:37.222 } 00:34:37.222 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:37.222 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:34:37.222 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:37.222 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:37.222 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:37.222 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:34:37.222 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:37.222 
15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:37.222 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.222 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:37.222 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:37.222 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:37.222 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.222 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:34:37.222 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:34:37.222 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:37.222 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:37.222 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.222 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:37.222 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:37.222 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:37.222 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.222 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:37.222 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:37.222 15:52:16 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:34:37.222 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:37.222 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:37.222 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:37.222 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:37.222 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:37.222 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:37.222 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.222 15:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:38.163 [2024-10-01 15:52:17.538139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.163 [2024-10-01 15:52:17.538163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c81a00 with addr=10.0.0.2, port=8010 00:34:38.163 [2024-10-01 15:52:17.538172] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:38.163 [2024-10-01 15:52:17.538178] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:38.163 [2024-10-01 15:52:17.538182] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:34:39.104 [2024-10-01 15:52:18.540591] 
posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.104 [2024-10-01 15:52:18.540611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c81a00 with addr=10.0.0.2, port=8010 00:34:39.104 [2024-10-01 15:52:18.540620] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:39.104 [2024-10-01 15:52:18.540625] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:39.104 [2024-10-01 15:52:18.540634] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:34:40.487 [2024-10-01 15:52:19.542595] bdev_nvme.c:7205:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:34:40.487 request: 00:34:40.487 { 00:34:40.487 "name": "nvme_second", 00:34:40.487 "trtype": "tcp", 00:34:40.487 "traddr": "10.0.0.2", 00:34:40.487 "adrfam": "ipv4", 00:34:40.487 "trsvcid": "8010", 00:34:40.487 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:40.487 "wait_for_attach": false, 00:34:40.487 "attach_timeout_ms": 3000, 00:34:40.487 "method": "bdev_nvme_start_discovery", 00:34:40.487 "req_id": 1 00:34:40.487 } 00:34:40.487 Got JSON-RPC error response 00:34:40.487 response: 00:34:40.487 { 00:34:40.487 "code": -110, 00:34:40.487 "message": "Connection timed out" 00:34:40.487 } 00:34:40.487 15:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:40.487 15:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:34:40.487 15:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:40.487 15:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:40.487 15:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:40.487 15:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 
-- # get_discovery_ctrlrs 00:34:40.487 15:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:40.487 15:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:40.487 15:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.488 15:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:40.488 15:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:40.488 15:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:40.488 15:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.488 15:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:34:40.488 15:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:34:40.488 15:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3343999 00:34:40.488 15:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:34:40.488 15:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # nvmfcleanup 00:34:40.488 15:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:34:40.488 15:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:40.488 15:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:34:40.488 15:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:40.488 15:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:40.488 rmmod nvme_tcp 00:34:40.488 rmmod nvme_fabrics 00:34:40.488 rmmod nvme_keyring 00:34:40.488 15:52:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:40.488 15:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:34:40.488 15:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:34:40.488 15:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@513 -- # '[' -n 3343786 ']' 00:34:40.488 15:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # killprocess 3343786 00:34:40.488 15:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 3343786 ']' 00:34:40.488 15:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 3343786 00:34:40.488 15:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:34:40.488 15:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:40.488 15:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3343786 00:34:40.488 15:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:40.488 15:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:34:40.488 15:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3343786' 00:34:40.488 killing process with pid 3343786 00:34:40.488 15:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 3343786 00:34:40.488 15:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 3343786 00:34:40.488 15:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:34:40.488 15:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:34:40.488 15:52:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:34:40.488 15:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:34:40.488 15:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # iptables-save 00:34:40.488 15:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:34:40.488 15:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # iptables-restore 00:34:40.488 15:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:40.488 15:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:40.488 15:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:40.488 15:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:40.488 15:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:43.034 15:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:43.034 00:34:43.034 real 0m20.404s 00:34:43.034 user 0m23.313s 00:34:43.034 sys 0m7.401s 00:34:43.034 15:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:43.034 15:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:43.034 ************************************ 00:34:43.034 END TEST nvmf_host_discovery 00:34:43.034 ************************************ 00:34:43.034 15:52:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:34:43.034 15:52:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:43.034 
15:52:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:43.034 15:52:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.034 ************************************ 00:34:43.034 START TEST nvmf_host_multipath_status 00:34:43.034 ************************************ 00:34:43.034 15:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:34:43.034 * Looking for test storage... 00:34:43.034 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:43.034 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:34:43.034 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lcov --version 00:34:43.034 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:34:43.034 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:34:43.034 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:43.034 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:43.034 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:43.034 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:34:43.034 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:34:43.034 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:34:43.034 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:34:43.034 15:52:22 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:34:43.034 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:34:43.034 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:34:43.034 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:43.034 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:34:43.034 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:34:43.034 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:43.034 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:43.034 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:34:43.034 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:34:43.034 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:43.034 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:34:43.034 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:34:43.034 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:34:43.034 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:34:43.034 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:43.034 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:34:43.034 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:34:43.034 
15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:34:43.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:43.035 --rc genhtml_branch_coverage=1 00:34:43.035 --rc genhtml_function_coverage=1 00:34:43.035 --rc genhtml_legend=1 00:34:43.035 --rc geninfo_all_blocks=1 00:34:43.035 --rc geninfo_unexecuted_blocks=1 00:34:43.035 00:34:43.035 ' 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:34:43.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:43.035 --rc genhtml_branch_coverage=1 00:34:43.035 --rc genhtml_function_coverage=1 00:34:43.035 --rc genhtml_legend=1 00:34:43.035 --rc geninfo_all_blocks=1 00:34:43.035 --rc geninfo_unexecuted_blocks=1 00:34:43.035 00:34:43.035 ' 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:34:43.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:43.035 --rc genhtml_branch_coverage=1 00:34:43.035 --rc genhtml_function_coverage=1 00:34:43.035 --rc genhtml_legend=1 00:34:43.035 --rc geninfo_all_blocks=1 00:34:43.035 --rc geninfo_unexecuted_blocks=1 00:34:43.035 00:34:43.035 ' 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:34:43.035 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:34:43.035 --rc genhtml_branch_coverage=1 00:34:43.035 --rc genhtml_function_coverage=1 00:34:43.035 --rc genhtml_legend=1 00:34:43.035 --rc geninfo_all_blocks=1 00:34:43.035 --rc geninfo_unexecuted_blocks=1 00:34:43.035 00:34:43.035 ' 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:43.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # prepare_net_devs 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@434 -- # local -g is_hw=no 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # remove_spdk_ns 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:34:43.035 15:52:22 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:34:43.035 15:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:51.182 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:34:51.183 
15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:51.183 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:51.183 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ up == up ]] 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:51.183 Found net devices under 0000:31:00.0: cvl_0_0 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:34:51.183 15:52:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ up == up ]] 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:51.183 Found net devices under 0000:31:00.1: cvl_0_1 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # is_hw=yes 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:51.183 15:52:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:51.183 15:52:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:51.183 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:51.183 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms 00:34:51.183 00:34:51.183 --- 10.0.0.2 ping statistics --- 00:34:51.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:51.183 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:51.183 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:51.183 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:34:51.183 00:34:51.183 --- 10.0.0.1 ping statistics --- 00:34:51.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:51.183 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:34:51.183 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:51.184 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # return 0 00:34:51.184 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:34:51.184 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:51.184 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:34:51.184 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:34:51.184 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:51.184 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:34:51.184 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:34:51.184 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:34:51.184 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:34:51.184 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:51.184 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:51.184 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # nvmfpid=3350080 00:34:51.184 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@506 -- # waitforlisten 3350080 00:34:51.184 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:34:51.184 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 3350080 ']' 00:34:51.184 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:51.184 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:51.184 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:51.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:51.184 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:51.184 15:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:51.184 [2024-10-01 15:52:30.022085] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:34:51.184 [2024-10-01 15:52:30.022174] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:51.184 [2024-10-01 15:52:30.066221] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:34:51.184 [2024-10-01 15:52:30.115801] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:51.184 [2024-10-01 15:52:30.163760] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:51.184 [2024-10-01 15:52:30.163819] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:51.184 [2024-10-01 15:52:30.163834] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:51.184 [2024-10-01 15:52:30.163845] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:51.184 [2024-10-01 15:52:30.163853] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:51.184 [2024-10-01 15:52:30.163965] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:34:51.184 [2024-10-01 15:52:30.163966] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:34:51.446 15:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:51.446 15:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:34:51.446 15:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:34:51.446 15:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:51.446 15:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:51.446 15:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:51.446 15:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3350080 00:34:51.446 15:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:51.707 [2024-10-01 15:52:31.045404] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:51.707 15:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:34:51.969 Malloc0 00:34:51.969 15:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:34:52.233 15:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:52.495 15:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:52.495 [2024-10-01 15:52:31.840833] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:52.495 15:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:52.757 [2024-10-01 15:52:32.025280] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:52.757 15:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3350509 00:34:52.757 15:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 
-z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:34:52.757 15:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:52.757 15:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3350509 /var/tmp/bdevperf.sock 00:34:52.757 15:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 3350509 ']' 00:34:52.757 15:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:52.757 15:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:52.757 15:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:52.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:34:52.757 15:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:52.757 15:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:53.700 15:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:53.700 15:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:34:53.700 15:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:34:53.700 15:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:34:54.272 Nvme0n1 00:34:54.272 15:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:34:54.533 Nvme0n1 00:34:54.533 15:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:34:54.533 15:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:34:57.078 15:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:34:57.078 15:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:34:57.078 15:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:57.078 15:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:34:58.017 15:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:34:58.017 15:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:58.017 15:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:58.017 15:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:58.277 15:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:58.277 15:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:58.277 15:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:58.277 15:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:58.277 15:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:58.277 15:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:58.277 15:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:58.277 15:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:58.537 15:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:58.537 15:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:58.537 15:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:58.537 15:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:58.797 15:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:58.797 15:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:58.797 15:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:58.797 15:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:59.060 15:52:38 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:59.060 15:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:59.061 15:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:59.061 15:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:59.061 15:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:59.061 15:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:34:59.061 15:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:59.321 15:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:59.581 15:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:35:00.524 15:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:35:00.524 15:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:00.524 15:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:00.524 15:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:00.785 15:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:00.785 15:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:00.785 15:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:00.785 15:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:00.785 15:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:00.785 15:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:00.785 15:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:00.785 15:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:01.045 15:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:01.045 15:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:01.045 15:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:01.045 15:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:01.305 15:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:01.305 15:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:01.305 15:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:01.305 15:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:01.305 15:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:01.305 15:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:01.305 15:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:01.305 15:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:01.565 15:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:01.565 15:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:35:01.565 15:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:01.825 15:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:35:01.825 15:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:35:03.208 15:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:35:03.208 15:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:03.208 15:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:03.208 15:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:03.208 15:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:03.208 15:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:03.208 15:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:03.208 15:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:03.208 15:52:42 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:03.208 15:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:03.208 15:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:03.208 15:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:03.469 15:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:03.469 15:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:03.469 15:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:03.469 15:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:03.729 15:52:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:03.729 15:52:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:03.729 15:52:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:03.729 15:52:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:03.729 
15:52:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:03.729 15:52:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:03.729 15:52:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:03.729 15:52:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:03.989 15:52:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:03.989 15:52:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:35:03.989 15:52:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:04.250 15:52:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:35:04.510 15:52:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:35:05.450 15:52:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:35:05.450 15:52:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:05.450 15:52:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:05.450 15:52:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:05.711 15:52:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:05.711 15:52:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:05.711 15:52:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:05.711 15:52:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:05.711 15:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:05.711 15:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:05.711 15:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:05.711 15:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:06.003 15:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:06.004 15:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:06.004 15:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:06.004 15:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:06.264 15:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:06.264 15:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:06.264 15:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:06.264 15:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:06.264 15:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:06.264 15:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:35:06.264 15:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:06.264 15:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:06.523 15:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:06.523 15:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:35:06.523 15:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:35:06.783 15:52:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:35:06.783 15:52:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:35:08.165 15:52:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:35:08.165 15:52:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:08.165 15:52:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:08.165 15:52:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:08.165 15:52:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:08.165 15:52:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:08.165 15:52:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:08.165 15:52:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:08.165 15:52:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:08.165 15:52:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:08.165 15:52:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:08.165 15:52:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:08.425 15:52:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:08.425 15:52:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:08.425 15:52:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:08.425 15:52:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:08.686 15:52:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:08.686 15:52:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:35:08.686 15:52:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:08.686 15:52:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:08.686 
15:52:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:08.686 15:52:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:35:08.686 15:52:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:08.686 15:52:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:08.947 15:52:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:08.947 15:52:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:35:08.947 15:52:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:35:09.207 15:52:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:09.207 15:52:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:35:10.591 15:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:35:10.591 15:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:10.591 15:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:10.591 15:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:10.591 15:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:10.591 15:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:10.591 15:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:10.591 15:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:10.591 15:52:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:10.591 15:52:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:10.591 15:52:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:10.591 15:52:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:10.852 15:52:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:10.852 15:52:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:10.852 15:52:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:10.852 15:52:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:11.114 15:52:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:11.114 15:52:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:35:11.114 15:52:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:11.114 15:52:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:11.374 15:52:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:11.374 15:52:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:11.374 15:52:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:11.374 15:52:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:11.374 15:52:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:11.374 15:52:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b 
Nvme0n1 -p active_active 00:35:11.635 15:52:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:35:11.635 15:52:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:35:11.897 15:52:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:11.897 15:52:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:35:13.283 15:52:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:35:13.283 15:52:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:13.283 15:52:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:13.283 15:52:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:13.283 15:52:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:13.283 15:52:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:13.283 15:52:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
00:35:13.283 15:52:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:13.283 15:52:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:13.283 15:52:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:13.283 15:52:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:13.283 15:52:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:13.544 15:52:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:13.544 15:52:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:13.544 15:52:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:13.544 15:52:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:13.804 15:52:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:13.804 15:52:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:13.804 15:52:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
00:35:13.804 15:52:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:13.804 15:52:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:13.804 15:52:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:13.804 15:52:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:13.804 15:52:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:14.065 15:52:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:14.065 15:52:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:35:14.065 15:52:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:14.326 15:52:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:14.326 15:52:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:35:15.710 15:52:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:35:15.710 15:52:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:15.710 15:52:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:15.710 15:52:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:15.710 15:52:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:15.710 15:52:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:15.710 15:52:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:15.710 15:52:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:15.710 15:52:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:15.710 15:52:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:15.710 15:52:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:15.710 15:52:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:15.971 15:52:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:15.971 15:52:55 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:15.971 15:52:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:15.971 15:52:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:16.232 15:52:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:16.232 15:52:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:16.232 15:52:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:16.232 15:52:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:16.494 15:52:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:16.494 15:52:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:16.494 15:52:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:16.494 15:52:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:16.494 15:52:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:16.494 
15:52:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:35:16.494 15:52:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:16.755 15:52:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:35:17.014 15:52:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:35:17.955 15:52:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:35:17.956 15:52:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:17.956 15:52:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:17.956 15:52:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:17.956 15:52:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:17.956 15:52:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:17.956 15:52:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:17.956 15:52:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:18.217 15:52:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:18.217 15:52:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:18.217 15:52:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:18.217 15:52:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:18.477 15:52:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:18.477 15:52:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:18.477 15:52:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:18.477 15:52:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:18.737 15:52:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:18.737 15:52:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:18.737 15:52:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:18.737 15:52:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:18.737 15:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:18.737 15:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:18.737 15:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:18.737 15:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:18.997 15:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:18.997 15:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:35:18.997 15:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:19.258 15:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:35:19.258 15:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:35:20.641 15:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:35:20.641 15:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:20.641 15:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:20.641 15:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:20.641 15:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:20.641 15:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:20.641 15:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:20.641 15:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:20.641 15:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:20.641 15:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:20.641 15:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:20.642 15:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:20.903 15:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:20.903 15:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:20.903 15:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:20.903 15:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:21.163 15:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:21.163 15:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:21.163 15:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:21.163 15:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:21.424 15:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:21.424 15:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:35:21.424 15:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:21.424 15:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:21.424 15:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:21.424 15:53:00 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3350509 00:35:21.424 15:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 3350509 ']' 00:35:21.424 15:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 3350509 00:35:21.424 15:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:35:21.424 15:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:21.424 15:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3350509 00:35:21.689 15:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:35:21.689 15:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:35:21.689 15:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3350509' 00:35:21.689 killing process with pid 3350509 00:35:21.689 15:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 3350509 00:35:21.689 15:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 3350509 00:35:21.689 { 00:35:21.689 "results": [ 00:35:21.689 { 00:35:21.689 "job": "Nvme0n1", 00:35:21.689 "core_mask": "0x4", 00:35:21.689 "workload": "verify", 00:35:21.689 "status": "terminated", 00:35:21.689 "verify_range": { 00:35:21.689 "start": 0, 00:35:21.689 "length": 16384 00:35:21.689 }, 00:35:21.689 "queue_depth": 128, 00:35:21.689 "io_size": 4096, 00:35:21.689 "runtime": 26.802749, 00:35:21.689 "iops": 11896.503601179118, 00:35:21.689 "mibps": 46.47071719210593, 00:35:21.689 "io_failed": 0, 00:35:21.689 "io_timeout": 0, 00:35:21.689 "avg_latency_us": 
10740.380028873786, 00:35:21.689 "min_latency_us": 319.14666666666665, 00:35:21.689 "max_latency_us": 3019898.88 00:35:21.689 } 00:35:21.689 ], 00:35:21.689 "core_count": 1 00:35:21.689 } 00:35:21.689 15:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3350509 00:35:21.689 15:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:35:21.689 [2024-10-01 15:52:32.121676] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:35:21.689 [2024-10-01 15:52:32.121760] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3350509 ] 00:35:21.689 [2024-10-01 15:52:32.156322] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:35:21.689 [2024-10-01 15:52:32.207320] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:21.689 [2024-10-01 15:52:32.253174] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:35:21.689 [2024-10-01 15:52:33.865211] bdev_nvme.c:5605:nvme_bdev_ctrlr_create: *WARNING*: multipath_config: deprecated feature bdev_nvme_attach_controller.multipath configuration mismatch to be removed in v25.01 00:35:21.689 Running I/O for 90 seconds... 
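The terminated-job summary above reports both `iops` and `mibps` for a fixed `io_size`. The two figures are related by `mibps = iops * io_size / 2**20`, and a quick check (using the exact numbers from the results block) confirms the log is internally consistent:

```python
# Values copied from the "results" JSON in the log above.
iops = 11896.503601179118
io_size = 4096                    # bytes per I/O, from "io_size"

# MiB/s follows directly: bytes per second divided by 2**20.
mibps = iops * io_size / 2**20
print(mibps)                      # matches the reported 46.4707...
```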
00:35:21.689 10958.00 IOPS, 42.80 MiB/s 11071.50 IOPS, 43.25 MiB/s 11115.67 IOPS, 43.42 MiB/s 11095.00 IOPS, 43.34 MiB/s 11348.60 IOPS, 44.33 MiB/s 11645.67 IOPS, 45.49 MiB/s 11833.43 IOPS, 46.22 MiB/s 11954.00 IOPS, 46.70 MiB/s 12059.78 IOPS, 47.11 MiB/s 12155.60 IOPS, 47.48 MiB/s 12220.64 IOPS, 47.74 MiB/s [2024-10-01 15:52:46.002585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:124800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.689 [2024-10-01 15:52:46.002619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:21.689 [2024-10-01 15:52:46.002648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:124808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.689 [2024-10-01 15:52:46.002655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:21.689 [2024-10-01 15:52:46.002666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:124816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.689 [2024-10-01 15:52:46.002672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:21.689 [2024-10-01 15:52:46.002682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:124824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.689 [2024-10-01 15:52:46.002687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:21.689 [2024-10-01 15:52:46.002698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:124832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.689 [2024-10-01 15:52:46.002703] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:21.689 [2024-10-01 15:52:46.002714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:124840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.689 [2024-10-01 15:52:46.002719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:21.689 [2024-10-01 15:52:46.002729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:124848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.689 [2024-10-01 15:52:46.002735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:21.689 [2024-10-01 15:52:46.002745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:124856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.689 [2024-10-01 15:52:46.002751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:21.689 [2024-10-01 15:52:46.003054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:124864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.689 [2024-10-01 15:52:46.003069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:21.689 [2024-10-01 15:52:46.003081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:124872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.689 [2024-10-01 15:52:46.003087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:21.689 [2024-10-01 15:52:46.003097] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:124880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.689 [2024-10-01 15:52:46.003103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:21.689 [2024-10-01 15:52:46.003114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:124888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.689 [2024-10-01 15:52:46.003119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:21.689 [2024-10-01 15:52:46.003129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:124896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.689 [2024-10-01 15:52:46.003134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:21.689 [2024-10-01 15:52:46.003145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:124904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.689 [2024-10-01 15:52:46.003150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:21.689 [2024-10-01 15:52:46.003161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:124912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.689 [2024-10-01 15:52:46.003166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:21.689 [2024-10-01 15:52:46.003177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:124920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.689 [2024-10-01 15:52:46.003182] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:21.689 [2024-10-01 15:52:46.003193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:124928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.689 [2024-10-01 15:52:46.003198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:21.689 [2024-10-01 15:52:46.003209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:124936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.689 [2024-10-01 15:52:46.003214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:21.689 [2024-10-01 15:52:46.003225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:124944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.689 [2024-10-01 15:52:46.003230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:35:21.689 [2024-10-01 15:52:46.003241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:124952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.689 [2024-10-01 15:52:46.003246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:21.689 [2024-10-01 15:52:46.003257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:124960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.689 [2024-10-01 15:52:46.003263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:21.689 [2024-10-01 15:52:46.003274] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:124968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.689 [2024-10-01 15:52:46.003279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:21.689 [2024-10-01 15:52:46.003290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:124976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.689 [2024-10-01 15:52:46.003295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:35:21.689 [2024-10-01 15:52:46.003306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:124984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.689 [2024-10-01 15:52:46.003311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:21.690 [2024-10-01 15:52:46.003353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:124992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.690 [2024-10-01 15:52:46.003359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:21.690 [2024-10-01 15:52:46.003371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:125000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.690 [2024-10-01 15:52:46.003377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:21.690 [2024-10-01 15:52:46.003388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:125008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.690 [2024-10-01 15:52:46.003393] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:21.690 [2024-10-01 15:52:46.003405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:125016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.690 [2024-10-01 15:52:46.003410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.690 [2024-10-01 15:52:46.003422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:125024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.690 [2024-10-01 15:52:46.003427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.690 [2024-10-01 15:52:46.003438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:125032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.690 [2024-10-01 15:52:46.003443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:21.690 [2024-10-01 15:52:46.003455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:125040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.690 [2024-10-01 15:52:46.003460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:21.690 [2024-10-01 15:52:46.003471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:124272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.690 [2024-10-01 15:52:46.003477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:21.690 [2024-10-01 15:52:46.003488] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:124280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.690 [2024-10-01 15:52:46.003493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:21.690 [2024-10-01 15:52:46.003506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:124288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.690 [2024-10-01 15:52:46.003511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:21.690 [2024-10-01 15:52:46.003523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:124296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.690 [2024-10-01 15:52:46.003528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:35:21.690 [2024-10-01 15:52:46.003539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:124304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.690 [2024-10-01 15:52:46.003545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:21.690 [2024-10-01 15:52:46.003556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:124312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.690 [2024-10-01 15:52:46.003562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:35:21.690 [2024-10-01 15:52:46.003573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:124320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.690 [2024-10-01 15:52:46.003578] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:35:21.690 [2024-10-01 15:52:46.003590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:124328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.690 [2024-10-01 15:52:46.003595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:35:21.690 [2024-10-01 15:52:46.003606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:124336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.690 [2024-10-01 15:52:46.003611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:35:21.690 [2024-10-01 15:52:46.003623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:124344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.690 [2024-10-01 15:52:46.003628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:35:21.690 [2024-10-01 15:52:46.003639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:124352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.690 [2024-10-01 15:52:46.003645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:35:21.690 [2024-10-01 15:52:46.003656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:124360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.690 [2024-10-01 15:52:46.003661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:21.690 [2024-10-01 15:52:46.003672] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:124368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.690 [2024-10-01 15:52:46.003677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:35:21.690 [2024-10-01 15:52:46.003689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:124376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.690 [2024-10-01 15:52:46.003694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:35:21.690 [2024-10-01 15:52:46.003707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:124384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.690 [2024-10-01 15:52:46.003712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:35:21.690 [2024-10-01 15:52:46.003723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:124392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.690 [2024-10-01 15:52:46.003729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:35:21.690 [2024-10-01 15:52:46.003740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:124400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.690 [2024-10-01 15:52:46.003745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:21.690 [2024-10-01 15:52:46.003757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:124408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.690 [2024-10-01 15:52:46.003762] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:21.690 [2024-10-01 15:52:46.003774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:124416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.690 [2024-10-01 15:52:46.003779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:21.690 [2024-10-01 15:52:46.003791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:124424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.690 [2024-10-01 15:52:46.003796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:21.690 [2024-10-01 15:52:46.003807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:124432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.690 [2024-10-01 15:52:46.003813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:21.690 [2024-10-01 15:52:46.003825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:124440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.690 [2024-10-01 15:52:46.003830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:35:21.690 [2024-10-01 15:52:46.003842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:124448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.690 [2024-10-01 15:52:46.003847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:21.690 [2024-10-01 15:52:46.003858] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:124456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.690 [2024-10-01 15:52:46.003863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:35:21.690 [2024-10-01 15:52:46.003875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:124464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.690 [2024-10-01 15:52:46.003880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:21.690 [2024-10-01 15:52:46.003891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:124472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.690 [2024-10-01 15:52:46.003903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:21.690 [2024-10-01 15:52:46.003916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:124480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.690 [2024-10-01 15:52:46.003921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:35:21.690 [2024-10-01 15:52:46.003933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:124488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.690 [2024-10-01 15:52:46.003938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:21.690 [2024-10-01 15:52:46.003949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:124496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.690 [2024-10-01 15:52:46.003955] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:35:21.690 [2024-10-01 15:52:46.003966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:124504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.690 [2024-10-01 15:52:46.003971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.690 [2024-10-01 15:52:46.003983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:124512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.690 [2024-10-01 15:52:46.003988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:21.690 [2024-10-01 15:52:46.004000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:124520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.690 [2024-10-01 15:52:46.004004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:21.691 [2024-10-01 15:52:46.004016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:124528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.691 [2024-10-01 15:52:46.004021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:21.691 [2024-10-01 15:52:46.004033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:124536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.691 [2024-10-01 15:52:46.004038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:21.691 [2024-10-01 15:52:46.004049] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:124544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.691 [2024-10-01 15:52:46.004055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:21.691 [2024-10-01 15:52:46.004066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:124552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.691 [2024-10-01 15:52:46.004071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:21.691 [2024-10-01 15:52:46.004083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:124560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.691 [2024-10-01 15:52:46.004088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:21.691 [2024-10-01 15:52:46.004099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:124568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.691 [2024-10-01 15:52:46.004104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:35:21.691 [2024-10-01 15:52:46.004116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:124576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.691 [2024-10-01 15:52:46.004122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:35:21.691 [2024-10-01 15:52:46.004134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:124584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.691 [2024-10-01 15:52:46.004139] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:35:21.691 [2024-10-01 15:52:46.004151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:124592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.691 [2024-10-01 15:52:46.004156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:21.691 [2024-10-01 15:52:46.004167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:124600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.691 [2024-10-01 15:52:46.004173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:21.691 [2024-10-01 15:52:46.004184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:124608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.691 [2024-10-01 15:52:46.004189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:21.691 [2024-10-01 15:52:46.004200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:124616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.691 [2024-10-01 15:52:46.004205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:21.691 [2024-10-01 15:52:46.004216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:124624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.691 [2024-10-01 15:52:46.004222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:21.691 [2024-10-01 15:52:46.004233] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:124632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.691 [2024-10-01 15:52:46.004238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:21.691 [2024-10-01 15:52:46.004250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:124640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.691 [2024-10-01 15:52:46.004255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:21.691 [2024-10-01 15:52:46.004266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:124648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.691 [2024-10-01 15:52:46.004271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:21.691 [2024-10-01 15:52:46.004283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:124656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.691 [2024-10-01 15:52:46.004288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:21.691 [2024-10-01 15:52:46.004300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:124664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.691 [2024-10-01 15:52:46.004305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:21.691 [2024-10-01 15:52:46.004402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:124672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.691 [2024-10-01 15:52:46.004409] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:21.691 [2024-10-01 15:52:46.004424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:125048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.691 [2024-10-01 15:52:46.004430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:35:21.691 [2024-10-01 15:52:46.004444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:125056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.691 [2024-10-01 15:52:46.004449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:21.691 [2024-10-01 15:52:46.004463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:125064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.691 [2024-10-01 15:52:46.004469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:35:21.691 [2024-10-01 15:52:46.004483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:125072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.691 [2024-10-01 15:52:46.004489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:21.691 [2024-10-01 15:52:46.004503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:125080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.691 [2024-10-01 15:52:46.004508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:21.691 [2024-10-01 15:52:46.004522] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:125088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.691 [2024-10-01 15:52:46.004527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:21.691 [2024-10-01 15:52:46.004542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:125096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.691 [2024-10-01 15:52:46.004548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:21.691 [2024-10-01 15:52:46.004562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:125104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.691 [2024-10-01 15:52:46.004567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:21.691 [2024-10-01 15:52:46.004581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:125112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.691 [2024-10-01 15:52:46.004587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:21.691 [2024-10-01 15:52:46.004601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:125120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.691 [2024-10-01 15:52:46.004606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:21.691 [2024-10-01 15:52:46.004621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:125128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.691 [2024-10-01 15:52:46.004626] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.691 [2024-10-01 15:52:46.004640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:125136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.691 [2024-10-01 15:52:46.004645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:21.691 [2024-10-01 15:52:46.004664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:125144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.691 [2024-10-01 15:52:46.004669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:21.691 [2024-10-01 15:52:46.004683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:125152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.691 [2024-10-01 15:52:46.004688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:21.691 [2024-10-01 15:52:46.004703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:125160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.691 [2024-10-01 15:52:46.004708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:21.691 [2024-10-01 15:52:46.004722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:125168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.691 [2024-10-01 15:52:46.004727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:21.691 [2024-10-01 15:52:46.004741] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:125176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.691 [2024-10-01 15:52:46.004747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:35:21.691 [2024-10-01 15:52:46.004761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:125184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.691 [2024-10-01 15:52:46.004766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:21.691 [2024-10-01 15:52:46.004780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:125192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.691 [2024-10-01 15:52:46.004785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:21.691 [2024-10-01 15:52:46.004800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:125200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.691 [2024-10-01 15:52:46.004805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:35:21.692 [2024-10-01 15:52:46.004819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:125208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.692 [2024-10-01 15:52:46.004825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:35:21.692 [2024-10-01 15:52:46.004839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:125216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.692 [2024-10-01 15:52:46.004845] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:35:21.692 [2024-10-01 15:52:46.004859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:125224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.692 [2024-10-01 15:52:46.004864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:35:21.692 [2024-10-01 15:52:46.004878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:124680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.692 [2024-10-01 15:52:46.004883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:21.692 [2024-10-01 15:52:46.004903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:124688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.692 [2024-10-01 15:52:46.004908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:21.692 [2024-10-01 15:52:46.004922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:124696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.692 [2024-10-01 15:52:46.004928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:21.692 [2024-10-01 15:52:46.004942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:124704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.692 [2024-10-01 15:52:46.004947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:21.692 [2024-10-01 15:52:46.004962] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:124712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.692 [2024-10-01 15:52:46.004967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:21.692 [2024-10-01 15:52:46.004981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:124720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.692 [2024-10-01 15:52:46.004987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:21.692 [2024-10-01 15:52:46.005001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:124728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.692 [2024-10-01 15:52:46.005006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:21.692 [2024-10-01 15:52:46.005069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:125232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.692 [2024-10-01 15:52:46.005075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:21.692 [2024-10-01 15:52:46.005092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:125240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.692 [2024-10-01 15:52:46.005097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:21.692 [2024-10-01 15:52:46.005112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:125248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.692 [2024-10-01 15:52:46.005117] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:21.692 [2024-10-01 15:52:46.005133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:125256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.692 [2024-10-01 15:52:46.005138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:21.692 [2024-10-01 15:52:46.005155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.692 [2024-10-01 15:52:46.005160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:21.692 [2024-10-01 15:52:46.005176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:125272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.692 [2024-10-01 15:52:46.005182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:21.692 [2024-10-01 15:52:46.005199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:125280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.692 [2024-10-01 15:52:46.005204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:21.692 [2024-10-01 15:52:46.005220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:124736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.692 [2024-10-01 15:52:46.005226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:21.692 [2024-10-01 15:52:46.005242] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:124744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.692 [2024-10-01 15:52:46.005247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:21.692 [2024-10-01 15:52:46.005263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:124752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.692 [2024-10-01 15:52:46.005268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:21.692 [2024-10-01 15:52:46.005284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:124760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.692 [2024-10-01 15:52:46.005289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:21.692 [2024-10-01 15:52:46.005305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:124768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.692 [2024-10-01 15:52:46.005310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:21.692 [2024-10-01 15:52:46.005326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:124776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.692 [2024-10-01 15:52:46.005331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.692 [2024-10-01 15:52:46.005347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:124784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.692 [2024-10-01 15:52:46.005352] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:21.692 [2024-10-01 15:52:46.005368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:124792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.692 [2024-10-01 15:52:46.005373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:21.692 12217.17 IOPS, 47.72 MiB/s 11277.38 IOPS, 44.05 MiB/s 10471.86 IOPS, 40.91 MiB/s 9825.07 IOPS, 38.38 MiB/s 10004.25 IOPS, 39.08 MiB/s 10169.76 IOPS, 39.73 MiB/s 10528.78 IOPS, 41.13 MiB/s 10869.47 IOPS, 42.46 MiB/s 11082.75 IOPS, 43.29 MiB/s 11161.14 IOPS, 43.60 MiB/s 11236.27 IOPS, 43.89 MiB/s 11460.17 IOPS, 44.77 MiB/s 11690.58 IOPS, 45.67 MiB/s [2024-10-01 15:52:58.663085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:94336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.692 [2024-10-01 15:52:58.663122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:21.692 [2024-10-01 15:52:58.663152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:94368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.692 [2024-10-01 15:52:58.663159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:21.692 [2024-10-01 15:52:58.663309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:94408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.692 [2024-10-01 15:52:58.663319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:21.692 [2024-10-01 15:52:58.664357] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:94416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.692 [2024-10-01 15:52:58.664371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:21.692 [2024-10-01 15:52:58.664489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:94432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.692 [2024-10-01 15:52:58.664496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:21.692 [2024-10-01 15:52:58.664507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:94456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.692 [2024-10-01 15:52:58.664512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:21.692 [2024-10-01 15:52:58.664523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:94472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.692 [2024-10-01 15:52:58.664528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:21.692 [2024-10-01 15:52:58.664538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:94488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.692 [2024-10-01 15:52:58.664543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:21.692 [2024-10-01 15:52:58.664554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:94504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.692 [2024-10-01 15:52:58.664559] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:21.692 [2024-10-01 15:52:58.664569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:94520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.692 [2024-10-01 15:52:58.664575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:21.692 [2024-10-01 15:52:58.664585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:94536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.692 [2024-10-01 15:52:58.664590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:21.692 [2024-10-01 15:52:58.664600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:94552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.692 [2024-10-01 15:52:58.664605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:21.692 [2024-10-01 15:52:58.664616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:94568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.693 [2024-10-01 15:52:58.664621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:21.693 [2024-10-01 15:52:58.664631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:94584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.693 [2024-10-01 15:52:58.664637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:21.693 [2024-10-01 15:52:58.664647] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:94440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.693 [2024-10-01 15:52:58.664656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:21.693 [2024-10-01 15:52:58.665371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:94608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.693 [2024-10-01 15:52:58.665382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:35:21.693 [2024-10-01 15:52:58.665393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:94624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.693 [2024-10-01 15:52:58.665399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:21.693 [2024-10-01 15:52:58.665409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:94640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.693 [2024-10-01 15:52:58.665414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:21.693 [2024-10-01 15:52:58.665425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:94656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.693 [2024-10-01 15:52:58.665430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:21.693 [2024-10-01 15:52:58.665440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:94672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.693 [2024-10-01 15:52:58.665445] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:35:21.693 [2024-10-01 15:52:58.665455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:94688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.693 [2024-10-01 15:52:58.665461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:21.693 [2024-10-01 15:52:58.665471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:94704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.693 [2024-10-01 15:52:58.665476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:21.693 [2024-10-01 15:52:58.665486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:94720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.693 [2024-10-01 15:52:58.665492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:21.693 [2024-10-01 15:52:58.665502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:94736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.693 [2024-10-01 15:52:58.665507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:21.693 [2024-10-01 15:52:58.665517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:94752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.693 [2024-10-01 15:52:58.665523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.693 [2024-10-01 15:52:58.665533] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:94768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.693 [2024-10-01 15:52:58.665538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.693 [2024-10-01 15:52:58.665548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:94784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.693 [2024-10-01 15:52:58.665556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:21.693 [2024-10-01 15:52:58.665566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:94800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.693 [2024-10-01 15:52:58.665572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:21.693 [2024-10-01 15:52:58.665582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:94816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.693 [2024-10-01 15:52:58.665587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:21.693 [2024-10-01 15:52:58.665597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:94832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.693 [2024-10-01 15:52:58.665602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:21.693 [2024-10-01 15:52:58.665612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:94848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.693 [2024-10-01 15:52:58.665617] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:21.693 [2024-10-01 15:52:58.665627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:94864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.693 [2024-10-01 15:52:58.665633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:35:21.693 [2024-10-01 15:52:58.665643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:94880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.693 [2024-10-01 15:52:58.665648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:21.693 [2024-10-01 15:52:58.665658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:94896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.693 [2024-10-01 15:52:58.665663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:35:21.693 [2024-10-01 15:52:58.665673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.693 [2024-10-01 15:52:58.665679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:35:21.693 [2024-10-01 15:52:58.665689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:94928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.693 [2024-10-01 15:52:58.665694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:35:21.693 [2024-10-01 15:52:58.665704] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:94944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.693 [2024-10-01 15:52:58.665709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:35:21.693 [2024-10-01 15:52:58.665720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:94960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.693 [2024-10-01 15:52:58.665724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:35:21.693 [2024-10-01 15:52:58.665735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:94976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.693 [2024-10-01 15:52:58.665740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:35:21.693 [2024-10-01 15:52:58.665751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:94992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.693 [2024-10-01 15:52:58.665757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:21.693 [2024-10-01 15:52:58.665767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:95008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.693 [2024-10-01 15:52:58.665772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:35:21.693 [2024-10-01 15:52:58.665782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:95024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.693 [2024-10-01 15:52:58.665787] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:35:21.693 [2024-10-01 15:52:58.665798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:95040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.693 [2024-10-01 15:52:58.665803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:35:21.693 [2024-10-01 15:52:58.665813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:95056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.693 [2024-10-01 15:52:58.665818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:35:21.693 [2024-10-01 15:52:58.665828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:95072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.693 [2024-10-01 15:52:58.665834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:21.693 [2024-10-01 15:52:58.665844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:95088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.694 [2024-10-01 15:52:58.665850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:21.694 11831.40 IOPS, 46.22 MiB/s 11871.96 IOPS, 46.37 MiB/s Received shutdown signal, test time was about 26.803357 seconds 00:35:21.694 00:35:21.694 Latency(us) 00:35:21.694 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:21.694 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:35:21.694 Verification LBA range: start 0x0 length 0x4000 00:35:21.694 Nvme0n1 : 26.80 
11896.50 46.47 0.00 0.00 10740.38 319.15 3019898.88 00:35:21.694 =================================================================================================================== 00:35:21.694 Total : 11896.50 46.47 0.00 0.00 10740.38 319.15 3019898.88 00:35:21.694 [2024-10-01 15:53:00.889090] app.c:1032:log_deprecation_hits: *WARNING*: multipath_config: deprecation 'bdev_nvme_attach_controller.multipath configuration mismatch' scheduled for removal in v25.01 hit 1 times 00:35:21.694 15:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:21.955 15:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:35:21.955 15:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:35:21.955 15:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:35:21.955 15:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # nvmfcleanup 00:35:21.955 15:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:35:21.955 15:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:21.955 15:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:35:21.955 15:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:21.955 15:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:21.955 rmmod nvme_tcp 00:35:21.955 rmmod nvme_fabrics 00:35:21.955 rmmod nvme_keyring 00:35:21.955 15:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:35:21.955 15:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:35:21.955 15:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:35:21.955 15:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@513 -- # '[' -n 3350080 ']' 00:35:21.955 15:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # killprocess 3350080 00:35:21.955 15:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 3350080 ']' 00:35:21.955 15:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 3350080 00:35:21.955 15:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:35:21.955 15:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:21.955 15:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3350080 00:35:21.955 15:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:21.955 15:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:21.955 15:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3350080' 00:35:21.955 killing process with pid 3350080 00:35:21.955 15:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 3350080 00:35:21.955 15:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 3350080 00:35:22.217 15:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:35:22.217 15:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:35:22.217 
15:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:35:22.217 15:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:35:22.217 15:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # iptables-save 00:35:22.217 15:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:35:22.217 15:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # iptables-restore 00:35:22.217 15:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:22.217 15:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:22.217 15:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:22.217 15:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:22.217 15:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:24.128 15:53:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:24.128 00:35:24.128 real 0m41.512s 00:35:24.128 user 1m46.746s 00:35:24.128 sys 0m11.705s 00:35:24.128 15:53:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:24.128 15:53:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:24.128 ************************************ 00:35:24.128 END TEST nvmf_host_multipath_status 00:35:24.128 ************************************ 00:35:24.128 15:53:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 
00:35:24.128 15:53:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:24.128 15:53:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:24.128 15:53:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.390 ************************************ 00:35:24.390 START TEST nvmf_discovery_remove_ifc 00:35:24.390 ************************************ 00:35:24.390 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:35:24.390 * Looking for test storage... 00:35:24.390 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:24.390 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:24.390 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:24.390 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lcov --version 00:35:24.390 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:24.390 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:24.390 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:24.390 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:24.390 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:35:24.390 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:35:24.390 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:35:24.390 15:53:03 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:35:24.390 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:35:24.390 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:35:24.390 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:35:24.390 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:24.390 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:35:24.390 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:35:24.390 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:24.390 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:24.390 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:35:24.390 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:35:24.390 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:24.390 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:35:24.390 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:35:24.390 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:35:24.390 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:35:24.390 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:24.390 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:35:24.390 15:53:03 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:35:24.390 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:24.390 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:24.390 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:35:24.390 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:24.390 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:35:24.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:24.390 --rc genhtml_branch_coverage=1 00:35:24.390 --rc genhtml_function_coverage=1 00:35:24.390 --rc genhtml_legend=1 00:35:24.390 --rc geninfo_all_blocks=1 00:35:24.390 --rc geninfo_unexecuted_blocks=1 00:35:24.390 00:35:24.390 ' 00:35:24.390 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:35:24.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:24.390 --rc genhtml_branch_coverage=1 00:35:24.390 --rc genhtml_function_coverage=1 00:35:24.390 --rc genhtml_legend=1 00:35:24.390 --rc geninfo_all_blocks=1 00:35:24.390 --rc geninfo_unexecuted_blocks=1 00:35:24.390 00:35:24.390 ' 00:35:24.390 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:35:24.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:24.390 --rc genhtml_branch_coverage=1 00:35:24.390 --rc genhtml_function_coverage=1 00:35:24.390 --rc genhtml_legend=1 00:35:24.390 --rc geninfo_all_blocks=1 00:35:24.390 --rc geninfo_unexecuted_blocks=1 00:35:24.390 00:35:24.390 ' 00:35:24.390 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:24.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:24.390 --rc genhtml_branch_coverage=1 00:35:24.390 --rc genhtml_function_coverage=1 00:35:24.390 --rc genhtml_legend=1 00:35:24.390 --rc geninfo_all_blocks=1 00:35:24.390 --rc geninfo_unexecuted_blocks=1 00:35:24.390 00:35:24.390 ' 00:35:24.391 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:24.391 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:35:24.391 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:24.391 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:24.391 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:24.391 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:24.391 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:24.391 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:24.391 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:24.391 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:24.391 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:24.391 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:24.391 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:24.391 15:53:03 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:24.391 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:24.391 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:24.391 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:24.391 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:24.391 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:24.391 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:35:24.391 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:24.391 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:24.391 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:24.391 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:24.391 
15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:24.391 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:24.391 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:35:24.391 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:24.391 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:35:24.391 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:24.391 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:24.391 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:24.391 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:24.391 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:24.391 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:24.391 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:24.391 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:24.391 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:24.391 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:24.391 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:35:24.391 
15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:35:24.391 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:35:24.391 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:35:24.391 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:35:24.391 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:35:24.391 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:35:24.391 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:35:24.391 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:24.391 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # prepare_net_devs 00:35:24.391 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@434 -- # local -g is_hw=no 00:35:24.391 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # remove_spdk_ns 00:35:24.391 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:24.391 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:24.391 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:24.391 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:35:24.391 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # 
gather_supported_nvmf_pci_devs 00:35:24.391 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:35:24.391 15:53:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 
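The trace above is `gather_supported_nvmf_pci_devs` bucketing NICs by looking up a `pci_bus_cache` associative array keyed by `"vendor:device"` ID, then promoting the matching bucket into `pci_devs`. A minimal, self-contained sketch of that lookup idiom — the cache entries below are fabricated example addresses, not the devices from this run:

```shell
#!/usr/bin/env bash
# Sketch of the vendor:device bucketing pattern from nvmf/common.sh.
# The pci_bus_cache contents here are made-up examples, not this run's devices.
declare -A pci_bus_cache=(
  ["0x8086:0x159b"]="0000:31:00.0 0000:31:00.1"  # hypothetical E810 pair
  ["0x15b3:0x1017"]="0000:98:00.0"               # hypothetical ConnectX-5
)
intel=0x8086 mellanox=0x15b3
e810=() mlx=() pci_devs=()

# Same += idiom as the trace: a missing key simply expands to nothing,
# and the unquoted expansion word-splits a multi-device entry.
e810+=(${pci_bus_cache["$intel:0x159b"]})
mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
pci_devs+=("${e810[@]}")

echo "e810=${#e810[@]} mlx=${#mlx[@]} pci_devs=${#pci_devs[@]}"
```

With the example cache above this selects the two E810 addresses into `pci_devs`, mirroring the `pci_devs=("${e810[@]}")` step in the trace.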
00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:32.705 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:32.705 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # (( 
0 > 0 )) 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ up == up ]] 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:32.705 Found net devices under 0000:31:00.0: cvl_0_0 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- nvmf/common.sh@414 -- # [[ up == up ]] 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:32.705 Found net devices under 0000:31:00.1: cvl_0_1 00:35:32.705 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:35:32.706 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:35:32.706 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # is_hw=yes 00:35:32.706 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:35:32.706 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:35:32.706 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:35:32.706 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:32.706 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:32.706 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:32.706 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:32.706 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:32.706 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:32.706 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:32.706 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:32.706 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:32.706 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:32.706 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:32.706 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:32.706 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:32.706 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:32.706 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:32.706 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:32.706 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:32.706 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:32.706 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:32.706 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:32.706 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:32.706 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@786 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:32.706 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:32.706 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:32.706 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms 00:35:32.706 00:35:32.706 --- 10.0.0.2 ping statistics --- 00:35:32.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:32.706 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms 00:35:32.706 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:32.706 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:32.706 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:35:32.706 00:35:32.706 --- 10.0.0.1 ping statistics --- 00:35:32.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:32.706 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:35:32.706 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:32.706 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # return 0 00:35:32.706 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:35:32.706 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:32.706 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:35:32.706 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:35:32.706 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:32.706 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' tcp == 
tcp ']' 00:35:32.706 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:35:32.706 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:35:32.706 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:35:32.706 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:32.706 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:32.706 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # nvmfpid=3360401 00:35:32.706 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # waitforlisten 3360401 00:35:32.706 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:35:32.706 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 3360401 ']' 00:35:32.706 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:32.706 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:32.706 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:32.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
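The `ip`/`ip netns` records earlier in this log (nvmf/common.sh@271 through @284) move the target-side interface into its own network namespace so initiator and target traffic actually cross the wire between `cvl_0_1` and `cvl_0_0`. A dry-run sketch of that exact sequence — interface names and addresses are copied from the log; the `run()`/`DRY_RUN` wrapper is an illustration, not SPDK's helper:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace split performed by nvmf_tcp_init.
# The run()/DRY_RUN wrapper only echoes commands; nvmf/common.sh runs them.
run() { ${DRY_RUN:+echo} "$@"; }

nvmf_tcp_init_sketch() {
  local target_if=$1 initiator_if=$2
  local ns=${target_if}_ns_spdk
  run ip netns add "$ns"
  run ip link set "$target_if" netns "$ns"                         # move target NIC
  run ip addr add 10.0.0.1/24 dev "$initiator_if"                  # initiator IP
  run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if" # target IP
  run ip link set "$initiator_if" up
  run ip netns exec "$ns" ip link set "$target_if" up
  run ip netns exec "$ns" ip link set lo up
}

DRY_RUN=1 nvmf_tcp_init_sketch cvl_0_0 cvl_0_1
```

The two `ping` records that follow in the log (host to 10.0.0.2, namespace to 10.0.0.1) are the smoke test that this split worked in both directions.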
00:35:32.706 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:32.706 15:53:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:32.706 [2024-10-01 15:53:11.638459] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:35:32.706 [2024-10-01 15:53:11.638528] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:32.706 [2024-10-01 15:53:11.680365] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:35:32.706 [2024-10-01 15:53:11.730236] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:32.706 [2024-10-01 15:53:11.775788] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:32.706 [2024-10-01 15:53:11.775847] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:32.706 [2024-10-01 15:53:11.775855] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:32.706 [2024-10-01 15:53:11.775862] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:32.706 [2024-10-01 15:53:11.775868] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:32.706 [2024-10-01 15:53:11.775902] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:35:33.276 15:53:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:33.276 15:53:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:35:33.276 15:53:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:35:33.276 15:53:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:33.276 15:53:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:33.276 15:53:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:33.276 15:53:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:35:33.276 15:53:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.276 15:53:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:33.276 [2024-10-01 15:53:12.526710] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:33.277 [2024-10-01 15:53:12.535053] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:35:33.277 null0 00:35:33.277 [2024-10-01 15:53:12.566923] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:33.277 15:53:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.277 15:53:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3360741 00:35:33.277 15:53:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3360741 /tmp/host.sock 
00:35:33.277 15:53:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:35:33.277 15:53:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 3360741 ']' 00:35:33.277 15:53:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:35:33.277 15:53:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:33.277 15:53:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:35:33.277 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:35:33.277 15:53:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:33.277 15:53:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:33.277 [2024-10-01 15:53:12.643909] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:35:33.277 [2024-10-01 15:53:12.643985] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3360741 ] 00:35:33.277 [2024-10-01 15:53:12.681075] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
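`waitforlisten` above blocks until the just-launched `nvmf_tgt` opens its RPC socket (`/tmp/host.sock` here, `/var/tmp/spdk.sock` for the first instance). At its core that is a bounded poll on the socket path; a hedged sketch of the idea — the retry count and interval are assumptions, not the autotest values, and the real helper does more (e.g. checking the target process is still alive):

```shell
#!/usr/bin/env bash
# Bounded-poll sketch of the waitforlisten idea; retries/interval are
# assumed values, not what autotest_common.sh actually uses.
wait_for_socket() {
  local sock=$1 retries=${2:-100}
  while (( retries-- > 0 )); do
    [[ -S $sock ]] && return 0   # -S: path exists and is a unix socket
    sleep 0.1
  done
  return 1
}
```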
00:35:33.277 [2024-10-01 15:53:12.731125] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:33.537 [2024-10-01 15:53:12.778487] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:35:34.109 15:53:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:34.109 15:53:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:35:34.109 15:53:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:34.109 15:53:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:35:34.109 15:53:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.109 15:53:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:34.109 15:53:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.109 15:53:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:35:34.109 15:53:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.109 15:53:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:34.109 15:53:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.109 15:53:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 
00:35:34.109 15:53:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.109 15:53:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:35.493 [2024-10-01 15:53:14.615797] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:35.493 [2024-10-01 15:53:14.615823] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:35.493 [2024-10-01 15:53:14.615838] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:35.493 [2024-10-01 15:53:14.744219] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:35:35.753 [2024-10-01 15:53:14.970067] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:35:35.753 [2024-10-01 15:53:14.970120] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:35:35.753 [2024-10-01 15:53:14.970143] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:35:35.753 [2024-10-01 15:53:14.970157] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:35:35.753 [2024-10-01 15:53:14.970179] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:35.753 15:53:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.753 15:53:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:35:35.753 15:53:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:35.753 15:53:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:35:35.753 15:53:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:35.753 15:53:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.753 15:53:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:35.753 15:53:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:35.753 15:53:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:35.753 15:53:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.753 [2024-10-01 15:53:15.014531] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xdf5390 was disconnected and freed. delete nvme_qpair. 00:35:35.753 15:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:35:35.753 15:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:35:35.753 15:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:35:35.753 15:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:35:35.753 15:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:35.753 15:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:35.753 15:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:35.753 15:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.753 
15:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:35.754 15:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:35.754 15:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:35.754 15:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.754 15:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:35.754 15:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:37.136 15:53:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:37.136 15:53:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:37.136 15:53:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:37.136 15:53:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.136 15:53:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:37.136 15:53:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:37.136 15:53:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:37.136 15:53:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.136 15:53:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:37.136 15:53:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:38.075 15:53:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:35:38.075 15:53:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:38.075 15:53:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:38.075 15:53:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.075 15:53:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:38.075 15:53:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:38.075 15:53:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:38.075 15:53:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.075 15:53:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:38.075 15:53:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:39.016 15:53:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:39.016 15:53:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:39.016 15:53:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:39.016 15:53:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.016 15:53:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:39.016 15:53:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:39.016 15:53:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:39.016 15:53:18 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.016 15:53:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:39.016 15:53:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:39.957 15:53:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:39.957 15:53:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:39.957 15:53:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:39.957 15:53:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.957 15:53:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:39.957 15:53:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:39.957 15:53:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:39.957 15:53:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.220 15:53:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:40.220 15:53:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:41.160 [2024-10-01 15:53:20.410787] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:35:41.160 [2024-10-01 15:53:20.410839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:41.160 [2024-10-01 15:53:20.410849] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.160 [2024-10-01 15:53:20.410858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:41.160 [2024-10-01 15:53:20.410863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.160 [2024-10-01 15:53:20.410869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:41.160 [2024-10-01 15:53:20.410874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.160 [2024-10-01 15:53:20.410880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:41.160 [2024-10-01 15:53:20.410885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.160 [2024-10-01 15:53:20.410898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:35:41.160 [2024-10-01 15:53:20.410904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.160 [2024-10-01 15:53:20.410909] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdd1c40 is same with the state(6) to be set 00:35:41.160 [2024-10-01 15:53:20.420807] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdd1c40 (9): Bad file descriptor 00:35:41.160 15:53:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:41.160 [2024-10-01 15:53:20.430845] 
nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:41.160 15:53:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:41.160 15:53:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:41.160 15:53:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:41.160 15:53:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:41.160 15:53:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:41.160 15:53:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:42.099 [2024-10-01 15:53:21.451986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:35:42.099 [2024-10-01 15:53:21.452087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd1c40 with addr=10.0.0.2, port=4420 00:35:42.099 [2024-10-01 15:53:21.452123] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdd1c40 is same with the state(6) to be set 00:35:42.099 [2024-10-01 15:53:21.452187] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdd1c40 (9): Bad file descriptor 00:35:42.099 [2024-10-01 15:53:21.453353] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:35:42.099 [2024-10-01 15:53:21.453429] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:42.099 [2024-10-01 15:53:21.453452] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:42.099 [2024-10-01 15:53:21.453475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:42.099 [2024-10-01 15:53:21.453542] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.099 [2024-10-01 15:53:21.453567] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:42.099 15:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.099 15:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:42.099 15:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:43.039 [2024-10-01 15:53:22.455967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:43.039 [2024-10-01 15:53:22.455988] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:43.039 [2024-10-01 15:53:22.455996] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:43.039 [2024-10-01 15:53:22.456002] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:35:43.039 [2024-10-01 15:53:22.456012] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.039 [2024-10-01 15:53:22.456030] bdev_nvme.c:6913:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:35:43.039 [2024-10-01 15:53:22.456054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:43.039 [2024-10-01 15:53:22.456063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.039 [2024-10-01 15:53:22.456072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:43.039 [2024-10-01 15:53:22.456078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.039 [2024-10-01 15:53:22.456083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:43.039 [2024-10-01 15:53:22.456089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.039 [2024-10-01 15:53:22.456095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:43.039 [2024-10-01 15:53:22.456100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.039 [2024-10-01 15:53:22.456106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:35:43.039 [2024-10-01 15:53:22.456112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.039 [2024-10-01 15:53:22.456117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:35:43.039 [2024-10-01 15:53:22.456510] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc1380 (9): Bad file descriptor 00:35:43.039 [2024-10-01 15:53:22.457522] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:35:43.039 [2024-10-01 15:53:22.457531] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:35:43.039 15:53:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:43.039 15:53:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:43.039 15:53:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:43.039 15:53:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.040 15:53:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:43.040 15:53:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:43.040 15:53:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:43.300 15:53:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.300 15:53:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:35:43.300 15:53:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:43.300 15:53:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:43.300 15:53:22 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:35:43.300 15:53:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:43.300 15:53:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:43.300 15:53:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:43.300 15:53:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.300 15:53:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:43.300 15:53:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:43.300 15:53:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:43.300 15:53:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.300 15:53:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:43.300 15:53:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:44.243 15:53:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:44.243 15:53:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:44.243 15:53:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:44.243 15:53:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.243 15:53:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:44.243 15:53:23 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:44.243 15:53:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:44.243 15:53:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.504 15:53:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:44.504 15:53:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:45.075 [2024-10-01 15:53:24.470548] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:45.075 [2024-10-01 15:53:24.470565] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:45.075 [2024-10-01 15:53:24.470575] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:45.336 [2024-10-01 15:53:24.600955] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:35:45.336 [2024-10-01 15:53:24.700415] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:35:45.336 [2024-10-01 15:53:24.700449] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:35:45.336 [2024-10-01 15:53:24.700465] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:35:45.336 [2024-10-01 15:53:24.700477] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:35:45.336 [2024-10-01 15:53:24.700483] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:45.336 15:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:45.336 15:53:24 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:45.336 15:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:45.336 15:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.336 15:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:45.336 15:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:45.336 15:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:45.336 [2024-10-01 15:53:24.749488] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xdd2ee0 was disconnected and freed. delete nvme_qpair. 00:35:45.336 15:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.597 15:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:35:45.597 15:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:35:45.597 15:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3360741 00:35:45.597 15:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 3360741 ']' 00:35:45.597 15:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 3360741 00:35:45.597 15:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:35:45.597 15:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:45.597 15:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
3360741 00:35:45.597 15:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:45.597 15:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:45.597 15:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3360741' 00:35:45.597 killing process with pid 3360741 00:35:45.597 15:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 3360741 00:35:45.597 15:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 3360741 00:35:45.597 15:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:35:45.597 15:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # nvmfcleanup 00:35:45.597 15:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:35:45.597 15:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:45.597 15:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:35:45.597 15:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:45.597 15:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:45.597 rmmod nvme_tcp 00:35:45.597 rmmod nvme_fabrics 00:35:45.597 rmmod nvme_keyring 00:35:45.597 15:53:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:45.597 15:53:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:35:45.597 15:53:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:35:45.597 15:53:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@513 -- # '[' -n 3360401 ']' 
00:35:45.597 15:53:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # killprocess 3360401 00:35:45.597 15:53:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 3360401 ']' 00:35:45.597 15:53:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 3360401 00:35:45.597 15:53:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:35:45.597 15:53:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:45.597 15:53:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3360401 00:35:45.858 15:53:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:45.858 15:53:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:45.858 15:53:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3360401' 00:35:45.858 killing process with pid 3360401 00:35:45.858 15:53:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 3360401 00:35:45.858 15:53:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 3360401 00:35:45.859 15:53:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:35:45.859 15:53:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:35:45.859 15:53:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:35:45.859 15:53:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:35:45.859 15:53:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # iptables-save 00:35:45.859 15:53:25 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:35:45.859 15:53:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # iptables-restore 00:35:45.859 15:53:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:45.859 15:53:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:45.859 15:53:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:45.859 15:53:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:45.859 15:53:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:48.406 15:53:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:48.406 00:35:48.406 real 0m23.692s 00:35:48.406 user 0m27.508s 00:35:48.406 sys 0m7.333s 00:35:48.406 15:53:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:48.406 15:53:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:48.406 ************************************ 00:35:48.406 END TEST nvmf_discovery_remove_ifc 00:35:48.406 ************************************ 00:35:48.406 15:53:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:35:48.406 15:53:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:48.406 15:53:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:48.406 15:53:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.406 ************************************ 
00:35:48.406 START TEST nvmf_identify_kernel_target 00:35:48.406 ************************************ 00:35:48.406 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:35:48.406 * Looking for test storage... 00:35:48.406 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:48.406 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:48.406 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lcov --version 00:35:48.406 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:48.406 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:48.406 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:48.406 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:48.406 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:48.406 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:35:48.406 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:35:48.406 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:35:48.406 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:35:48.406 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:35:48.406 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:35:48.406 15:53:27 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:35:48.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:48.407 --rc genhtml_branch_coverage=1 00:35:48.407 --rc genhtml_function_coverage=1 00:35:48.407 --rc genhtml_legend=1 00:35:48.407 --rc geninfo_all_blocks=1 00:35:48.407 --rc geninfo_unexecuted_blocks=1 00:35:48.407 00:35:48.407 ' 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:35:48.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:48.407 --rc genhtml_branch_coverage=1 00:35:48.407 --rc genhtml_function_coverage=1 00:35:48.407 --rc genhtml_legend=1 00:35:48.407 --rc geninfo_all_blocks=1 00:35:48.407 --rc geninfo_unexecuted_blocks=1 00:35:48.407 00:35:48.407 ' 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:35:48.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:48.407 --rc genhtml_branch_coverage=1 00:35:48.407 --rc genhtml_function_coverage=1 00:35:48.407 --rc genhtml_legend=1 00:35:48.407 --rc geninfo_all_blocks=1 00:35:48.407 --rc geninfo_unexecuted_blocks=1 00:35:48.407 00:35:48.407 ' 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:48.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:48.407 --rc genhtml_branch_coverage=1 00:35:48.407 --rc genhtml_function_coverage=1 00:35:48.407 --rc genhtml_legend=1 00:35:48.407 --rc geninfo_all_blocks=1 
00:35:48.407 --rc geninfo_unexecuted_blocks=1 00:35:48.407 00:35:48.407 ' 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:48.407 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:35:48.407 15:53:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:56.555 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:56.555 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:35:56.555 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:56.555 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:56.555 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:56.555 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:56.555 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:56.555 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:35:56.555 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:56.555 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:35:56.555 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:35:56.555 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:35:56.555 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:35:56.555 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:35:56.555 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:35:56.555 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:56.555 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:56.555 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:56.555 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:56.555 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:56.555 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:56.556 15:53:34 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:56.556 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:56.556 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:56.556 Found net devices under 0000:31:00.0: cvl_0_0 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:56.556 Found net devices under 0000:31:00.1: cvl_0_1 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:35:56.556 15:53:34 
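The trace above shows `gather_supported_nvmf_pci_devs` resolving each supported PCI function to its network interface by globbing sysfs. A minimal standalone sketch of that loop (PCI addresses are passed as arguments; the `cvl_0_*` names in the log come from the ice driver and are not assumed here):

```shell
#!/usr/bin/env bash
# Sketch of the net-device discovery loop from nvmf/common.sh: for each
# PCI address (BDF, e.g. 0000:31:00.0), list the network interfaces
# registered under /sys/bus/pci/devices/<bdf>/net/.
set -euo pipefail

net_devs=()
for pci in "$@"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    [[ -e ${pci_net_devs[0]} ]] || continue     # no netdev bound to this function
    pci_net_devs=("${pci_net_devs[@]##*/}")     # strip the sysfs path, keep names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
```

With no matching netdev the glob stays unexpanded, which is why the `-e` guard is needed before trusting the array contents.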
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # is_hw=yes 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:56.556 15:53:34 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:56.556 15:53:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:56.556 15:53:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:56.556 15:53:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:56.556 15:53:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:56.556 15:53:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:56.556 15:53:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:56.556 15:53:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:56.556 15:53:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:56.556 15:53:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:56.556 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:56.556 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.548 ms 00:35:56.556 00:35:56.556 --- 10.0.0.2 ping statistics --- 00:35:56.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:56.556 rtt min/avg/max/mdev = 0.548/0.548/0.548/0.000 ms 00:35:56.556 15:53:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:56.556 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:56.556 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.256 ms 00:35:56.556 00:35:56.556 --- 10.0.0.1 ping statistics --- 00:35:56.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:56.556 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:35:56.556 15:53:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:56.556 15:53:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # return 0 00:35:56.556 15:53:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:35:56.556 15:53:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:56.556 15:53:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:35:56.556 15:53:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:35:56.556 15:53:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:56.556 15:53:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:35:56.556 15:53:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:35:56.556 15:53:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:35:56.556 
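The `nvmf_tcp_init` sequence above moves one port into a private namespace, addresses both ends, opens the 4420 firewall port, and verifies reachability with ping in both directions. A sketch of those steps, with the log's interface names and addresses as overridable placeholders; the real thing needs root, so this defaults to a dry run that only prints the commands:

```shell
#!/usr/bin/env bash
# Sketch of the target-namespace plumbing performed by nvmf_tcp_init.
# TARGET_IF/INITIATOR_IF/NS mirror the log (cvl_0_0 etc.) but are
# placeholders. Requires root; leave DRY_RUN=1 to just print commands.
set -euo pipefail

TARGET_IF=${TARGET_IF:-cvl_0_0}
INITIATOR_IF=${INITIATOR_IF:-cvl_0_1}
NS=${NS:-cvl_0_0_ns_spdk}
DRY_RUN=${DRY_RUN:-1}

run() { if [[ $DRY_RUN -eq 1 ]]; then echo "+ $*"; else "$@"; fi; }

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"          # target port lives in the netns
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"   # initiator side stays in the root ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                            # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1        # target -> initiator
```

Putting the target interface in its own namespace is what lets a single machine act as both NVMe/TCP initiator and target over real NIC ports, which is exactly what the two ping checks in the log confirm.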
15:53:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:35:56.556 15:53:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@765 -- # local ip 00:35:56.556 15:53:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:56.556 15:53:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:56.556 15:53:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:56.556 15:53:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:56.556 15:53:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:56.556 15:53:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:56.556 15:53:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:56.556 15:53:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:56.556 15:53:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:56.556 15:53:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:35:56.557 15:53:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:56.557 15:53:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:56.557 15:53:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:35:56.557 15:53:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:56.557 15:53:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:56.557 15:53:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:56.557 15:53:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # local block nvme 00:35:56.557 15:53:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # [[ ! -e /sys/module/nvmet ]] 00:35:56.557 15:53:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@666 -- # modprobe nvmet 00:35:56.557 15:53:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:56.557 15:53:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:59.859 Waiting for block devices as requested 00:35:59.859 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:35:59.859 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:35:59.859 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:35:59.859 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:35:59.859 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:35:59.859 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:35:59.859 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:00.120 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:00.120 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:36:00.381 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:00.381 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:00.381 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:00.642 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:00.642 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:00.642 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 
00:36:00.903 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:00.903 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:01.164 15:53:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:36:01.164 15:53:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:01.164 15:53:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:36:01.164 15:53:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:36:01.164 15:53:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:01.164 15:53:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:36:01.164 15:53:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:36:01.164 15:53:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:36:01.164 15:53:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:01.164 No valid GPT data, bailing 00:36:01.164 15:53:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:01.164 15:53:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:36:01.164 15:53:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:36:01.164 15:53:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:36:01.164 15:53:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # [[ -b /dev/nvme0n1 ]] 00:36:01.164 15:53:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:01.164 15:53:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:01.164 15:53:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:01.164 15:53:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:01.164 15:53:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo 1 00:36:01.164 15:53:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@692 -- # echo /dev/nvme0n1 00:36:01.164 15:53:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:36:01.164 15:53:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:36:01.164 15:53:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo tcp 00:36:01.164 15:53:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 4420 00:36:01.164 15:53:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo ipv4 00:36:01.164 15:53:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:01.164 15:53:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:36:01.426 00:36:01.426 Discovery Log Number of Records 2, Generation counter 2 00:36:01.426 =====Discovery Log Entry 0====== 00:36:01.426 trtype: tcp 00:36:01.426 adrfam: ipv4 00:36:01.426 subtype: current discovery subsystem 
00:36:01.426 treq: not specified, sq flow control disable supported 00:36:01.426 portid: 1 00:36:01.426 trsvcid: 4420 00:36:01.426 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:01.426 traddr: 10.0.0.1 00:36:01.426 eflags: none 00:36:01.426 sectype: none 00:36:01.426 =====Discovery Log Entry 1====== 00:36:01.426 trtype: tcp 00:36:01.426 adrfam: ipv4 00:36:01.426 subtype: nvme subsystem 00:36:01.426 treq: not specified, sq flow control disable supported 00:36:01.426 portid: 1 00:36:01.426 trsvcid: 4420 00:36:01.426 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:01.426 traddr: 10.0.0.1 00:36:01.426 eflags: none 00:36:01.426 sectype: none 00:36:01.426 15:53:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:36:01.426 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:36:01.426 ===================================================== 00:36:01.426 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:36:01.426 ===================================================== 00:36:01.426 Controller Capabilities/Features 00:36:01.426 ================================ 00:36:01.426 Vendor ID: 0000 00:36:01.426 Subsystem Vendor ID: 0000 00:36:01.426 Serial Number: 43e07dfc7e8d34172c07 00:36:01.426 Model Number: Linux 00:36:01.426 Firmware Version: 6.8.9-20 00:36:01.426 Recommended Arb Burst: 0 00:36:01.426 IEEE OUI Identifier: 00 00 00 00:36:01.426 Multi-path I/O 00:36:01.426 May have multiple subsystem ports: No 00:36:01.426 May have multiple controllers: No 00:36:01.426 Associated with SR-IOV VF: No 00:36:01.426 Max Data Transfer Size: Unlimited 00:36:01.426 Max Number of Namespaces: 0 00:36:01.426 Max Number of I/O Queues: 1024 00:36:01.426 NVMe Specification Version (VS): 1.3 00:36:01.426 NVMe Specification Version (Identify): 1.3 00:36:01.426 Maximum Queue Entries: 1024 
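The `configure_kernel_target` steps traced above (mkdir the configfs subsystem, namespace, and port; echo the backing device, address, transport, service id, and address family; symlink the subsystem into the port) can be sketched as one script. The NQN, device, and address mirror the log but are placeholders; this needs root and the nvmet modules, so it is gated behind `APPLY=1` rather than run unconditionally:

```shell
#!/usr/bin/env bash
# Sketch of configure_kernel_target from nvmf/common.sh: export a block
# device over NVMe/TCP via the kernel nvmet configfs tree, then discover
# it from the initiator side. Set APPLY=1 on a root shell with the
# nvmet/nvmet-tcp modules available; otherwise nothing is touched.
set -euo pipefail

NQN=${NQN:-nqn.2016-06.io.spdk:testnqn}
DEV=${DEV:-/dev/nvme0n1}
ADDR=${ADDR:-10.0.0.1}
NVMET=/sys/kernel/config/nvmet

setup_kernel_target() {
    local subsys=$NVMET/subsystems/$NQN
    modprobe -a nvmet nvmet-tcp
    mkdir -p "$subsys/namespaces/1" "$NVMET/ports/1"

    echo 1       > "$subsys/attr_allow_any_host"       # accept any host NQN
    echo "$DEV"  > "$subsys/namespaces/1/device_path"  # backing block device
    echo 1       > "$subsys/namespaces/1/enable"

    echo "$ADDR" > "$NVMET/ports/1/addr_traddr"
    echo tcp     > "$NVMET/ports/1/addr_trtype"
    echo 4420    > "$NVMET/ports/1/addr_trsvcid"
    echo ipv4    > "$NVMET/ports/1/addr_adrfam"

    # Exposing the subsystem on the port is what starts the listener.
    ln -s "$subsys" "$NVMET/ports/1/subsystems/"
}

if [[ ${APPLY:-0} -eq 1 ]]; then
    setup_kernel_target
    nvme discover -t tcp -a "$ADDR" -s 4420
fi
```

The two discovery log entries in the output above are the expected result: entry 0 is the well-known discovery subsystem (`nqn.2014-08.org.nvmexpress.discovery`) and entry 1 is the subsystem the symlink just published on port 4420.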
00:36:01.426 Contiguous Queues Required: No 00:36:01.426 Arbitration Mechanisms Supported 00:36:01.426 Weighted Round Robin: Not Supported 00:36:01.426 Vendor Specific: Not Supported 00:36:01.426 Reset Timeout: 7500 ms 00:36:01.426 Doorbell Stride: 4 bytes 00:36:01.427 NVM Subsystem Reset: Not Supported 00:36:01.427 Command Sets Supported 00:36:01.427 NVM Command Set: Supported 00:36:01.427 Boot Partition: Not Supported 00:36:01.427 Memory Page Size Minimum: 4096 bytes 00:36:01.427 Memory Page Size Maximum: 4096 bytes 00:36:01.427 Persistent Memory Region: Not Supported 00:36:01.427 Optional Asynchronous Events Supported 00:36:01.427 Namespace Attribute Notices: Not Supported 00:36:01.427 Firmware Activation Notices: Not Supported 00:36:01.427 ANA Change Notices: Not Supported 00:36:01.427 PLE Aggregate Log Change Notices: Not Supported 00:36:01.427 LBA Status Info Alert Notices: Not Supported 00:36:01.427 EGE Aggregate Log Change Notices: Not Supported 00:36:01.427 Normal NVM Subsystem Shutdown event: Not Supported 00:36:01.427 Zone Descriptor Change Notices: Not Supported 00:36:01.427 Discovery Log Change Notices: Supported 00:36:01.427 Controller Attributes 00:36:01.427 128-bit Host Identifier: Not Supported 00:36:01.427 Non-Operational Permissive Mode: Not Supported 00:36:01.427 NVM Sets: Not Supported 00:36:01.427 Read Recovery Levels: Not Supported 00:36:01.427 Endurance Groups: Not Supported 00:36:01.427 Predictable Latency Mode: Not Supported 00:36:01.427 Traffic Based Keep ALive: Not Supported 00:36:01.427 Namespace Granularity: Not Supported 00:36:01.427 SQ Associations: Not Supported 00:36:01.427 UUID List: Not Supported 00:36:01.427 Multi-Domain Subsystem: Not Supported 00:36:01.427 Fixed Capacity Management: Not Supported 00:36:01.427 Variable Capacity Management: Not Supported 00:36:01.427 Delete Endurance Group: Not Supported 00:36:01.427 Delete NVM Set: Not Supported 00:36:01.427 Extended LBA Formats Supported: Not Supported 00:36:01.427 Flexible 
Data Placement Supported: Not Supported 00:36:01.427 00:36:01.427 Controller Memory Buffer Support 00:36:01.427 ================================ 00:36:01.427 Supported: No 00:36:01.427 00:36:01.427 Persistent Memory Region Support 00:36:01.427 ================================ 00:36:01.427 Supported: No 00:36:01.427 00:36:01.427 Admin Command Set Attributes 00:36:01.427 ============================ 00:36:01.427 Security Send/Receive: Not Supported 00:36:01.427 Format NVM: Not Supported 00:36:01.427 Firmware Activate/Download: Not Supported 00:36:01.427 Namespace Management: Not Supported 00:36:01.427 Device Self-Test: Not Supported 00:36:01.427 Directives: Not Supported 00:36:01.427 NVMe-MI: Not Supported 00:36:01.427 Virtualization Management: Not Supported 00:36:01.427 Doorbell Buffer Config: Not Supported 00:36:01.427 Get LBA Status Capability: Not Supported 00:36:01.427 Command & Feature Lockdown Capability: Not Supported 00:36:01.427 Abort Command Limit: 1 00:36:01.427 Async Event Request Limit: 1 00:36:01.427 Number of Firmware Slots: N/A 00:36:01.427 Firmware Slot 1 Read-Only: N/A 00:36:01.427 Firmware Activation Without Reset: N/A 00:36:01.427 Multiple Update Detection Support: N/A 00:36:01.427 Firmware Update Granularity: No Information Provided 00:36:01.427 Per-Namespace SMART Log: No 00:36:01.427 Asymmetric Namespace Access Log Page: Not Supported 00:36:01.427 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:36:01.427 Command Effects Log Page: Not Supported 00:36:01.427 Get Log Page Extended Data: Supported 00:36:01.427 Telemetry Log Pages: Not Supported 00:36:01.427 Persistent Event Log Pages: Not Supported 00:36:01.427 Supported Log Pages Log Page: May Support 00:36:01.427 Commands Supported & Effects Log Page: Not Supported 00:36:01.427 Feature Identifiers & Effects Log Page:May Support 00:36:01.427 NVMe-MI Commands & Effects Log Page: May Support 00:36:01.427 Data Area 4 for Telemetry Log: Not Supported 00:36:01.427 Error Log Page Entries 
Supported: 1 00:36:01.427 Keep Alive: Not Supported 00:36:01.427 00:36:01.427 NVM Command Set Attributes 00:36:01.427 ========================== 00:36:01.427 Submission Queue Entry Size 00:36:01.427 Max: 1 00:36:01.427 Min: 1 00:36:01.427 Completion Queue Entry Size 00:36:01.427 Max: 1 00:36:01.427 Min: 1 00:36:01.427 Number of Namespaces: 0 00:36:01.427 Compare Command: Not Supported 00:36:01.427 Write Uncorrectable Command: Not Supported 00:36:01.427 Dataset Management Command: Not Supported 00:36:01.427 Write Zeroes Command: Not Supported 00:36:01.427 Set Features Save Field: Not Supported 00:36:01.427 Reservations: Not Supported 00:36:01.427 Timestamp: Not Supported 00:36:01.427 Copy: Not Supported 00:36:01.427 Volatile Write Cache: Not Present 00:36:01.427 Atomic Write Unit (Normal): 1 00:36:01.427 Atomic Write Unit (PFail): 1 00:36:01.427 Atomic Compare & Write Unit: 1 00:36:01.427 Fused Compare & Write: Not Supported 00:36:01.427 Scatter-Gather List 00:36:01.427 SGL Command Set: Supported 00:36:01.427 SGL Keyed: Not Supported 00:36:01.427 SGL Bit Bucket Descriptor: Not Supported 00:36:01.427 SGL Metadata Pointer: Not Supported 00:36:01.427 Oversized SGL: Not Supported 00:36:01.427 SGL Metadata Address: Not Supported 00:36:01.427 SGL Offset: Supported 00:36:01.427 Transport SGL Data Block: Not Supported 00:36:01.427 Replay Protected Memory Block: Not Supported 00:36:01.427 00:36:01.427 Firmware Slot Information 00:36:01.427 ========================= 00:36:01.427 Active slot: 0 00:36:01.427 00:36:01.427 00:36:01.427 Error Log 00:36:01.427 ========= 00:36:01.427 00:36:01.427 Active Namespaces 00:36:01.427 ================= 00:36:01.427 Discovery Log Page 00:36:01.427 ================== 00:36:01.427 Generation Counter: 2 00:36:01.427 Number of Records: 2 00:36:01.427 Record Format: 0 00:36:01.427 00:36:01.427 Discovery Log Entry 0 00:36:01.427 ---------------------- 00:36:01.427 Transport Type: 3 (TCP) 00:36:01.427 Address Family: 1 (IPv4) 00:36:01.427 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:36:01.427 Entry Flags: 00:36:01.427 Duplicate Returned Information: 0 00:36:01.427 Explicit Persistent Connection Support for Discovery: 0 00:36:01.427 Transport Requirements: 00:36:01.427 Secure Channel: Not Specified 00:36:01.427 Port ID: 1 (0x0001) 00:36:01.427 Controller ID: 65535 (0xffff) 00:36:01.427 Admin Max SQ Size: 32 00:36:01.427 Transport Service Identifier: 4420 00:36:01.427 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:36:01.427 Transport Address: 10.0.0.1 00:36:01.427 Discovery Log Entry 1 00:36:01.427 ---------------------- 00:36:01.427 Transport Type: 3 (TCP) 00:36:01.427 Address Family: 1 (IPv4) 00:36:01.427 Subsystem Type: 2 (NVM Subsystem) 00:36:01.427 Entry Flags: 00:36:01.427 Duplicate Returned Information: 0 00:36:01.427 Explicit Persistent Connection Support for Discovery: 0 00:36:01.427 Transport Requirements: 00:36:01.427 Secure Channel: Not Specified 00:36:01.427 Port ID: 1 (0x0001) 00:36:01.427 Controller ID: 65535 (0xffff) 00:36:01.427 Admin Max SQ Size: 32 00:36:01.427 Transport Service Identifier: 4420 00:36:01.427 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:36:01.427 Transport Address: 10.0.0.1 00:36:01.427 15:53:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:01.427 get_feature(0x01) failed 00:36:01.427 get_feature(0x02) failed 00:36:01.427 get_feature(0x04) failed 00:36:01.427 ===================================================== 00:36:01.427 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:01.427 ===================================================== 00:36:01.427 Controller Capabilities/Features 00:36:01.427 ================================ 00:36:01.427 Vendor ID: 0000 00:36:01.427 Subsystem Vendor ID: 
0000 00:36:01.427 Serial Number: 987cd61327d99dea2242 00:36:01.427 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:36:01.427 Firmware Version: 6.8.9-20 00:36:01.427 Recommended Arb Burst: 6 00:36:01.427 IEEE OUI Identifier: 00 00 00 00:36:01.427 Multi-path I/O 00:36:01.427 May have multiple subsystem ports: Yes 00:36:01.427 May have multiple controllers: Yes 00:36:01.427 Associated with SR-IOV VF: No 00:36:01.427 Max Data Transfer Size: Unlimited 00:36:01.427 Max Number of Namespaces: 1024 00:36:01.427 Max Number of I/O Queues: 128 00:36:01.427 NVMe Specification Version (VS): 1.3 00:36:01.427 NVMe Specification Version (Identify): 1.3 00:36:01.427 Maximum Queue Entries: 1024 00:36:01.427 Contiguous Queues Required: No 00:36:01.427 Arbitration Mechanisms Supported 00:36:01.427 Weighted Round Robin: Not Supported 00:36:01.427 Vendor Specific: Not Supported 00:36:01.427 Reset Timeout: 7500 ms 00:36:01.427 Doorbell Stride: 4 bytes 00:36:01.427 NVM Subsystem Reset: Not Supported 00:36:01.427 Command Sets Supported 00:36:01.427 NVM Command Set: Supported 00:36:01.427 Boot Partition: Not Supported 00:36:01.427 Memory Page Size Minimum: 4096 bytes 00:36:01.427 Memory Page Size Maximum: 4096 bytes 00:36:01.428 Persistent Memory Region: Not Supported 00:36:01.428 Optional Asynchronous Events Supported 00:36:01.428 Namespace Attribute Notices: Supported 00:36:01.428 Firmware Activation Notices: Not Supported 00:36:01.428 ANA Change Notices: Supported 00:36:01.428 PLE Aggregate Log Change Notices: Not Supported 00:36:01.428 LBA Status Info Alert Notices: Not Supported 00:36:01.428 EGE Aggregate Log Change Notices: Not Supported 00:36:01.428 Normal NVM Subsystem Shutdown event: Not Supported 00:36:01.428 Zone Descriptor Change Notices: Not Supported 00:36:01.428 Discovery Log Change Notices: Not Supported 00:36:01.428 Controller Attributes 00:36:01.428 128-bit Host Identifier: Supported 00:36:01.428 Non-Operational Permissive Mode: Not Supported 00:36:01.428 NVM Sets: Not 
Supported 00:36:01.428 Read Recovery Levels: Not Supported 00:36:01.428 Endurance Groups: Not Supported 00:36:01.428 Predictable Latency Mode: Not Supported 00:36:01.428 Traffic Based Keep ALive: Supported 00:36:01.428 Namespace Granularity: Not Supported 00:36:01.428 SQ Associations: Not Supported 00:36:01.428 UUID List: Not Supported 00:36:01.428 Multi-Domain Subsystem: Not Supported 00:36:01.428 Fixed Capacity Management: Not Supported 00:36:01.428 Variable Capacity Management: Not Supported 00:36:01.428 Delete Endurance Group: Not Supported 00:36:01.428 Delete NVM Set: Not Supported 00:36:01.428 Extended LBA Formats Supported: Not Supported 00:36:01.428 Flexible Data Placement Supported: Not Supported 00:36:01.428 00:36:01.428 Controller Memory Buffer Support 00:36:01.428 ================================ 00:36:01.428 Supported: No 00:36:01.428 00:36:01.428 Persistent Memory Region Support 00:36:01.428 ================================ 00:36:01.428 Supported: No 00:36:01.428 00:36:01.428 Admin Command Set Attributes 00:36:01.428 ============================ 00:36:01.428 Security Send/Receive: Not Supported 00:36:01.428 Format NVM: Not Supported 00:36:01.428 Firmware Activate/Download: Not Supported 00:36:01.428 Namespace Management: Not Supported 00:36:01.428 Device Self-Test: Not Supported 00:36:01.428 Directives: Not Supported 00:36:01.428 NVMe-MI: Not Supported 00:36:01.428 Virtualization Management: Not Supported 00:36:01.428 Doorbell Buffer Config: Not Supported 00:36:01.428 Get LBA Status Capability: Not Supported 00:36:01.428 Command & Feature Lockdown Capability: Not Supported 00:36:01.428 Abort Command Limit: 4 00:36:01.428 Async Event Request Limit: 4 00:36:01.428 Number of Firmware Slots: N/A 00:36:01.428 Firmware Slot 1 Read-Only: N/A 00:36:01.428 Firmware Activation Without Reset: N/A 00:36:01.428 Multiple Update Detection Support: N/A 00:36:01.428 Firmware Update Granularity: No Information Provided 00:36:01.428 Per-Namespace SMART Log: Yes 
00:36:01.428 Asymmetric Namespace Access Log Page: Supported 00:36:01.428 ANA Transition Time : 10 sec 00:36:01.428 00:36:01.428 Asymmetric Namespace Access Capabilities 00:36:01.428 ANA Optimized State : Supported 00:36:01.428 ANA Non-Optimized State : Supported 00:36:01.428 ANA Inaccessible State : Supported 00:36:01.428 ANA Persistent Loss State : Supported 00:36:01.428 ANA Change State : Supported 00:36:01.428 ANAGRPID is not changed : No 00:36:01.428 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:36:01.428 00:36:01.428 ANA Group Identifier Maximum : 128 00:36:01.428 Number of ANA Group Identifiers : 128 00:36:01.428 Max Number of Allowed Namespaces : 1024 00:36:01.428 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:36:01.428 Command Effects Log Page: Supported 00:36:01.428 Get Log Page Extended Data: Supported 00:36:01.428 Telemetry Log Pages: Not Supported 00:36:01.428 Persistent Event Log Pages: Not Supported 00:36:01.428 Supported Log Pages Log Page: May Support 00:36:01.428 Commands Supported & Effects Log Page: Not Supported 00:36:01.428 Feature Identifiers & Effects Log Page:May Support 00:36:01.428 NVMe-MI Commands & Effects Log Page: May Support 00:36:01.428 Data Area 4 for Telemetry Log: Not Supported 00:36:01.428 Error Log Page Entries Supported: 128 00:36:01.428 Keep Alive: Supported 00:36:01.428 Keep Alive Granularity: 1000 ms 00:36:01.428 00:36:01.428 NVM Command Set Attributes 00:36:01.428 ========================== 00:36:01.428 Submission Queue Entry Size 00:36:01.428 Max: 64 00:36:01.428 Min: 64 00:36:01.428 Completion Queue Entry Size 00:36:01.428 Max: 16 00:36:01.428 Min: 16 00:36:01.428 Number of Namespaces: 1024 00:36:01.428 Compare Command: Not Supported 00:36:01.428 Write Uncorrectable Command: Not Supported 00:36:01.428 Dataset Management Command: Supported 00:36:01.428 Write Zeroes Command: Supported 00:36:01.428 Set Features Save Field: Not Supported 00:36:01.428 Reservations: Not Supported 00:36:01.428 Timestamp: Not Supported 
00:36:01.428 Copy: Not Supported 00:36:01.428 Volatile Write Cache: Present 00:36:01.428 Atomic Write Unit (Normal): 1 00:36:01.428 Atomic Write Unit (PFail): 1 00:36:01.428 Atomic Compare & Write Unit: 1 00:36:01.428 Fused Compare & Write: Not Supported 00:36:01.428 Scatter-Gather List 00:36:01.428 SGL Command Set: Supported 00:36:01.428 SGL Keyed: Not Supported 00:36:01.428 SGL Bit Bucket Descriptor: Not Supported 00:36:01.428 SGL Metadata Pointer: Not Supported 00:36:01.428 Oversized SGL: Not Supported 00:36:01.428 SGL Metadata Address: Not Supported 00:36:01.428 SGL Offset: Supported 00:36:01.428 Transport SGL Data Block: Not Supported 00:36:01.428 Replay Protected Memory Block: Not Supported 00:36:01.428 00:36:01.428 Firmware Slot Information 00:36:01.428 ========================= 00:36:01.428 Active slot: 0 00:36:01.428 00:36:01.428 Asymmetric Namespace Access 00:36:01.428 =========================== 00:36:01.428 Change Count : 0 00:36:01.428 Number of ANA Group Descriptors : 1 00:36:01.428 ANA Group Descriptor : 0 00:36:01.428 ANA Group ID : 1 00:36:01.428 Number of NSID Values : 1 00:36:01.428 Change Count : 0 00:36:01.428 ANA State : 1 00:36:01.428 Namespace Identifier : 1 00:36:01.428 00:36:01.428 Commands Supported and Effects 00:36:01.428 ============================== 00:36:01.428 Admin Commands 00:36:01.428 -------------- 00:36:01.428 Get Log Page (02h): Supported 00:36:01.428 Identify (06h): Supported 00:36:01.428 Abort (08h): Supported 00:36:01.428 Set Features (09h): Supported 00:36:01.428 Get Features (0Ah): Supported 00:36:01.428 Asynchronous Event Request (0Ch): Supported 00:36:01.428 Keep Alive (18h): Supported 00:36:01.428 I/O Commands 00:36:01.428 ------------ 00:36:01.428 Flush (00h): Supported 00:36:01.428 Write (01h): Supported LBA-Change 00:36:01.428 Read (02h): Supported 00:36:01.428 Write Zeroes (08h): Supported LBA-Change 00:36:01.428 Dataset Management (09h): Supported 00:36:01.428 00:36:01.428 Error Log 00:36:01.428 ========= 
00:36:01.428 Entry: 0 00:36:01.428 Error Count: 0x3 00:36:01.428 Submission Queue Id: 0x0 00:36:01.428 Command Id: 0x5 00:36:01.428 Phase Bit: 0 00:36:01.428 Status Code: 0x2 00:36:01.428 Status Code Type: 0x0 00:36:01.428 Do Not Retry: 1 00:36:01.428 Error Location: 0x28 00:36:01.428 LBA: 0x0 00:36:01.428 Namespace: 0x0 00:36:01.428 Vendor Log Page: 0x0 00:36:01.428 ----------- 00:36:01.428 Entry: 1 00:36:01.428 Error Count: 0x2 00:36:01.428 Submission Queue Id: 0x0 00:36:01.428 Command Id: 0x5 00:36:01.428 Phase Bit: 0 00:36:01.428 Status Code: 0x2 00:36:01.428 Status Code Type: 0x0 00:36:01.428 Do Not Retry: 1 00:36:01.428 Error Location: 0x28 00:36:01.428 LBA: 0x0 00:36:01.428 Namespace: 0x0 00:36:01.428 Vendor Log Page: 0x0 00:36:01.428 ----------- 00:36:01.428 Entry: 2 00:36:01.428 Error Count: 0x1 00:36:01.428 Submission Queue Id: 0x0 00:36:01.428 Command Id: 0x4 00:36:01.428 Phase Bit: 0 00:36:01.428 Status Code: 0x2 00:36:01.428 Status Code Type: 0x0 00:36:01.428 Do Not Retry: 1 00:36:01.428 Error Location: 0x28 00:36:01.428 LBA: 0x0 00:36:01.428 Namespace: 0x0 00:36:01.428 Vendor Log Page: 0x0 00:36:01.428 00:36:01.428 Number of Queues 00:36:01.428 ================ 00:36:01.428 Number of I/O Submission Queues: 128 00:36:01.428 Number of I/O Completion Queues: 128 00:36:01.428 00:36:01.428 ZNS Specific Controller Data 00:36:01.428 ============================ 00:36:01.428 Zone Append Size Limit: 0 00:36:01.428 00:36:01.428 00:36:01.428 Active Namespaces 00:36:01.428 ================= 00:36:01.428 get_feature(0x05) failed 00:36:01.428 Namespace ID:1 00:36:01.428 Command Set Identifier: NVM (00h) 00:36:01.428 Deallocate: Supported 00:36:01.428 Deallocated/Unwritten Error: Not Supported 00:36:01.428 Deallocated Read Value: Unknown 00:36:01.428 Deallocate in Write Zeroes: Not Supported 00:36:01.428 Deallocated Guard Field: 0xFFFF 00:36:01.428 Flush: Supported 00:36:01.429 Reservation: Not Supported 00:36:01.429 Namespace Sharing Capabilities: Multiple 
Controllers 00:36:01.429 Size (in LBAs): 3750748848 (1788GiB) 00:36:01.429 Capacity (in LBAs): 3750748848 (1788GiB) 00:36:01.429 Utilization (in LBAs): 3750748848 (1788GiB) 00:36:01.429 UUID: 012c8cfc-a498-4472-9b01-ea66fb9fad15 00:36:01.429 Thin Provisioning: Not Supported 00:36:01.429 Per-NS Atomic Units: Yes 00:36:01.429 Atomic Write Unit (Normal): 8 00:36:01.429 Atomic Write Unit (PFail): 8 00:36:01.429 Preferred Write Granularity: 8 00:36:01.429 Atomic Compare & Write Unit: 8 00:36:01.429 Atomic Boundary Size (Normal): 0 00:36:01.429 Atomic Boundary Size (PFail): 0 00:36:01.429 Atomic Boundary Offset: 0 00:36:01.429 NGUID/EUI64 Never Reused: No 00:36:01.429 ANA group ID: 1 00:36:01.429 Namespace Write Protected: No 00:36:01.429 Number of LBA Formats: 1 00:36:01.429 Current LBA Format: LBA Format #00 00:36:01.429 LBA Format #00: Data Size: 512 Metadata Size: 0 00:36:01.429 00:36:01.429 15:53:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:36:01.429 15:53:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:36:01.429 15:53:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:36:01.429 15:53:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:01.429 15:53:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:36:01.429 15:53:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:01.429 15:53:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:01.429 rmmod nvme_tcp 00:36:01.690 rmmod nvme_fabrics 00:36:01.690 15:53:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:01.690 15:53:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:36:01.690 15:53:40 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:36:01.690 15:53:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:36:01.690 15:53:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:36:01.690 15:53:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:36:01.690 15:53:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:36:01.690 15:53:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:36:01.690 15:53:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # iptables-save 00:36:01.690 15:53:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:36:01.690 15:53:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # iptables-restore 00:36:01.690 15:53:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:01.690 15:53:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:01.690 15:53:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:01.690 15:53:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:01.690 15:53:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:03.601 15:53:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:03.602 15:53:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:36:03.602 15:53:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # [[ -e 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:03.602 15:53:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # echo 0 00:36:03.602 15:53:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:03.602 15:53:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:03.602 15:53:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:03.602 15:53:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:03.602 15:53:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:36:03.602 15:53:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:36:03.862 15:53:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@722 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:07.164 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:36:07.164 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:36:07.164 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:36:07.164 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:36:07.164 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:36:07.164 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:36:07.164 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:36:07.424 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:36:07.425 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:36:07.425 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:36:07.425 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:36:07.425 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:36:07.425 0000:00:01.2 (8086 0b00): ioatdma 
-> vfio-pci 00:36:07.425 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:36:07.425 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:36:07.425 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:36:07.425 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:36:07.995 00:36:07.995 real 0m19.780s 00:36:07.995 user 0m5.534s 00:36:07.995 sys 0m11.207s 00:36:07.995 15:53:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:07.995 15:53:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:36:07.995 ************************************ 00:36:07.995 END TEST nvmf_identify_kernel_target 00:36:07.995 ************************************ 00:36:07.995 15:53:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:36:07.995 15:53:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:36:07.995 15:53:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:07.995 15:53:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.995 ************************************ 00:36:07.995 START TEST nvmf_auth_host 00:36:07.995 ************************************ 00:36:07.995 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:36:07.995 * Looking for test storage... 
00:36:07.995 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:07.995 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:36:07.995 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lcov --version 00:36:07.995 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:36:07.995 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:36:07.996 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:07.996 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:07.996 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:07.996 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:36:07.996 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:36:07.996 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:36:07.996 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:36:07.996 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:36:07.996 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:36:07.996 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:36:07.996 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:07.996 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:36:07.996 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:36:07.996 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:07.996 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:07.996 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:36:07.996 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:36:07.996 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:07.996 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:36:07.996 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:36:07.996 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:36:07.996 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:36:07.996 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:07.996 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:36:07.996 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:36:07.996 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:07.996 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:07.996 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:36:07.996 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:07.996 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:36:07.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:07.996 --rc genhtml_branch_coverage=1 00:36:07.996 --rc genhtml_function_coverage=1 00:36:07.996 --rc genhtml_legend=1 00:36:07.996 --rc geninfo_all_blocks=1 00:36:07.996 --rc geninfo_unexecuted_blocks=1 00:36:07.996 00:36:07.996 ' 00:36:07.996 15:53:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:36:07.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:07.996 --rc genhtml_branch_coverage=1 00:36:07.996 --rc genhtml_function_coverage=1 00:36:07.996 --rc genhtml_legend=1 00:36:07.996 --rc geninfo_all_blocks=1 00:36:07.996 --rc geninfo_unexecuted_blocks=1 00:36:07.996 00:36:07.996 ' 00:36:07.996 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:36:07.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:07.996 --rc genhtml_branch_coverage=1 00:36:07.996 --rc genhtml_function_coverage=1 00:36:07.996 --rc genhtml_legend=1 00:36:07.996 --rc geninfo_all_blocks=1 00:36:07.996 --rc geninfo_unexecuted_blocks=1 00:36:07.996 00:36:07.996 ' 00:36:07.996 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:36:07.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:07.996 --rc genhtml_branch_coverage=1 00:36:07.996 --rc genhtml_function_coverage=1 00:36:07.996 --rc genhtml_legend=1 00:36:07.996 --rc geninfo_all_blocks=1 00:36:07.996 --rc geninfo_unexecuted_blocks=1 00:36:07.996 00:36:07.996 ' 00:36:07.996 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:07.996 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:36:07.996 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:07.996 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:07.996 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:07.996 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:07.996 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:36:07.996 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:07.996 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:07.996 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:07.996 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:07.996 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:07.996 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:07.996 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:07.996 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:07.996 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:07.996 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:07.996 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:07.996 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:07.996 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:36:07.996 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:07.996 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:07.996 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:07.996 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:07.996 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:07.997 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:07.997 15:53:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:36:07.997 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:07.997 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:36:07.997 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:07.997 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:07.997 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:07.997 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:07.997 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:07.997 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:07.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:07.997 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:07.997 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:07.997 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:08.257 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:36:08.257 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:36:08.257 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:36:08.257 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:36:08.257 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:08.257 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:08.257 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:36:08.257 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:36:08.257 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:36:08.257 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:36:08.257 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:08.257 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # prepare_net_devs 00:36:08.257 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@434 -- # local -g is_hw=no 00:36:08.257 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # remove_spdk_ns 00:36:08.257 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:08.257 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:08.257 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:08.257 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:36:08.257 15:53:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:36:08.257 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:36:08.257 15:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.402 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:16.402 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:36:16.402 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:16.402 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:16.402 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:16.402 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:16.402 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:16.402 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:36:16.402 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:16.402 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:36:16.402 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:36:16.402 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:36:16.402 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:36:16.402 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:36:16.402 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:36:16.402 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:16.402 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:16.402 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:16.402 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:16.402 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:16.402 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:16.402 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:16.402 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:16.402 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:16.402 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:16.402 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:16.402 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:36:16.402 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:36:16.402 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:36:16.402 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:36:16.402 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:36:16.402 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:36:16.402 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:36:16.402 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@365 -- # echo 
'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:36:16.402 Found 0000:31:00.0 (0x8086 - 0x159b) 00:36:16.402 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:36:16.402 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:36:16.402 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:16.402 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:16.402 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:36:16.402 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:36:16.402 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:16.402 Found 0000:31:00.1 (0x8086 - 0x159b) 00:36:16.402 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:36:16.402 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:36:16.402 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:16.402 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:16.403 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:36:16.403 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:36:16.403 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:36:16.403 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:36:16.403 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:36:16.403 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:16.403 
15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:36:16.403 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:16.403 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ up == up ]] 00:36:16.403 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:36:16.403 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:16.403 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:16.403 Found net devices under 0000:31:00.0: cvl_0_0 00:36:16.403 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:36:16.403 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:36:16.403 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:16.403 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:36:16.403 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:16.403 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ up == up ]] 00:36:16.403 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:36:16.403 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:16.403 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:16.403 Found net devices under 0000:31:00.1: cvl_0_1 00:36:16.403 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:36:16.403 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@428 -- # (( 2 == 0 )) 00:36:16.403 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # is_hw=yes 00:36:16.403 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:36:16.403 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:36:16.403 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:36:16.403 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:16.403 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:16.403 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:16.403 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:16.403 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:16.403 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:16.403 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:16.403 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:16.403 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:16.403 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:16.403 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:16.403 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:16.403 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:16.403 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns 
add cvl_0_0_ns_spdk 00:36:16.403 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:16.403 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:16.403 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:16.403 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:16.403 15:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:16.403 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:16.403 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:16.403 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:16.403 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:16.403 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:16.403 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.679 ms 00:36:16.403 00:36:16.403 --- 10.0.0.2 ping statistics --- 00:36:16.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:16.403 rtt min/avg/max/mdev = 0.679/0.679/0.679/0.000 ms 00:36:16.403 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:16.403 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:16.403 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.349 ms 00:36:16.403 00:36:16.403 --- 10.0.0.1 ping statistics --- 00:36:16.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:16.403 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:36:16.403 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:16.403 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # return 0 00:36:16.403 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:36:16.403 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:16.403 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:36:16.403 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:36:16.403 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:16.403 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:36:16.403 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:36:16.403 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:36:16.403 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:36:16.403 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:16.403 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.403 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # nvmfpid=3375088 00:36:16.403 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # waitforlisten 3375088 00:36:16.403 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:36:16.403 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 3375088 ']' 00:36:16.403 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:16.403 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:16.403 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:16.403 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:16.403 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.403 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:16.403 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:36:16.403 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:36:16.403 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:16.403 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.403 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:36:16.404 15:53:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=1641c8dc02fa7a296c105c15ea34b1e9 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.tXo 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 1641c8dc02fa7a296c105c15ea34b1e9 0 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 1641c8dc02fa7a296c105c15ea34b1e9 0 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=1641c8dc02fa7a296c105c15ea34b1e9 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.tXo 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.tXo 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.tXo 
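The trace above shows `gen_dhchap_key` drawing random bytes with `xxd` from `/dev/urandom`, formatting them via an inline `python -` step, and writing the result to a `mktemp` file with mode 0600. A minimal sketch of that flow is below. It is an assumption based on the NVMe DH-HMAC-CHAP secret representation (TP 8006), not SPDK's exact `format_dhchap_key` code: the secret string is `DHHC-1:<digest>:` followed by base64 of the key bytes with a little-endian CRC32 appended, then a trailing colon, where `<digest>` is `00` (none), `01` (SHA-256), `02` (SHA-384), or `03` (SHA-512) — matching the digest indices 0–3 passed in the trace.

```shell
# Hedged sketch (assumption): reproduce the DH-HMAC-CHAP secret format
# that gen_dhchap_key emits in the trace above.
key=$(xxd -p -c0 -l 16 /dev/urandom)   # 16 random bytes -> 32 hex chars, as in the trace
digest=0                               # 0 = null digest, per the 'digest=0' steps above
formatted=$(python3 - "$key" "$digest" <<'PY'
import base64, binascii, sys
key = bytes.fromhex(sys.argv[1])
crc = binascii.crc32(key).to_bytes(4, "little")  # CRC32 of the key, appended little-endian
print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(key + crc).decode()}:")
PY
)
file=$(mktemp -t spdk.key-null.XXX)
printf '%s\n' "$formatted" > "$file"
chmod 0600 "$file"                     # secrets must not be world-readable
echo "$file"
```

The same pattern repeats for every key/ckey pair in the trace, varying only the digest index and the byte length (16, 24, or 32 bytes for 32-, 48-, and 64-hex-character keys).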
00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha512 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=64 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=c878dcf87f8c67ceece7903352bbea9bb49edde64a05272f3c44941e0e76d385 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.jRy 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key c878dcf87f8c67ceece7903352bbea9bb49edde64a05272f3c44941e0e76d385 3 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 c878dcf87f8c67ceece7903352bbea9bb49edde64a05272f3c44941e0e76d385 3 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=c878dcf87f8c67ceece7903352bbea9bb49edde64a05272f3c44941e0e76d385 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=3 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@729 -- # python - 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.jRy 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.jRy 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.jRy 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=7775a782101a9d0752752a19b10d278c9bc2e4ed35590d07 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.LL6 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 7775a782101a9d0752752a19b10d278c9bc2e4ed35590d07 0 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 7775a782101a9d0752752a19b10d278c9bc2e4ed35590d07 0 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # 
prefix=DHHC-1 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=7775a782101a9d0752752a19b10d278c9bc2e4ed35590d07 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.LL6 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.LL6 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.LL6 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha384 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=4751ce8254851d4c4b24c5cbcd8f7ad9e061cedd68dfa16f 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.gMv 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 4751ce8254851d4c4b24c5cbcd8f7ad9e061cedd68dfa16f 2 00:36:16.404 15:53:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 4751ce8254851d4c4b24c5cbcd8f7ad9e061cedd68dfa16f 2 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=4751ce8254851d4c4b24c5cbcd8f7ad9e061cedd68dfa16f 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=2 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.gMv 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.gMv 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.gMv 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha256 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=d524bb67f08284c8fcfbecfbb18aa814 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 
00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.MKD 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key d524bb67f08284c8fcfbecfbb18aa814 1 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 d524bb67f08284c8fcfbecfbb18aa814 1 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:36:16.404 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:36:16.405 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=d524bb67f08284c8fcfbecfbb18aa814 00:36:16.405 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=1 00:36:16.405 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:36:16.405 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.MKD 00:36:16.405 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.MKD 00:36:16.405 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.MKD 00:36:16.405 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:36:16.405 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:36:16.405 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:16.405 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:36:16.405 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha256 00:36:16.405 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:36:16.667 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 
/dev/urandom 00:36:16.667 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=f9b3d61d0c5012e4819c364e60213b1c 00:36:16.667 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:36:16.667 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.hGO 00:36:16.667 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key f9b3d61d0c5012e4819c364e60213b1c 1 00:36:16.667 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 f9b3d61d0c5012e4819c364e60213b1c 1 00:36:16.667 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:36:16.667 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:36:16.667 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=f9b3d61d0c5012e4819c364e60213b1c 00:36:16.667 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=1 00:36:16.667 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:36:16.667 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.hGO 00:36:16.667 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.hGO 00:36:16.667 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.hGO 00:36:16.667 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:36:16.667 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:36:16.667 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:16.667 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:36:16.667 15:53:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha384 00:36:16.667 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:36:16.667 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:36:16.667 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=755c620b97d6fbef08b7522568da6a9f9bd7054981a581ff 00:36:16.667 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:36:16.667 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.j9r 00:36:16.667 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 755c620b97d6fbef08b7522568da6a9f9bd7054981a581ff 2 00:36:16.667 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 755c620b97d6fbef08b7522568da6a9f9bd7054981a581ff 2 00:36:16.667 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:36:16.667 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:36:16.667 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=755c620b97d6fbef08b7522568da6a9f9bd7054981a581ff 00:36:16.667 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=2 00:36:16.667 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:36:16.667 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.j9r 00:36:16.667 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.j9r 00:36:16.667 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.j9r 00:36:16.667 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:36:16.667 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
local digest len file key 00:36:16.667 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:16.667 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:36:16.667 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:36:16.667 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:36:16.667 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:16.667 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=ff54639df05b23bc8ffc7056abc78dbe 00:36:16.667 15:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:36:16.667 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.NEK 00:36:16.667 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key ff54639df05b23bc8ffc7056abc78dbe 0 00:36:16.667 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 ff54639df05b23bc8ffc7056abc78dbe 0 00:36:16.667 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:36:16.667 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:36:16.667 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=ff54639df05b23bc8ffc7056abc78dbe 00:36:16.668 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:36:16.668 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:36:16.668 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.NEK 00:36:16.668 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.NEK 00:36:16.668 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 
-- # ckeys[3]=/tmp/spdk.key-null.NEK 00:36:16.668 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:36:16.668 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:36:16.668 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:16.668 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:36:16.668 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha512 00:36:16.668 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=64 00:36:16.668 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:36:16.668 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=c4c9d79f0311025698ab538a833fff7c00cb52b21819061d9b762899a1095b7e 00:36:16.668 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:36:16.668 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.3EI 00:36:16.668 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key c4c9d79f0311025698ab538a833fff7c00cb52b21819061d9b762899a1095b7e 3 00:36:16.668 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 c4c9d79f0311025698ab538a833fff7c00cb52b21819061d9b762899a1095b7e 3 00:36:16.668 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:36:16.668 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:36:16.668 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=c4c9d79f0311025698ab538a833fff7c00cb52b21819061d9b762899a1095b7e 00:36:16.668 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=3 00:36:16.668 15:53:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:36:16.668 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.3EI 00:36:16.930 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.3EI 00:36:16.930 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.3EI 00:36:16.930 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:36:16.930 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3375088 00:36:16.930 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 3375088 ']' 00:36:16.930 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:16.930 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:16.930 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:16.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
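The gen_dhchap_key / format_dhchap_key steps traced above (read N random bytes with `xxd -p /dev/urandom`, then wrap the hex string as `DHHC-1:<digest-id>:<base64>:`) follow the NVMe DH-HMAC-CHAP secret representation: the base64 payload is the ASCII secret followed by its CRC-32 in little-endian byte order. A minimal sketch of that formatting step (the helper name here is illustrative, not from the script):

```python
import base64
import zlib

def format_dhchap_key(hex_secret: str, digest_id: int) -> str:
    # The secret stays as its ASCII hex string; it is NOT decoded to raw bytes.
    # The base64 payload is secret || crc32(secret), CRC as 4 little-endian bytes.
    secret = hex_secret.encode()
    crc = zlib.crc32(secret).to_bytes(4, "little")
    return "DHHC-1:{:02x}:{}:".format(
        digest_id, base64.b64encode(secret + crc).decode())
```

Digest ids 0 through 3 map to null/sha256/sha384/sha512, matching the `digests` associative array declared at the top of the trace.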
00:36:16.930 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:16.930 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.930 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:16.930 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:36:16.930 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:16.930 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.tXo 00:36:16.930 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:16.930 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.930 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:16.930 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.jRy ]] 00:36:16.930 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.jRy 00:36:16.930 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:16.930 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.930 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:16.930 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:16.930 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.LL6 00:36:16.930 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:16.930 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:36:16.930 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:16.930 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.gMv ]] 00:36:16.930 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.gMv 00:36:16.930 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:16.930 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.930 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:16.930 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:16.930 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.MKD 00:36:16.930 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:16.930 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.930 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:16.930 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.hGO ]] 00:36:16.930 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.hGO 00:36:16.930 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:16.930 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.930 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:16.930 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:16.930 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.j9r 00:36:16.930 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:16.930 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.192 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.192 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.NEK ]] 00:36:17.192 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.NEK 00:36:17.192 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.192 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.192 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.192 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:17.192 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.3EI 00:36:17.192 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.192 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.192 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.192 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:36:17.192 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:36:17.192 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:36:17.192 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:17.192 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:17.192 15:53:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:17.192 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:17.192 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:17.192 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:17.192 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:17.192 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:17.192 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:17.192 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:17.192 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:36:17.192 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:36:17.192 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:36:17.192 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:17.192 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:17.192 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:17.192 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # local block nvme 00:36:17.192 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # [[ ! 
-e /sys/module/nvmet ]] 00:36:17.192 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@666 -- # modprobe nvmet 00:36:17.192 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:17.192 15:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:20.498 Waiting for block devices as requested 00:36:20.759 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:20.759 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:20.759 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:21.019 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:21.019 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:21.019 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:21.280 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:21.280 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:21.280 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:36:21.540 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:21.540 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:21.540 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:21.801 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:21.801 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:21.801 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:22.062 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:22.062 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:23.004 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:36:23.004 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:23.004 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:36:23.004 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:36:23.004 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:36:23.004 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:36:23.004 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:36:23.004 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:36:23.004 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:23.004 No valid GPT data, bailing 00:36:23.004 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:23.004 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:36:23.004 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:36:23.004 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:36:23.004 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # [[ -b /dev/nvme0n1 ]] 00:36:23.004 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:23.004 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:23.004 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:23.004 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:36:23.004 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo 1 00:36:23.004 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@692 -- # echo /dev/nvme0n1 00:36:23.004 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:36:23.004 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 
-- # echo 10.0.0.1 00:36:23.004 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo tcp 00:36:23.004 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 4420 00:36:23.004 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo ipv4 00:36:23.004 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:23.004 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:36:23.004 00:36:23.004 Discovery Log Number of Records 2, Generation counter 2 00:36:23.004 =====Discovery Log Entry 0====== 00:36:23.004 trtype: tcp 00:36:23.004 adrfam: ipv4 00:36:23.004 subtype: current discovery subsystem 00:36:23.004 treq: not specified, sq flow control disable supported 00:36:23.004 portid: 1 00:36:23.004 trsvcid: 4420 00:36:23.004 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:23.004 traddr: 10.0.0.1 00:36:23.004 eflags: none 00:36:23.004 sectype: none 00:36:23.004 =====Discovery Log Entry 1====== 00:36:23.004 trtype: tcp 00:36:23.004 adrfam: ipv4 00:36:23.004 subtype: nvme subsystem 00:36:23.004 treq: not specified, sq flow control disable supported 00:36:23.004 portid: 1 00:36:23.004 trsvcid: 4420 00:36:23.004 subnqn: nqn.2024-02.io.spdk:cnode0 00:36:23.004 traddr: 10.0.0.1 00:36:23.004 eflags: none 00:36:23.004 sectype: none 00:36:23.004 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:23.004 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:36:23.004 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:36:23.004 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:23.004 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:23.004 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:23.004 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:23.004 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:23.004 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzc3NWE3ODIxMDFhOWQwNzUyNzUyYTE5YjEwZDI3OGM5YmMyZTRlZDM1NTkwZDA3x1flIw==: 00:36:23.004 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDc1MWNlODI1NDg1MWQ0YzRiMjRjNWNiY2Q4ZjdhZDllMDYxY2VkZDY4ZGZhMTZmm79czg==: 00:36:23.004 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:23.005 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:23.005 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzc3NWE3ODIxMDFhOWQwNzUyNzUyYTE5YjEwZDI3OGM5YmMyZTRlZDM1NTkwZDA3x1flIw==: 00:36:23.005 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDc1MWNlODI1NDg1MWQ0YzRiMjRjNWNiY2Q4ZjdhZDllMDYxY2VkZDY4ZGZhMTZmm79czg==: ]] 00:36:23.005 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDc1MWNlODI1NDg1MWQ0YzRiMjRjNWNiY2Q4ZjdhZDllMDYxY2VkZDY4ZGZhMTZmm79czg==: 00:36:23.266 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:36:23.266 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:36:23.266 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:36:23.266 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:23.266 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:36:23.266 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:23.266 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:36:23.266 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:23.266 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:23.266 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:23.266 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:23.266 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.266 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.266 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.266 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:23.266 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:23.266 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:23.266 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:23.266 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:23.266 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:23.266 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:23.266 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:23.266 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:23.266 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:23.266 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:23.266 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:23.266 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.266 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.266 nvme0n1 00:36:23.266 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.266 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:23.266 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:23.266 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.266 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.267 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.267 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:23.267 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:23.267 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:36:23.267 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.267 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.267 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:36:23.267 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:23.267 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:23.267 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:36:23.267 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:23.267 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:23.267 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:23.267 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:23.267 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTY0MWM4ZGMwMmZhN2EyOTZjMTA1YzE1ZWEzNGIxZTnG5rKp: 00:36:23.267 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yzg3OGRjZjg3ZjhjNjdjZWVjZTc5MDMzNTJiYmVhOWJiNDllZGRlNjRhMDUyNzJmM2M0NDk0MWUwZTc2ZDM4NTILcl4=: 00:36:23.267 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:23.267 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:23.267 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTY0MWM4ZGMwMmZhN2EyOTZjMTA1YzE1ZWEzNGIxZTnG5rKp: 00:36:23.267 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yzg3OGRjZjg3ZjhjNjdjZWVjZTc5MDMzNTJiYmVhOWJiNDllZGRlNjRhMDUyNzJmM2M0NDk0MWUwZTc2ZDM4NTILcl4=: ]] 00:36:23.267 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:Yzg3OGRjZjg3ZjhjNjdjZWVjZTc5MDMzNTJiYmVhOWJiNDllZGRlNjRhMDUyNzJmM2M0NDk0MWUwZTc2ZDM4NTILcl4=: 00:36:23.267 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:36:23.267 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:23.267 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:23.267 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:23.267 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:23.267 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:23.267 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:23.267 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.267 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.267 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.267 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:23.267 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:23.267 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:23.267 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:23.267 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:23.267 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:23.267 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 
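The configure_kernel_target sequence traced earlier drives the kernel nvmet configfs tree: mkdir the subsystem, namespace, and port directories, echo each attribute, then symlink the subsystem under the port. A sketch that only builds the planned writes, without touching configfs (directory and attribute names are assumptions based on the kernel nvmet configfs layout; the helper name is illustrative):

```python
from pathlib import PurePosixPath

def kernel_target_plan(subnqn: str, ns_device: str, ip: str, port: str = "4420"):
    """Return the mkdirs, attribute writes, and symlink used to expose a
    kernel nvmet subsystem with one namespace over TCP."""
    nvmet = PurePosixPath("/sys/kernel/config/nvmet")
    subsys = nvmet / "subsystems" / subnqn
    ns1 = subsys / "namespaces" / "1"
    port_dir = nvmet / "ports" / "1"
    mkdirs = [subsys, ns1, port_dir]
    writes = [
        (subsys / "attr_model", f"SPDK-{subnqn}"),
        (subsys / "attr_allow_any_host", "1"),   # auth init later flips this to 0
        (ns1 / "device_path", ns_device),
        (ns1 / "enable", "1"),
        (port_dir / "addr_traddr", ip),
        (port_dir / "addr_trtype", "tcp"),
        (port_dir / "addr_trsvcid", port),
        (port_dir / "addr_adrfam", "ipv4"),
    ]
    # ln -s <subsys> <port>/subsystems/<subnqn>
    symlink = (subsys, port_dir / "subsystems" / subnqn)
    return mkdirs, writes, symlink
```

Applying the plan is then a loop of `mkdir -p` and single-value file writes, which is exactly what the echoed commands around `nvmf/common.sh@682`..`@701` in the trace perform.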
00:36:23.267 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:23.267 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:23.267 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:23.267 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:23.267 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:23.267 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.267 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.528 nvme0n1 00:36:23.528 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.528 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:23.528 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:23.528 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.528 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.528 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.528 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:23.528 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:23.528 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.528 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.528 15:54:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.528 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:23.528 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:23.528 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:23.528 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:23.528 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:23.528 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:23.528 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzc3NWE3ODIxMDFhOWQwNzUyNzUyYTE5YjEwZDI3OGM5YmMyZTRlZDM1NTkwZDA3x1flIw==: 00:36:23.528 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDc1MWNlODI1NDg1MWQ0YzRiMjRjNWNiY2Q4ZjdhZDllMDYxY2VkZDY4ZGZhMTZmm79czg==: 00:36:23.528 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:23.528 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:23.528 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzc3NWE3ODIxMDFhOWQwNzUyNzUyYTE5YjEwZDI3OGM5YmMyZTRlZDM1NTkwZDA3x1flIw==: 00:36:23.528 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDc1MWNlODI1NDg1MWQ0YzRiMjRjNWNiY2Q4ZjdhZDllMDYxY2VkZDY4ZGZhMTZmm79czg==: ]] 00:36:23.528 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDc1MWNlODI1NDg1MWQ0YzRiMjRjNWNiY2Q4ZjdhZDllMDYxY2VkZDY4ZGZhMTZmm79czg==: 00:36:23.528 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:36:23.528 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:23.528 
15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:23.528 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:23.528 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:23.528 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:23.528 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:23.528 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.528 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.528 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.528 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:23.528 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:23.528 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:23.528 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:23.528 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:23.528 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:23.528 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:23.528 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:23.528 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:23.528 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:23.528 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:23.528 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:23.528 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.528 15:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.789 nvme0n1 00:36:23.789 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.789 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:23.789 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:23.789 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.789 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.789 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.789 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:23.789 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:23.789 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.789 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.789 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.789 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:23.789 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:36:23.789 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:23.789 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:23.789 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:23.789 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:23.789 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDUyNGJiNjdmMDgyODRjOGZjZmJlY2ZiYjE4YWE4MTTBLwZ6: 00:36:23.789 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjliM2Q2MWQwYzUwMTJlNDgxOWMzNjRlNjAyMTNiMWPZSZoL: 00:36:23.789 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:23.789 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:23.789 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDUyNGJiNjdmMDgyODRjOGZjZmJlY2ZiYjE4YWE4MTTBLwZ6: 00:36:23.789 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjliM2Q2MWQwYzUwMTJlNDgxOWMzNjRlNjAyMTNiMWPZSZoL: ]] 00:36:23.789 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjliM2Q2MWQwYzUwMTJlNDgxOWMzNjRlNjAyMTNiMWPZSZoL: 00:36:23.789 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:36:23.789 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:23.789 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:23.789 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:23.789 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:23.789 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:23.789 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:23.789 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.789 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.789 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.789 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:23.789 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:23.789 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:23.789 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:23.789 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:23.789 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:23.789 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:23.789 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:23.789 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:23.789 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:23.789 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:23.790 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:23.790 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.790 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:36:24.052 nvme0n1 00:36:24.052 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.052 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:24.052 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.052 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:24.052 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.052 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.052 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:24.052 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:24.052 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.052 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.052 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.052 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:24.052 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:36:24.052 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:24.052 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:24.052 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:24.052 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:24.052 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NzU1YzYyMGI5N2Q2ZmJlZjA4Yjc1MjI1NjhkYTZhOWY5YmQ3MDU0OTgxYTU4MWZmJv0Sxw==: 00:36:24.052 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY1NDYzOWRmMDViMjNiYzhmZmM3MDU2YWJjNzhkYmX1HK+l: 00:36:24.052 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:24.052 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:24.052 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzU1YzYyMGI5N2Q2ZmJlZjA4Yjc1MjI1NjhkYTZhOWY5YmQ3MDU0OTgxYTU4MWZmJv0Sxw==: 00:36:24.052 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmY1NDYzOWRmMDViMjNiYzhmZmM3MDU2YWJjNzhkYmX1HK+l: ]] 00:36:24.052 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmY1NDYzOWRmMDViMjNiYzhmZmM3MDU2YWJjNzhkYmX1HK+l: 00:36:24.052 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:36:24.052 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:24.052 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:24.052 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:24.052 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:24.052 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:24.052 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:24.052 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.052 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.052 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.052 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:24.052 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:24.052 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:24.052 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:24.052 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:24.052 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:24.052 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:24.052 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:24.052 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:24.052 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:24.052 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:24.052 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:24.052 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.052 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.314 nvme0n1 00:36:24.314 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.314 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:24.314 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:36:24.314 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.314 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.314 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.314 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:24.314 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:24.314 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.314 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.314 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.314 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:24.314 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:36:24.314 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:24.314 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:24.314 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:24.314 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:24.314 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzRjOWQ3OWYwMzExMDI1Njk4YWI1MzhhODMzZmZmN2MwMGNiNTJiMjE4MTkwNjFkOWI3NjI4OTlhMTA5NWI3ZWiMYEQ=: 00:36:24.314 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:24.314 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:24.314 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:24.314 15:54:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzRjOWQ3OWYwMzExMDI1Njk4YWI1MzhhODMzZmZmN2MwMGNiNTJiMjE4MTkwNjFkOWI3NjI4OTlhMTA5NWI3ZWiMYEQ=: 00:36:24.314 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:24.314 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:36:24.314 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:24.314 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:24.314 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:24.314 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:24.314 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:24.314 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:24.314 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.314 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.314 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.314 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:24.314 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:24.314 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:24.314 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:24.314 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:24.314 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:24.314 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:24.314 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:24.314 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:24.314 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:24.314 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:24.315 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:24.315 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.315 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.575 nvme0n1 00:36:24.576 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.576 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:24.576 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:24.576 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.576 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.576 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.576 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:24.576 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:24.576 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.576 
15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.576 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.576 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:24.576 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:24.576 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:36:24.576 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:24.576 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:24.576 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:24.576 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:24.576 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTY0MWM4ZGMwMmZhN2EyOTZjMTA1YzE1ZWEzNGIxZTnG5rKp: 00:36:24.576 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yzg3OGRjZjg3ZjhjNjdjZWVjZTc5MDMzNTJiYmVhOWJiNDllZGRlNjRhMDUyNzJmM2M0NDk0MWUwZTc2ZDM4NTILcl4=: 00:36:24.576 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:24.576 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:24.576 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTY0MWM4ZGMwMmZhN2EyOTZjMTA1YzE1ZWEzNGIxZTnG5rKp: 00:36:24.576 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yzg3OGRjZjg3ZjhjNjdjZWVjZTc5MDMzNTJiYmVhOWJiNDllZGRlNjRhMDUyNzJmM2M0NDk0MWUwZTc2ZDM4NTILcl4=: ]] 00:36:24.576 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yzg3OGRjZjg3ZjhjNjdjZWVjZTc5MDMzNTJiYmVhOWJiNDllZGRlNjRhMDUyNzJmM2M0NDk0MWUwZTc2ZDM4NTILcl4=: 00:36:24.576 
15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:36:24.576 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:24.576 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:24.576 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:24.576 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:24.576 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:24.576 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:24.576 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.576 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.576 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.576 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:24.576 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:24.576 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:24.576 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:24.576 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:24.576 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:24.576 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:24.576 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:24.576 15:54:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:24.576 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:24.576 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:24.576 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:24.576 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.576 15:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.837 nvme0n1 00:36:24.837 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.837 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:24.837 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:24.837 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.837 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.837 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.837 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:24.837 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:24.837 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.837 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.837 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.837 15:54:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:24.837 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:36:24.837 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:24.837 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:24.837 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:24.837 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:24.837 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzc3NWE3ODIxMDFhOWQwNzUyNzUyYTE5YjEwZDI3OGM5YmMyZTRlZDM1NTkwZDA3x1flIw==: 00:36:24.837 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDc1MWNlODI1NDg1MWQ0YzRiMjRjNWNiY2Q4ZjdhZDllMDYxY2VkZDY4ZGZhMTZmm79czg==: 00:36:24.837 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:24.837 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:24.837 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzc3NWE3ODIxMDFhOWQwNzUyNzUyYTE5YjEwZDI3OGM5YmMyZTRlZDM1NTkwZDA3x1flIw==: 00:36:24.837 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDc1MWNlODI1NDg1MWQ0YzRiMjRjNWNiY2Q4ZjdhZDllMDYxY2VkZDY4ZGZhMTZmm79czg==: ]] 00:36:24.837 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDc1MWNlODI1NDg1MWQ0YzRiMjRjNWNiY2Q4ZjdhZDllMDYxY2VkZDY4ZGZhMTZmm79czg==: 00:36:24.837 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:36:24.837 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:24.837 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:24.837 15:54:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:24.837 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:24.837 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:24.837 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:24.837 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.837 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.837 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.837 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:24.837 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:24.837 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:24.837 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:24.837 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:24.837 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:24.837 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:24.837 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:24.837 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:24.837 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:24.837 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:24.837 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:24.837 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.837 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.098 nvme0n1 00:36:25.098 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.098 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:25.098 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:25.098 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.098 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.098 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.098 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:25.098 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:25.098 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.098 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.098 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.098 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:25.098 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:36:25.098 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:25.098 15:54:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:25.098 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:25.098 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:25.098 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDUyNGJiNjdmMDgyODRjOGZjZmJlY2ZiYjE4YWE4MTTBLwZ6: 00:36:25.098 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjliM2Q2MWQwYzUwMTJlNDgxOWMzNjRlNjAyMTNiMWPZSZoL: 00:36:25.098 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:25.098 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:25.098 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDUyNGJiNjdmMDgyODRjOGZjZmJlY2ZiYjE4YWE4MTTBLwZ6: 00:36:25.098 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjliM2Q2MWQwYzUwMTJlNDgxOWMzNjRlNjAyMTNiMWPZSZoL: ]] 00:36:25.098 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjliM2Q2MWQwYzUwMTJlNDgxOWMzNjRlNjAyMTNiMWPZSZoL: 00:36:25.098 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:36:25.098 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:25.098 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:25.098 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:25.098 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:25.098 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:25.098 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:36:25.098 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.098 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.098 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.098 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:25.098 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:25.098 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:25.098 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:25.098 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:25.098 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:25.098 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:25.098 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:25.098 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:25.098 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:25.098 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:25.098 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:25.098 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.098 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.360 nvme0n1 00:36:25.360 15:54:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.360 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:25.360 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:25.360 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.360 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.360 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.360 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:25.360 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:25.360 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.360 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.360 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.360 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:25.360 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:36:25.360 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:25.360 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:25.360 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:25.360 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:25.360 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzU1YzYyMGI5N2Q2ZmJlZjA4Yjc1MjI1NjhkYTZhOWY5YmQ3MDU0OTgxYTU4MWZmJv0Sxw==: 00:36:25.360 15:54:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY1NDYzOWRmMDViMjNiYzhmZmM3MDU2YWJjNzhkYmX1HK+l: 00:36:25.360 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:25.360 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:25.360 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzU1YzYyMGI5N2Q2ZmJlZjA4Yjc1MjI1NjhkYTZhOWY5YmQ3MDU0OTgxYTU4MWZmJv0Sxw==: 00:36:25.360 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmY1NDYzOWRmMDViMjNiYzhmZmM3MDU2YWJjNzhkYmX1HK+l: ]] 00:36:25.360 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmY1NDYzOWRmMDViMjNiYzhmZmM3MDU2YWJjNzhkYmX1HK+l: 00:36:25.360 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:36:25.360 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:25.360 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:25.360 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:25.360 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:25.360 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:25.361 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:25.361 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.361 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.361 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.361 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:36:25.361 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:25.361 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:25.361 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:25.361 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:25.361 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:25.361 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:25.361 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:25.361 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:25.361 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:25.361 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:25.361 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:25.361 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.361 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.622 nvme0n1 00:36:25.622 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.622 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:25.622 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:25.622 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:36:25.622 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.622 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.622 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:25.622 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:25.622 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.622 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.622 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.622 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:25.622 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:36:25.622 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:25.622 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:25.622 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:25.622 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:25.622 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzRjOWQ3OWYwMzExMDI1Njk4YWI1MzhhODMzZmZmN2MwMGNiNTJiMjE4MTkwNjFkOWI3NjI4OTlhMTA5NWI3ZWiMYEQ=: 00:36:25.622 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:25.622 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:25.622 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:25.622 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YzRjOWQ3OWYwMzExMDI1Njk4YWI1MzhhODMzZmZmN2MwMGNiNTJiMjE4MTkwNjFkOWI3NjI4OTlhMTA5NWI3ZWiMYEQ=: 00:36:25.622 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:25.622 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:36:25.622 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:25.622 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:25.622 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:25.622 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:25.622 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:25.622 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:25.622 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.622 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.622 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.622 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:25.623 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:25.623 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:25.623 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:25.623 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:25.623 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:25.623 15:54:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:25.623 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:25.623 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:25.623 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:25.623 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:25.623 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:25.623 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.623 15:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.884 nvme0n1 00:36:25.884 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.884 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:25.884 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:25.884 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.884 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.884 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.884 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:25.884 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:25.884 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.884 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:25.884 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.884 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:25.884 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:25.884 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:36:25.884 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:25.884 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:25.884 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:25.884 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:25.884 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTY0MWM4ZGMwMmZhN2EyOTZjMTA1YzE1ZWEzNGIxZTnG5rKp: 00:36:25.884 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yzg3OGRjZjg3ZjhjNjdjZWVjZTc5MDMzNTJiYmVhOWJiNDllZGRlNjRhMDUyNzJmM2M0NDk0MWUwZTc2ZDM4NTILcl4=: 00:36:25.884 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:25.884 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:25.884 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTY0MWM4ZGMwMmZhN2EyOTZjMTA1YzE1ZWEzNGIxZTnG5rKp: 00:36:25.884 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yzg3OGRjZjg3ZjhjNjdjZWVjZTc5MDMzNTJiYmVhOWJiNDllZGRlNjRhMDUyNzJmM2M0NDk0MWUwZTc2ZDM4NTILcl4=: ]] 00:36:25.884 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yzg3OGRjZjg3ZjhjNjdjZWVjZTc5MDMzNTJiYmVhOWJiNDllZGRlNjRhMDUyNzJmM2M0NDk0MWUwZTc2ZDM4NTILcl4=: 00:36:25.884 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:36:25.884 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:25.884 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:25.884 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:25.884 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:25.884 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:25.884 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:25.884 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.884 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.884 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.884 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:25.884 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:25.884 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:25.884 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:25.884 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:25.884 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:25.884 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:25.884 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:25.884 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip=NVMF_INITIATOR_IP 00:36:25.884 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:25.884 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:25.884 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:25.884 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.884 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.147 nvme0n1 00:36:26.147 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:26.147 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:26.147 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:26.147 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:26.147 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.147 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:26.147 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:26.147 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:26.147 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:26.147 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.147 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:26.147 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:36:26.147 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:36:26.147 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:26.147 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:26.147 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:26.147 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:26.147 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzc3NWE3ODIxMDFhOWQwNzUyNzUyYTE5YjEwZDI3OGM5YmMyZTRlZDM1NTkwZDA3x1flIw==: 00:36:26.147 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDc1MWNlODI1NDg1MWQ0YzRiMjRjNWNiY2Q4ZjdhZDllMDYxY2VkZDY4ZGZhMTZmm79czg==: 00:36:26.147 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:26.147 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:26.147 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzc3NWE3ODIxMDFhOWQwNzUyNzUyYTE5YjEwZDI3OGM5YmMyZTRlZDM1NTkwZDA3x1flIw==: 00:36:26.147 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDc1MWNlODI1NDg1MWQ0YzRiMjRjNWNiY2Q4ZjdhZDllMDYxY2VkZDY4ZGZhMTZmm79czg==: ]] 00:36:26.147 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDc1MWNlODI1NDg1MWQ0YzRiMjRjNWNiY2Q4ZjdhZDllMDYxY2VkZDY4ZGZhMTZmm79czg==: 00:36:26.147 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:36:26.147 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:26.147 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:26.147 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:26.147 
15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:26.147 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:26.147 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:26.147 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:26.147 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.147 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:26.147 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:26.147 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:26.147 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:26.147 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:26.147 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:26.147 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:26.147 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:26.147 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:26.147 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:26.147 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:26.147 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:26.147 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:26.147 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:26.147 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.409 nvme0n1 00:36:26.409 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:26.409 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:26.409 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:26.409 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:26.409 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.409 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:26.409 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:26.409 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:26.409 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:26.409 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.409 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:26.409 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:26.409 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:36:26.409 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:26.409 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:26.409 15:54:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:26.409 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:26.409 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDUyNGJiNjdmMDgyODRjOGZjZmJlY2ZiYjE4YWE4MTTBLwZ6: 00:36:26.409 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjliM2Q2MWQwYzUwMTJlNDgxOWMzNjRlNjAyMTNiMWPZSZoL: 00:36:26.409 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:26.409 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:26.409 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDUyNGJiNjdmMDgyODRjOGZjZmJlY2ZiYjE4YWE4MTTBLwZ6: 00:36:26.670 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjliM2Q2MWQwYzUwMTJlNDgxOWMzNjRlNjAyMTNiMWPZSZoL: ]] 00:36:26.670 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjliM2Q2MWQwYzUwMTJlNDgxOWMzNjRlNjAyMTNiMWPZSZoL: 00:36:26.670 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:36:26.670 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:26.670 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:26.670 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:26.670 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:26.670 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:26.670 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:26.670 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:36:26.670 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.670 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:26.670 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:26.670 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:26.670 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:26.670 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:26.670 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:26.670 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:26.670 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:26.670 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:26.670 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:26.670 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:26.670 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:26.670 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:26.670 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:26.670 15:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.933 nvme0n1 00:36:26.933 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:26.933 15:54:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:26.933 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:26.933 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:26.933 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.933 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:26.933 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:26.933 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:26.933 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:26.933 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.933 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:26.933 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:26.933 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:36:26.933 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:26.933 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:26.933 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:26.933 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:26.933 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzU1YzYyMGI5N2Q2ZmJlZjA4Yjc1MjI1NjhkYTZhOWY5YmQ3MDU0OTgxYTU4MWZmJv0Sxw==: 00:36:26.933 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY1NDYzOWRmMDViMjNiYzhmZmM3MDU2YWJjNzhkYmX1HK+l: 00:36:26.933 
15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:26.933 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:26.933 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzU1YzYyMGI5N2Q2ZmJlZjA4Yjc1MjI1NjhkYTZhOWY5YmQ3MDU0OTgxYTU4MWZmJv0Sxw==: 00:36:26.933 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmY1NDYzOWRmMDViMjNiYzhmZmM3MDU2YWJjNzhkYmX1HK+l: ]] 00:36:26.933 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmY1NDYzOWRmMDViMjNiYzhmZmM3MDU2YWJjNzhkYmX1HK+l: 00:36:26.933 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:36:26.933 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:26.933 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:26.933 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:26.933 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:26.933 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:26.933 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:26.933 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:26.933 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.933 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:26.933 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:26.933 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:26.933 15:54:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:26.933 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:26.933 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:26.933 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:26.933 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:26.933 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:26.933 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:26.933 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:26.933 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:26.933 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:26.933 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:26.933 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.194 nvme0n1 00:36:27.195 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:27.195 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:27.195 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:27.195 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:27.195 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.195 15:54:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:27.195 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:27.195 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:27.195 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:27.195 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.195 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:27.195 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:27.195 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:36:27.195 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:27.195 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:27.195 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:27.195 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:27.195 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzRjOWQ3OWYwMzExMDI1Njk4YWI1MzhhODMzZmZmN2MwMGNiNTJiMjE4MTkwNjFkOWI3NjI4OTlhMTA5NWI3ZWiMYEQ=: 00:36:27.195 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:27.195 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:27.195 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:27.195 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzRjOWQ3OWYwMzExMDI1Njk4YWI1MzhhODMzZmZmN2MwMGNiNTJiMjE4MTkwNjFkOWI3NjI4OTlhMTA5NWI3ZWiMYEQ=: 00:36:27.195 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:36:27.195 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:36:27.195 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:27.195 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:27.195 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:27.195 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:27.195 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:27.195 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:27.195 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:27.195 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.195 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:27.195 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:27.195 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:27.195 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:27.195 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:27.195 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:27.195 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:27.195 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:27.195 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:27.195 
15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:27.195 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:27.195 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:27.195 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:27.195 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:27.195 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.460 nvme0n1 00:36:27.460 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:27.460 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:27.460 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:27.460 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:27.460 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.460 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:27.460 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:27.460 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:27.460 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:27.460 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.460 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:27.460 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:27.460 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:27.460 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:36:27.460 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:27.460 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:27.460 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:27.460 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:27.460 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTY0MWM4ZGMwMmZhN2EyOTZjMTA1YzE1ZWEzNGIxZTnG5rKp: 00:36:27.460 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yzg3OGRjZjg3ZjhjNjdjZWVjZTc5MDMzNTJiYmVhOWJiNDllZGRlNjRhMDUyNzJmM2M0NDk0MWUwZTc2ZDM4NTILcl4=: 00:36:27.460 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:27.460 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:27.460 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTY0MWM4ZGMwMmZhN2EyOTZjMTA1YzE1ZWEzNGIxZTnG5rKp: 00:36:27.460 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yzg3OGRjZjg3ZjhjNjdjZWVjZTc5MDMzNTJiYmVhOWJiNDllZGRlNjRhMDUyNzJmM2M0NDk0MWUwZTc2ZDM4NTILcl4=: ]] 00:36:27.460 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yzg3OGRjZjg3ZjhjNjdjZWVjZTc5MDMzNTJiYmVhOWJiNDllZGRlNjRhMDUyNzJmM2M0NDk0MWUwZTc2ZDM4NTILcl4=: 00:36:27.460 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:36:27.460 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:27.460 15:54:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:27.460 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:27.460 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:27.460 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:27.460 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:27.460 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:27.460 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.460 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:27.460 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:27.460 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:27.460 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:27.460 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:27.460 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:27.461 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:27.461 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:27.461 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:27.461 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:27.461 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:27.461 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:27.461 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:27.461 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:27.461 15:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.031 nvme0n1 00:36:28.031 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:28.031 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:28.031 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:28.031 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:28.031 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.031 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:28.031 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:28.031 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:28.031 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:28.031 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.032 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:28.032 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:28.032 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:36:28.032 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:28.032 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:28.032 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:28.032 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:28.032 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzc3NWE3ODIxMDFhOWQwNzUyNzUyYTE5YjEwZDI3OGM5YmMyZTRlZDM1NTkwZDA3x1flIw==: 00:36:28.032 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDc1MWNlODI1NDg1MWQ0YzRiMjRjNWNiY2Q4ZjdhZDllMDYxY2VkZDY4ZGZhMTZmm79czg==: 00:36:28.032 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:28.032 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:28.032 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzc3NWE3ODIxMDFhOWQwNzUyNzUyYTE5YjEwZDI3OGM5YmMyZTRlZDM1NTkwZDA3x1flIw==: 00:36:28.032 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDc1MWNlODI1NDg1MWQ0YzRiMjRjNWNiY2Q4ZjdhZDllMDYxY2VkZDY4ZGZhMTZmm79czg==: ]] 00:36:28.032 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDc1MWNlODI1NDg1MWQ0YzRiMjRjNWNiY2Q4ZjdhZDllMDYxY2VkZDY4ZGZhMTZmm79czg==: 00:36:28.032 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:36:28.032 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:28.032 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:28.032 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:28.032 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:28.032 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:28.032 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:28.032 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:28.032 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.032 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:28.032 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:28.032 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:28.032 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:28.032 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:28.032 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:28.032 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:28.032 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:28.032 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:28.032 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:28.032 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:28.032 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:28.032 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:28.032 15:54:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:28.032 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.603 nvme0n1 00:36:28.603 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:28.603 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:28.603 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:28.603 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:28.603 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.603 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:28.603 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:28.603 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:28.603 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:28.603 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.603 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:28.603 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:28.603 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:36:28.603 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:28.603 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:28.603 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:28.603 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:36:28.603 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDUyNGJiNjdmMDgyODRjOGZjZmJlY2ZiYjE4YWE4MTTBLwZ6: 00:36:28.603 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjliM2Q2MWQwYzUwMTJlNDgxOWMzNjRlNjAyMTNiMWPZSZoL: 00:36:28.603 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:28.603 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:28.603 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDUyNGJiNjdmMDgyODRjOGZjZmJlY2ZiYjE4YWE4MTTBLwZ6: 00:36:28.603 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjliM2Q2MWQwYzUwMTJlNDgxOWMzNjRlNjAyMTNiMWPZSZoL: ]] 00:36:28.603 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjliM2Q2MWQwYzUwMTJlNDgxOWMzNjRlNjAyMTNiMWPZSZoL: 00:36:28.603 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:36:28.603 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:28.603 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:28.603 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:28.603 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:28.603 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:28.603 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:28.603 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:28.603 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.603 15:54:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:28.603 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:28.603 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:28.603 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:28.603 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:28.603 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:28.603 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:28.603 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:28.603 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:28.603 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:28.603 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:28.603 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:28.603 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:28.603 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:28.603 15:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.865 nvme0n1 00:36:28.865 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:28.865 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:28.865 15:54:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:28.865 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:29.128 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.128 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:29.128 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:29.128 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:29.128 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:29.128 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.128 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:29.128 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:29.128 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:36:29.128 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:29.128 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:29.128 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:29.128 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:29.128 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzU1YzYyMGI5N2Q2ZmJlZjA4Yjc1MjI1NjhkYTZhOWY5YmQ3MDU0OTgxYTU4MWZmJv0Sxw==: 00:36:29.128 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY1NDYzOWRmMDViMjNiYzhmZmM3MDU2YWJjNzhkYmX1HK+l: 00:36:29.128 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:29.128 15:54:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:29.128 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzU1YzYyMGI5N2Q2ZmJlZjA4Yjc1MjI1NjhkYTZhOWY5YmQ3MDU0OTgxYTU4MWZmJv0Sxw==: 00:36:29.128 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmY1NDYzOWRmMDViMjNiYzhmZmM3MDU2YWJjNzhkYmX1HK+l: ]] 00:36:29.128 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmY1NDYzOWRmMDViMjNiYzhmZmM3MDU2YWJjNzhkYmX1HK+l: 00:36:29.128 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:36:29.128 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:29.128 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:29.128 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:29.128 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:29.128 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:29.128 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:29.128 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:29.128 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.128 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:29.128 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:29.128 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:29.128 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:29.128 15:54:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:29.128 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:29.128 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:29.128 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:29.128 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:29.128 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:29.128 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:29.128 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:29.128 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:29.128 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:29.128 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.390 nvme0n1 00:36:29.390 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:29.390 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:29.390 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:29.390 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:29.390 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.390 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:29.651 15:54:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:29.651 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:29.651 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:29.651 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.651 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:29.651 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:29.651 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:36:29.651 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:29.651 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:29.651 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:29.652 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:29.652 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzRjOWQ3OWYwMzExMDI1Njk4YWI1MzhhODMzZmZmN2MwMGNiNTJiMjE4MTkwNjFkOWI3NjI4OTlhMTA5NWI3ZWiMYEQ=: 00:36:29.652 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:29.652 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:29.652 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:29.652 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzRjOWQ3OWYwMzExMDI1Njk4YWI1MzhhODMzZmZmN2MwMGNiNTJiMjE4MTkwNjFkOWI3NjI4OTlhMTA5NWI3ZWiMYEQ=: 00:36:29.652 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:29.652 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:36:29.652 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:29.652 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:29.652 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:29.652 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:29.652 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:29.652 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:29.652 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:29.652 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.652 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:29.652 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:29.652 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:29.652 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:29.652 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:29.652 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:29.652 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:29.652 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:29.652 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:29.652 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:29.652 15:54:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:29.652 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:29.652 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:29.652 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:29.652 15:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.912 nvme0n1 00:36:29.912 15:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:29.912 15:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:29.912 15:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:29.912 15:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:29.912 15:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.912 15:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:29.912 15:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:29.912 15:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:29.912 15:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:29.912 15:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.173 15:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.173 15:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:30.173 15:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:30.173 15:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:36:30.173 15:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:30.173 15:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:30.173 15:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:30.173 15:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:30.173 15:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTY0MWM4ZGMwMmZhN2EyOTZjMTA1YzE1ZWEzNGIxZTnG5rKp: 00:36:30.173 15:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yzg3OGRjZjg3ZjhjNjdjZWVjZTc5MDMzNTJiYmVhOWJiNDllZGRlNjRhMDUyNzJmM2M0NDk0MWUwZTc2ZDM4NTILcl4=: 00:36:30.173 15:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:30.173 15:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:30.173 15:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTY0MWM4ZGMwMmZhN2EyOTZjMTA1YzE1ZWEzNGIxZTnG5rKp: 00:36:30.173 15:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yzg3OGRjZjg3ZjhjNjdjZWVjZTc5MDMzNTJiYmVhOWJiNDllZGRlNjRhMDUyNzJmM2M0NDk0MWUwZTc2ZDM4NTILcl4=: ]] 00:36:30.173 15:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yzg3OGRjZjg3ZjhjNjdjZWVjZTc5MDMzNTJiYmVhOWJiNDllZGRlNjRhMDUyNzJmM2M0NDk0MWUwZTc2ZDM4NTILcl4=: 00:36:30.173 15:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:36:30.173 15:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:30.173 15:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:30.173 15:54:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:30.173 15:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:30.173 15:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:30.173 15:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:30.173 15:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.173 15:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.173 15:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.173 15:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:30.173 15:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:30.173 15:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:30.173 15:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:30.173 15:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:30.173 15:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:30.173 15:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:30.173 15:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:30.173 15:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:30.173 15:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:30.173 15:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:30.173 15:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:30.173 15:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.173 15:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.745 nvme0n1 00:36:30.745 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.745 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:30.745 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:30.745 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.745 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.745 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.745 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:30.745 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:30.745 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.745 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.745 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.745 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:30.745 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:36:30.745 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:30.745 15:54:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:30.745 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:30.745 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:30.745 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzc3NWE3ODIxMDFhOWQwNzUyNzUyYTE5YjEwZDI3OGM5YmMyZTRlZDM1NTkwZDA3x1flIw==: 00:36:30.745 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDc1MWNlODI1NDg1MWQ0YzRiMjRjNWNiY2Q4ZjdhZDllMDYxY2VkZDY4ZGZhMTZmm79czg==: 00:36:30.745 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:30.745 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:30.745 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzc3NWE3ODIxMDFhOWQwNzUyNzUyYTE5YjEwZDI3OGM5YmMyZTRlZDM1NTkwZDA3x1flIw==: 00:36:30.745 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDc1MWNlODI1NDg1MWQ0YzRiMjRjNWNiY2Q4ZjdhZDllMDYxY2VkZDY4ZGZhMTZmm79czg==: ]] 00:36:30.745 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDc1MWNlODI1NDg1MWQ0YzRiMjRjNWNiY2Q4ZjdhZDllMDYxY2VkZDY4ZGZhMTZmm79czg==: 00:36:30.745 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:36:30.745 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:30.745 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:30.745 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:30.745 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:30.745 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:30.745 15:54:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:30.745 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.745 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.745 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.745 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:30.745 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:30.745 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:30.745 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:30.745 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:30.745 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:30.745 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:30.745 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:30.745 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:30.745 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:30.745 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:30.745 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:30.745 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.745 15:54:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.317 nvme0n1 00:36:31.317 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:31.317 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:31.317 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:31.317 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:31.317 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.579 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:31.579 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:31.579 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:31.579 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:31.579 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.579 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:31.579 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:31.579 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:36:31.579 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:31.579 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:31.579 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:31.579 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:31.579 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:ZDUyNGJiNjdmMDgyODRjOGZjZmJlY2ZiYjE4YWE4MTTBLwZ6: 00:36:31.579 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjliM2Q2MWQwYzUwMTJlNDgxOWMzNjRlNjAyMTNiMWPZSZoL: 00:36:31.579 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:31.579 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:31.579 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDUyNGJiNjdmMDgyODRjOGZjZmJlY2ZiYjE4YWE4MTTBLwZ6: 00:36:31.579 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjliM2Q2MWQwYzUwMTJlNDgxOWMzNjRlNjAyMTNiMWPZSZoL: ]] 00:36:31.579 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjliM2Q2MWQwYzUwMTJlNDgxOWMzNjRlNjAyMTNiMWPZSZoL: 00:36:31.579 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:36:31.579 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:31.579 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:31.579 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:31.579 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:31.579 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:31.579 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:31.579 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:31.579 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.579 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:31.579 15:54:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:31.579 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:31.579 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:31.579 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:31.579 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:31.579 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:31.579 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:31.579 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:31.579 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:31.579 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:31.579 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:31.579 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:31.579 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:31.579 15:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.153 nvme0n1 00:36:32.153 15:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.153 15:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:32.153 15:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:32.153 15:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.153 15:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.153 15:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.153 15:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:32.153 15:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:32.153 15:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.153 15:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.153 15:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.153 15:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:32.153 15:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:36:32.153 15:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:32.153 15:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:32.153 15:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:32.153 15:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:32.153 15:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzU1YzYyMGI5N2Q2ZmJlZjA4Yjc1MjI1NjhkYTZhOWY5YmQ3MDU0OTgxYTU4MWZmJv0Sxw==: 00:36:32.153 15:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY1NDYzOWRmMDViMjNiYzhmZmM3MDU2YWJjNzhkYmX1HK+l: 00:36:32.153 15:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:32.153 15:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:32.153 15:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:NzU1YzYyMGI5N2Q2ZmJlZjA4Yjc1MjI1NjhkYTZhOWY5YmQ3MDU0OTgxYTU4MWZmJv0Sxw==: 00:36:32.153 15:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmY1NDYzOWRmMDViMjNiYzhmZmM3MDU2YWJjNzhkYmX1HK+l: ]] 00:36:32.153 15:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmY1NDYzOWRmMDViMjNiYzhmZmM3MDU2YWJjNzhkYmX1HK+l: 00:36:32.153 15:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:36:32.153 15:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:32.153 15:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:32.153 15:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:32.153 15:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:32.153 15:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:32.153 15:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:32.153 15:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.153 15:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.153 15:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.153 15:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:32.153 15:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:32.153 15:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:32.153 15:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:32.153 15:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:32.153 15:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:32.153 15:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:32.153 15:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:32.153 15:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:32.153 15:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:32.153 15:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:32.153 15:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:32.153 15:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.153 15:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.725 nvme0n1 00:36:32.725 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.725 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:32.725 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:32.725 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.725 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.985 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.985 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:32.985 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:32.985 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.985 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.985 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.985 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:32.985 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:36:32.985 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:32.985 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:32.985 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:32.985 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:32.985 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzRjOWQ3OWYwMzExMDI1Njk4YWI1MzhhODMzZmZmN2MwMGNiNTJiMjE4MTkwNjFkOWI3NjI4OTlhMTA5NWI3ZWiMYEQ=: 00:36:32.985 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:32.985 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:32.985 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:32.985 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzRjOWQ3OWYwMzExMDI1Njk4YWI1MzhhODMzZmZmN2MwMGNiNTJiMjE4MTkwNjFkOWI3NjI4OTlhMTA5NWI3ZWiMYEQ=: 00:36:32.985 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:32.985 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:36:32.985 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:32.985 
15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:32.985 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:32.985 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:32.985 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:32.985 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:32.985 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.985 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.985 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.985 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:32.985 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:32.985 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:32.985 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:32.985 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:32.985 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:32.985 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:32.985 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:32.985 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:32.985 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:32.985 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:32.985 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:32.985 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.985 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.557 nvme0n1 00:36:33.557 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.557 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:33.557 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:33.557 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.557 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.557 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.557 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:33.557 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:33.557 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.557 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.557 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.557 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:36:33.557 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:33.557 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:36:33.557 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:36:33.557 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:33.557 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:33.557 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:33.557 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:33.557 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTY0MWM4ZGMwMmZhN2EyOTZjMTA1YzE1ZWEzNGIxZTnG5rKp: 00:36:33.557 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yzg3OGRjZjg3ZjhjNjdjZWVjZTc5MDMzNTJiYmVhOWJiNDllZGRlNjRhMDUyNzJmM2M0NDk0MWUwZTc2ZDM4NTILcl4=: 00:36:33.557 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:33.557 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:33.557 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTY0MWM4ZGMwMmZhN2EyOTZjMTA1YzE1ZWEzNGIxZTnG5rKp: 00:36:33.557 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yzg3OGRjZjg3ZjhjNjdjZWVjZTc5MDMzNTJiYmVhOWJiNDllZGRlNjRhMDUyNzJmM2M0NDk0MWUwZTc2ZDM4NTILcl4=: ]] 00:36:33.557 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yzg3OGRjZjg3ZjhjNjdjZWVjZTc5MDMzNTJiYmVhOWJiNDllZGRlNjRhMDUyNzJmM2M0NDk0MWUwZTc2ZDM4NTILcl4=: 00:36:33.557 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:36:33.557 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:33.557 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:33.557 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:36:33.557 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:33.557 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:33.557 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:33.557 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.557 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.557 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.557 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:33.557 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:33.557 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:33.557 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:33.557 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:33.557 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:33.557 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:33.557 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:33.557 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:33.557 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:33.557 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:33.557 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:33.557 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.557 15:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.818 nvme0n1 00:36:33.818 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.818 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:33.818 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:33.818 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.818 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.818 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.818 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:33.818 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:33.818 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.818 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.818 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.818 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:33.818 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:36:33.818 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:33.818 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:33.818 
15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:33.818 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:33.818 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzc3NWE3ODIxMDFhOWQwNzUyNzUyYTE5YjEwZDI3OGM5YmMyZTRlZDM1NTkwZDA3x1flIw==: 00:36:33.818 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDc1MWNlODI1NDg1MWQ0YzRiMjRjNWNiY2Q4ZjdhZDllMDYxY2VkZDY4ZGZhMTZmm79czg==: 00:36:33.818 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:33.818 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:33.818 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzc3NWE3ODIxMDFhOWQwNzUyNzUyYTE5YjEwZDI3OGM5YmMyZTRlZDM1NTkwZDA3x1flIw==: 00:36:33.818 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDc1MWNlODI1NDg1MWQ0YzRiMjRjNWNiY2Q4ZjdhZDllMDYxY2VkZDY4ZGZhMTZmm79czg==: ]] 00:36:33.818 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDc1MWNlODI1NDg1MWQ0YzRiMjRjNWNiY2Q4ZjdhZDllMDYxY2VkZDY4ZGZhMTZmm79czg==: 00:36:33.818 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:36:33.818 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:33.818 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:33.818 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:33.818 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:33.818 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:33.818 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:36:33.818 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.818 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.818 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.818 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:33.818 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:33.818 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:33.819 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:33.819 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:33.819 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:33.819 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:33.819 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:33.819 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:33.819 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:33.819 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:33.819 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:33.819 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.819 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.080 nvme0n1 
00:36:34.080 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.080 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:34.080 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:34.080 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.080 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.080 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.080 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:34.080 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:34.080 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.080 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.080 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.080 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:34.080 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:36:34.080 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:34.080 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:34.080 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:34.080 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:34.080 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDUyNGJiNjdmMDgyODRjOGZjZmJlY2ZiYjE4YWE4MTTBLwZ6: 00:36:34.080 15:54:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjliM2Q2MWQwYzUwMTJlNDgxOWMzNjRlNjAyMTNiMWPZSZoL: 00:36:34.080 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:34.080 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:34.080 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDUyNGJiNjdmMDgyODRjOGZjZmJlY2ZiYjE4YWE4MTTBLwZ6: 00:36:34.080 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjliM2Q2MWQwYzUwMTJlNDgxOWMzNjRlNjAyMTNiMWPZSZoL: ]] 00:36:34.080 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjliM2Q2MWQwYzUwMTJlNDgxOWMzNjRlNjAyMTNiMWPZSZoL: 00:36:34.080 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:36:34.080 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:34.080 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:34.080 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:34.080 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:34.080 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:34.080 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:34.080 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.080 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.080 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.080 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:34.080 
15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:34.080 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:34.080 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:34.080 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:34.080 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:34.080 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:34.080 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:34.080 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:34.080 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:34.080 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:34.080 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:34.080 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.080 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.343 nvme0n1 00:36:34.343 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.343 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:34.343 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:34.343 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.343 15:54:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.343 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.343 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:34.343 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:34.343 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.343 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.343 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.343 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:34.343 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:36:34.343 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:34.343 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:34.343 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:34.343 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:34.343 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzU1YzYyMGI5N2Q2ZmJlZjA4Yjc1MjI1NjhkYTZhOWY5YmQ3MDU0OTgxYTU4MWZmJv0Sxw==: 00:36:34.343 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY1NDYzOWRmMDViMjNiYzhmZmM3MDU2YWJjNzhkYmX1HK+l: 00:36:34.343 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:34.343 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:34.343 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NzU1YzYyMGI5N2Q2ZmJlZjA4Yjc1MjI1NjhkYTZhOWY5YmQ3MDU0OTgxYTU4MWZmJv0Sxw==: 00:36:34.343 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmY1NDYzOWRmMDViMjNiYzhmZmM3MDU2YWJjNzhkYmX1HK+l: ]] 00:36:34.343 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmY1NDYzOWRmMDViMjNiYzhmZmM3MDU2YWJjNzhkYmX1HK+l: 00:36:34.343 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:36:34.343 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:34.343 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:34.343 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:34.343 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:34.343 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:34.343 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:34.343 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.343 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.343 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.343 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:34.343 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:34.343 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:34.343 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:34.343 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:34.343 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:34.343 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:34.343 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:34.343 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:34.343 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:34.343 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:34.343 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:34.343 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.343 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.604 nvme0n1 00:36:34.604 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.604 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:34.604 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:34.604 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.604 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.605 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.605 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:34.605 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:36:34.605 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.605 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.605 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.605 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:34.605 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:36:34.605 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:34.605 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:34.605 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:34.605 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:34.605 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzRjOWQ3OWYwMzExMDI1Njk4YWI1MzhhODMzZmZmN2MwMGNiNTJiMjE4MTkwNjFkOWI3NjI4OTlhMTA5NWI3ZWiMYEQ=: 00:36:34.605 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:34.605 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:34.605 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:34.605 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzRjOWQ3OWYwMzExMDI1Njk4YWI1MzhhODMzZmZmN2MwMGNiNTJiMjE4MTkwNjFkOWI3NjI4OTlhMTA5NWI3ZWiMYEQ=: 00:36:34.605 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:34.605 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:36:34.605 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:34.605 15:54:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:34.605 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:34.605 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:34.605 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:34.605 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:34.605 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.605 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.605 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.605 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:34.605 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:34.605 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:34.605 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:34.605 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:34.605 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:34.605 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:34.605 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:34.605 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:34.605 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:34.605 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:34.605 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:34.605 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.605 15:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.867 nvme0n1 00:36:34.867 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.867 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:34.867 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:34.867 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.867 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.867 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.867 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:34.867 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:34.867 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.867 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.867 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.867 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:34.867 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:34.867 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:36:34.867 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:34.867 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:34.867 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:34.867 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:34.867 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTY0MWM4ZGMwMmZhN2EyOTZjMTA1YzE1ZWEzNGIxZTnG5rKp: 00:36:34.867 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yzg3OGRjZjg3ZjhjNjdjZWVjZTc5MDMzNTJiYmVhOWJiNDllZGRlNjRhMDUyNzJmM2M0NDk0MWUwZTc2ZDM4NTILcl4=: 00:36:34.867 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:34.867 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:34.867 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTY0MWM4ZGMwMmZhN2EyOTZjMTA1YzE1ZWEzNGIxZTnG5rKp: 00:36:34.867 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yzg3OGRjZjg3ZjhjNjdjZWVjZTc5MDMzNTJiYmVhOWJiNDllZGRlNjRhMDUyNzJmM2M0NDk0MWUwZTc2ZDM4NTILcl4=: ]] 00:36:34.867 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yzg3OGRjZjg3ZjhjNjdjZWVjZTc5MDMzNTJiYmVhOWJiNDllZGRlNjRhMDUyNzJmM2M0NDk0MWUwZTc2ZDM4NTILcl4=: 00:36:34.867 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:36:34.867 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:34.867 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:34.867 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:34.867 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:36:34.867 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:34.867 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:34.867 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.867 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.867 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.867 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:34.867 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:34.867 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:34.867 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:34.867 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:34.867 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:34.867 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:34.867 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:34.867 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:34.867 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:34.867 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:34.867 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:34.867 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.867 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.128 nvme0n1 00:36:35.128 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:35.128 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:35.128 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:35.128 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:35.128 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.128 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:35.128 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:35.128 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:35.128 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:35.128 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.128 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:35.128 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:35.128 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:36:35.128 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:35.128 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:35.128 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:35.128 
15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:35.129 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzc3NWE3ODIxMDFhOWQwNzUyNzUyYTE5YjEwZDI3OGM5YmMyZTRlZDM1NTkwZDA3x1flIw==: 00:36:35.129 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDc1MWNlODI1NDg1MWQ0YzRiMjRjNWNiY2Q4ZjdhZDllMDYxY2VkZDY4ZGZhMTZmm79czg==: 00:36:35.129 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:35.129 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:35.129 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzc3NWE3ODIxMDFhOWQwNzUyNzUyYTE5YjEwZDI3OGM5YmMyZTRlZDM1NTkwZDA3x1flIw==: 00:36:35.129 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDc1MWNlODI1NDg1MWQ0YzRiMjRjNWNiY2Q4ZjdhZDllMDYxY2VkZDY4ZGZhMTZmm79czg==: ]] 00:36:35.129 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDc1MWNlODI1NDg1MWQ0YzRiMjRjNWNiY2Q4ZjdhZDllMDYxY2VkZDY4ZGZhMTZmm79czg==: 00:36:35.129 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:36:35.129 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:35.129 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:35.129 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:35.129 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:35.129 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:35.129 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:35.129 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:36:35.129 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.129 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:35.129 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:35.129 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:35.129 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:35.129 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:35.129 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:35.129 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:35.129 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:35.129 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:35.129 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:35.129 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:35.129 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:35.129 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:35.129 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:35.129 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.390 nvme0n1 00:36:35.390 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:36:35.390 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:35.390 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:35.390 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:35.390 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.390 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:35.390 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:35.391 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:35.391 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:35.391 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.391 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:35.391 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:35.391 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:36:35.391 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:35.391 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:35.391 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:35.391 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:35.391 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDUyNGJiNjdmMDgyODRjOGZjZmJlY2ZiYjE4YWE4MTTBLwZ6: 00:36:35.391 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjliM2Q2MWQwYzUwMTJlNDgxOWMzNjRlNjAyMTNiMWPZSZoL: 
00:36:35.391 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:35.391 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:35.391 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDUyNGJiNjdmMDgyODRjOGZjZmJlY2ZiYjE4YWE4MTTBLwZ6: 00:36:35.391 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjliM2Q2MWQwYzUwMTJlNDgxOWMzNjRlNjAyMTNiMWPZSZoL: ]] 00:36:35.391 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjliM2Q2MWQwYzUwMTJlNDgxOWMzNjRlNjAyMTNiMWPZSZoL: 00:36:35.391 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:36:35.391 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:35.391 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:35.391 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:35.391 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:35.391 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:35.391 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:35.391 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:35.391 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.391 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:35.391 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:35.391 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:35.391 15:54:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:35.391 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:35.391 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:35.391 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:35.391 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:35.391 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:35.391 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:35.391 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:35.391 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:35.391 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:35.391 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:35.391 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.652 nvme0n1 00:36:35.652 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:35.652 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:35.652 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:35.652 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:35.652 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.652 15:54:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:35.652 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:35.652 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:35.652 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:35.652 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.652 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:35.652 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:35.652 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:36:35.652 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:35.652 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:35.652 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:35.652 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:35.652 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzU1YzYyMGI5N2Q2ZmJlZjA4Yjc1MjI1NjhkYTZhOWY5YmQ3MDU0OTgxYTU4MWZmJv0Sxw==: 00:36:35.652 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY1NDYzOWRmMDViMjNiYzhmZmM3MDU2YWJjNzhkYmX1HK+l: 00:36:35.652 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:35.652 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:35.652 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzU1YzYyMGI5N2Q2ZmJlZjA4Yjc1MjI1NjhkYTZhOWY5YmQ3MDU0OTgxYTU4MWZmJv0Sxw==: 00:36:35.652 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmY1NDYzOWRmMDViMjNiYzhmZmM3MDU2YWJjNzhkYmX1HK+l: ]] 00:36:35.652 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmY1NDYzOWRmMDViMjNiYzhmZmM3MDU2YWJjNzhkYmX1HK+l: 00:36:35.652 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:36:35.652 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:35.652 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:35.652 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:35.652 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:35.652 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:35.652 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:35.652 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:35.652 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.652 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:35.652 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:35.652 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:35.652 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:35.652 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:35.652 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:35.652 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:35.652 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:35.652 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:35.652 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:35.652 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:35.652 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:35.652 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:35.652 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:35.652 15:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.914 nvme0n1 00:36:35.914 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:35.914 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:35.914 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:35.914 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:35.914 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.914 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:35.914 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:35.914 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:35.914 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:36:35.914 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.914 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:35.914 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:35.914 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:36:35.914 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:35.914 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:35.914 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:35.914 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:35.914 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzRjOWQ3OWYwMzExMDI1Njk4YWI1MzhhODMzZmZmN2MwMGNiNTJiMjE4MTkwNjFkOWI3NjI4OTlhMTA5NWI3ZWiMYEQ=: 00:36:35.914 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:35.914 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:35.914 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:35.914 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzRjOWQ3OWYwMzExMDI1Njk4YWI1MzhhODMzZmZmN2MwMGNiNTJiMjE4MTkwNjFkOWI3NjI4OTlhMTA5NWI3ZWiMYEQ=: 00:36:35.914 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:35.914 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:36:35.914 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:35.914 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:35.914 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:36:35.914 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:35.914 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:35.914 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:35.914 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:35.914 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.914 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:35.914 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:35.914 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:35.914 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:35.914 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:35.914 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:35.914 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:35.914 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:35.914 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:35.914 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:35.914 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:35.914 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:35.914 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:35.914 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:35.914 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.176 nvme0n1 00:36:36.176 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.176 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:36.176 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.176 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:36.176 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.176 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.176 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:36.176 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:36.176 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.176 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.176 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.176 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:36.176 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:36.176 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:36:36.176 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:36.176 15:54:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:36.176 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:36.176 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:36.176 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTY0MWM4ZGMwMmZhN2EyOTZjMTA1YzE1ZWEzNGIxZTnG5rKp: 00:36:36.176 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yzg3OGRjZjg3ZjhjNjdjZWVjZTc5MDMzNTJiYmVhOWJiNDllZGRlNjRhMDUyNzJmM2M0NDk0MWUwZTc2ZDM4NTILcl4=: 00:36:36.176 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:36.176 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:36.176 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTY0MWM4ZGMwMmZhN2EyOTZjMTA1YzE1ZWEzNGIxZTnG5rKp: 00:36:36.176 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yzg3OGRjZjg3ZjhjNjdjZWVjZTc5MDMzNTJiYmVhOWJiNDllZGRlNjRhMDUyNzJmM2M0NDk0MWUwZTc2ZDM4NTILcl4=: ]] 00:36:36.176 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yzg3OGRjZjg3ZjhjNjdjZWVjZTc5MDMzNTJiYmVhOWJiNDllZGRlNjRhMDUyNzJmM2M0NDk0MWUwZTc2ZDM4NTILcl4=: 00:36:36.176 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:36:36.176 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:36.176 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:36.176 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:36.176 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:36.176 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:36.176 15:54:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:36.176 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.176 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.176 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.176 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:36.176 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:36.176 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:36.176 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:36.176 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:36.176 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:36.176 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:36.176 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:36.176 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:36.176 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:36.176 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:36.176 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:36.176 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.176 15:54:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.438 nvme0n1 00:36:36.438 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.438 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:36.438 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:36.438 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.438 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.438 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.438 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:36.438 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:36.438 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.438 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.438 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.438 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:36.438 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:36:36.438 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:36.438 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:36.438 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:36.438 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:36.438 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Nzc3NWE3ODIxMDFhOWQwNzUyNzUyYTE5YjEwZDI3OGM5YmMyZTRlZDM1NTkwZDA3x1flIw==: 00:36:36.438 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDc1MWNlODI1NDg1MWQ0YzRiMjRjNWNiY2Q4ZjdhZDllMDYxY2VkZDY4ZGZhMTZmm79czg==: 00:36:36.438 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:36.438 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:36.438 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzc3NWE3ODIxMDFhOWQwNzUyNzUyYTE5YjEwZDI3OGM5YmMyZTRlZDM1NTkwZDA3x1flIw==: 00:36:36.438 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDc1MWNlODI1NDg1MWQ0YzRiMjRjNWNiY2Q4ZjdhZDllMDYxY2VkZDY4ZGZhMTZmm79czg==: ]] 00:36:36.438 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDc1MWNlODI1NDg1MWQ0YzRiMjRjNWNiY2Q4ZjdhZDllMDYxY2VkZDY4ZGZhMTZmm79czg==: 00:36:36.438 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:36:36.438 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:36.438 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:36.438 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:36.438 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:36.438 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:36.438 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:36.438 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.438 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.438 
15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.438 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:36.438 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:36.438 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:36.438 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:36.438 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:36.438 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:36.438 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:36.438 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:36.438 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:36.438 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:36.438 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:36.438 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:36.438 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.438 15:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.699 nvme0n1 00:36:36.699 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.699 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:36.699 15:54:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:36.699 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.699 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.699 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.699 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:36.699 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:36.699 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.699 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.699 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.699 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:36.699 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:36:36.699 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:36.699 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:36.699 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:36.699 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:36.699 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDUyNGJiNjdmMDgyODRjOGZjZmJlY2ZiYjE4YWE4MTTBLwZ6: 00:36:36.699 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjliM2Q2MWQwYzUwMTJlNDgxOWMzNjRlNjAyMTNiMWPZSZoL: 00:36:36.699 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:36.699 15:54:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:36.699 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDUyNGJiNjdmMDgyODRjOGZjZmJlY2ZiYjE4YWE4MTTBLwZ6: 00:36:36.699 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjliM2Q2MWQwYzUwMTJlNDgxOWMzNjRlNjAyMTNiMWPZSZoL: ]] 00:36:36.699 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjliM2Q2MWQwYzUwMTJlNDgxOWMzNjRlNjAyMTNiMWPZSZoL: 00:36:36.699 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:36:36.699 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:36.699 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:36.699 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:36.699 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:36.699 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:36.699 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:36.699 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.699 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.960 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.960 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:36.960 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:36.960 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:36.960 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@766 -- # local -A ip_candidates 00:36:36.960 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:36.960 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:36.960 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:36.960 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:36.960 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:36.960 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:36.960 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:36.960 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:36.960 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.960 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.221 nvme0n1 00:36:37.221 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:37.221 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:37.221 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:37.221 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:37.221 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.221 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:37.221 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:37.221 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:37.221 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:37.221 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.221 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:37.221 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:37.221 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:36:37.221 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:37.221 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:37.221 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:37.221 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:37.221 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzU1YzYyMGI5N2Q2ZmJlZjA4Yjc1MjI1NjhkYTZhOWY5YmQ3MDU0OTgxYTU4MWZmJv0Sxw==: 00:36:37.221 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY1NDYzOWRmMDViMjNiYzhmZmM3MDU2YWJjNzhkYmX1HK+l: 00:36:37.221 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:37.221 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:37.221 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzU1YzYyMGI5N2Q2ZmJlZjA4Yjc1MjI1NjhkYTZhOWY5YmQ3MDU0OTgxYTU4MWZmJv0Sxw==: 00:36:37.221 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmY1NDYzOWRmMDViMjNiYzhmZmM3MDU2YWJjNzhkYmX1HK+l: ]] 00:36:37.221 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:ZmY1NDYzOWRmMDViMjNiYzhmZmM3MDU2YWJjNzhkYmX1HK+l: 00:36:37.221 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:36:37.221 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:37.221 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:37.221 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:37.221 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:37.221 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:37.221 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:37.221 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:37.221 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.221 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:37.221 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:37.221 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:37.221 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:37.221 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:37.221 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:37.221 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:37.221 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:37.221 15:54:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:37.221 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:37.221 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:37.221 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:37.221 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:37.221 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:37.221 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.483 nvme0n1 00:36:37.483 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:37.483 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:37.483 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:37.483 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:37.483 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.483 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:37.483 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:37.483 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:37.483 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:37.483 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.483 15:54:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:37.483 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:37.483 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:36:37.483 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:37.483 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:37.483 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:37.483 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:37.483 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzRjOWQ3OWYwMzExMDI1Njk4YWI1MzhhODMzZmZmN2MwMGNiNTJiMjE4MTkwNjFkOWI3NjI4OTlhMTA5NWI3ZWiMYEQ=: 00:36:37.483 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:37.483 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:37.483 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:37.483 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzRjOWQ3OWYwMzExMDI1Njk4YWI1MzhhODMzZmZmN2MwMGNiNTJiMjE4MTkwNjFkOWI3NjI4OTlhMTA5NWI3ZWiMYEQ=: 00:36:37.483 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:37.483 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:36:37.483 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:37.483 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:37.483 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:37.483 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:37.483 15:54:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:37.483 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:37.483 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:37.483 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.483 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:37.483 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:37.483 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:37.483 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:37.483 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:37.483 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:37.483 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:37.483 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:37.483 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:37.483 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:37.483 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:37.483 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:37.483 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:37.483 
15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:37.483 15:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.745 nvme0n1 00:36:37.745 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:37.745 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:37.745 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:37.745 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:37.745 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.745 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:37.745 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:37.745 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:37.745 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:37.745 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.745 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:37.745 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:37.745 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:37.745 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:36:37.745 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:37.745 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:37.745 15:54:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:37.745 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:37.745 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTY0MWM4ZGMwMmZhN2EyOTZjMTA1YzE1ZWEzNGIxZTnG5rKp: 00:36:37.745 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yzg3OGRjZjg3ZjhjNjdjZWVjZTc5MDMzNTJiYmVhOWJiNDllZGRlNjRhMDUyNzJmM2M0NDk0MWUwZTc2ZDM4NTILcl4=: 00:36:37.745 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:37.745 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:37.745 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTY0MWM4ZGMwMmZhN2EyOTZjMTA1YzE1ZWEzNGIxZTnG5rKp: 00:36:37.745 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yzg3OGRjZjg3ZjhjNjdjZWVjZTc5MDMzNTJiYmVhOWJiNDllZGRlNjRhMDUyNzJmM2M0NDk0MWUwZTc2ZDM4NTILcl4=: ]] 00:36:37.745 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yzg3OGRjZjg3ZjhjNjdjZWVjZTc5MDMzNTJiYmVhOWJiNDllZGRlNjRhMDUyNzJmM2M0NDk0MWUwZTc2ZDM4NTILcl4=: 00:36:37.745 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:36:37.745 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:37.745 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:37.745 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:37.745 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:37.745 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:37.745 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:36:37.745 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:37.745 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.745 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:37.745 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:37.745 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:37.745 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:37.745 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:37.745 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:37.745 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:37.745 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:37.745 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:37.745 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:37.745 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:37.745 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:37.745 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:37.745 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:37.745 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.319 nvme0n1 
00:36:38.319 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:38.319 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:38.319 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:38.319 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:38.319 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.319 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:38.319 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:38.319 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:38.319 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:38.319 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.319 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:38.319 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:38.319 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:36:38.319 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:38.319 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:38.319 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:38.319 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:38.319 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzc3NWE3ODIxMDFhOWQwNzUyNzUyYTE5YjEwZDI3OGM5YmMyZTRlZDM1NTkwZDA3x1flIw==: 00:36:38.319 15:54:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDc1MWNlODI1NDg1MWQ0YzRiMjRjNWNiY2Q4ZjdhZDllMDYxY2VkZDY4ZGZhMTZmm79czg==: 00:36:38.319 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:38.319 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:38.319 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzc3NWE3ODIxMDFhOWQwNzUyNzUyYTE5YjEwZDI3OGM5YmMyZTRlZDM1NTkwZDA3x1flIw==: 00:36:38.319 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDc1MWNlODI1NDg1MWQ0YzRiMjRjNWNiY2Q4ZjdhZDllMDYxY2VkZDY4ZGZhMTZmm79czg==: ]] 00:36:38.319 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDc1MWNlODI1NDg1MWQ0YzRiMjRjNWNiY2Q4ZjdhZDllMDYxY2VkZDY4ZGZhMTZmm79czg==: 00:36:38.319 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:36:38.319 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:38.319 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:38.319 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:38.319 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:38.319 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:38.319 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:38.319 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:38.319 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.319 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:38.319 
15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:38.319 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:38.319 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:38.319 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:38.319 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:38.319 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:38.319 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:38.319 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:38.319 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:38.319 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:38.319 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:38.319 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:38.319 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:38.319 15:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.893 nvme0n1 00:36:38.893 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:38.893 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:38.893 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:38.893 15:54:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:38.893 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.893 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:38.893 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:38.893 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:38.893 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:38.893 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.893 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:38.893 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:38.893 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:36:38.893 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:38.893 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:38.893 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:38.893 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:38.893 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDUyNGJiNjdmMDgyODRjOGZjZmJlY2ZiYjE4YWE4MTTBLwZ6: 00:36:38.893 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjliM2Q2MWQwYzUwMTJlNDgxOWMzNjRlNjAyMTNiMWPZSZoL: 00:36:38.893 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:38.893 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:38.893 15:54:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDUyNGJiNjdmMDgyODRjOGZjZmJlY2ZiYjE4YWE4MTTBLwZ6: 00:36:38.893 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjliM2Q2MWQwYzUwMTJlNDgxOWMzNjRlNjAyMTNiMWPZSZoL: ]] 00:36:38.893 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjliM2Q2MWQwYzUwMTJlNDgxOWMzNjRlNjAyMTNiMWPZSZoL: 00:36:38.893 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:36:38.893 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:38.893 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:38.893 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:38.893 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:38.893 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:38.893 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:38.893 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:38.893 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.893 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:38.893 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:38.893 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:38.893 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:38.893 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:38.893 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:38.893 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:38.893 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:38.893 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:38.893 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:38.893 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:38.893 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:38.893 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:38.893 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:38.893 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.154 nvme0n1 00:36:39.154 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.154 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:39.154 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:39.154 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.154 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.154 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.414 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:39.414 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:39.414 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.414 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.414 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.414 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:39.414 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:36:39.414 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:39.414 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:39.414 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:39.414 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:39.414 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzU1YzYyMGI5N2Q2ZmJlZjA4Yjc1MjI1NjhkYTZhOWY5YmQ3MDU0OTgxYTU4MWZmJv0Sxw==: 00:36:39.414 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY1NDYzOWRmMDViMjNiYzhmZmM3MDU2YWJjNzhkYmX1HK+l: 00:36:39.414 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:39.414 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:39.414 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzU1YzYyMGI5N2Q2ZmJlZjA4Yjc1MjI1NjhkYTZhOWY5YmQ3MDU0OTgxYTU4MWZmJv0Sxw==: 00:36:39.414 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmY1NDYzOWRmMDViMjNiYzhmZmM3MDU2YWJjNzhkYmX1HK+l: ]] 00:36:39.414 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmY1NDYzOWRmMDViMjNiYzhmZmM3MDU2YWJjNzhkYmX1HK+l: 00:36:39.414 15:54:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:36:39.414 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:39.414 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:39.414 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:39.414 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:39.414 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:39.414 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:39.414 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.414 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.414 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.414 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:39.414 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:39.414 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:39.414 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:39.414 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:39.414 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:39.414 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:39.414 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:39.414 15:54:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:39.415 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:39.415 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:39.415 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:39.415 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.415 15:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.674 nvme0n1 00:36:39.674 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.674 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:39.674 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:39.674 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.674 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.674 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.674 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:39.674 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:39.674 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.674 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.935 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.935 15:54:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:39.935 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:36:39.935 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:39.935 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:39.935 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:39.935 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:39.935 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzRjOWQ3OWYwMzExMDI1Njk4YWI1MzhhODMzZmZmN2MwMGNiNTJiMjE4MTkwNjFkOWI3NjI4OTlhMTA5NWI3ZWiMYEQ=: 00:36:39.935 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:39.935 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:39.935 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:39.935 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzRjOWQ3OWYwMzExMDI1Njk4YWI1MzhhODMzZmZmN2MwMGNiNTJiMjE4MTkwNjFkOWI3NjI4OTlhMTA5NWI3ZWiMYEQ=: 00:36:39.935 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:39.935 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:36:39.935 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:39.935 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:39.935 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:39.935 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:39.935 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:36:39.935 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:39.935 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.935 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.935 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.935 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:39.935 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:39.935 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:39.935 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:39.935 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:39.935 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:39.935 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:39.935 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:39.935 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:39.935 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:39.935 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:39.935 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:39.935 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:36:39.935 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.196 nvme0n1 00:36:40.196 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:40.196 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:40.196 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:40.196 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:40.196 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.196 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:40.196 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:40.196 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:40.196 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:40.196 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.196 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:40.196 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:40.196 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:40.196 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:36:40.196 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:40.196 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:40.196 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:40.196 15:54:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:40.196 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTY0MWM4ZGMwMmZhN2EyOTZjMTA1YzE1ZWEzNGIxZTnG5rKp: 00:36:40.196 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yzg3OGRjZjg3ZjhjNjdjZWVjZTc5MDMzNTJiYmVhOWJiNDllZGRlNjRhMDUyNzJmM2M0NDk0MWUwZTc2ZDM4NTILcl4=: 00:36:40.196 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:40.196 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:40.196 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTY0MWM4ZGMwMmZhN2EyOTZjMTA1YzE1ZWEzNGIxZTnG5rKp: 00:36:40.196 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yzg3OGRjZjg3ZjhjNjdjZWVjZTc5MDMzNTJiYmVhOWJiNDllZGRlNjRhMDUyNzJmM2M0NDk0MWUwZTc2ZDM4NTILcl4=: ]] 00:36:40.196 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yzg3OGRjZjg3ZjhjNjdjZWVjZTc5MDMzNTJiYmVhOWJiNDllZGRlNjRhMDUyNzJmM2M0NDk0MWUwZTc2ZDM4NTILcl4=: 00:36:40.196 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:36:40.196 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:40.196 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:40.196 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:40.196 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:40.196 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:40.196 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:40.196 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:36:40.196 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.457 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:40.457 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:40.457 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:40.457 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:40.457 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:40.457 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:40.457 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:40.457 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:40.457 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:40.457 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:40.457 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:40.457 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:40.457 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:40.457 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:40.457 15:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.028 nvme0n1 00:36:41.028 15:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:36:41.028 15:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:41.028 15:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:41.028 15:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.028 15:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.028 15:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.028 15:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:41.028 15:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:41.028 15:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.028 15:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.028 15:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.028 15:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:41.028 15:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:36:41.028 15:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:41.028 15:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:41.028 15:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:41.028 15:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:41.028 15:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzc3NWE3ODIxMDFhOWQwNzUyNzUyYTE5YjEwZDI3OGM5YmMyZTRlZDM1NTkwZDA3x1flIw==: 00:36:41.028 15:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NDc1MWNlODI1NDg1MWQ0YzRiMjRjNWNiY2Q4ZjdhZDllMDYxY2VkZDY4ZGZhMTZmm79czg==: 00:36:41.028 15:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:41.028 15:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:41.028 15:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzc3NWE3ODIxMDFhOWQwNzUyNzUyYTE5YjEwZDI3OGM5YmMyZTRlZDM1NTkwZDA3x1flIw==: 00:36:41.028 15:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDc1MWNlODI1NDg1MWQ0YzRiMjRjNWNiY2Q4ZjdhZDllMDYxY2VkZDY4ZGZhMTZmm79czg==: ]] 00:36:41.028 15:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDc1MWNlODI1NDg1MWQ0YzRiMjRjNWNiY2Q4ZjdhZDllMDYxY2VkZDY4ZGZhMTZmm79czg==: 00:36:41.028 15:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:36:41.028 15:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:41.028 15:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:41.028 15:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:41.028 15:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:41.028 15:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:41.028 15:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:41.028 15:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.028 15:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.028 15:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.028 15:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:36:41.028 15:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:41.028 15:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:41.028 15:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:41.028 15:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:41.028 15:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:41.028 15:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:41.028 15:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:41.028 15:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:41.028 15:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:41.028 15:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:41.028 15:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:41.028 15:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.028 15:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.598 nvme0n1 00:36:41.598 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.598 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:41.598 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:41.598 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:36:41.598 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.598 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.860 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:41.860 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:41.860 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.860 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.860 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.860 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:41.860 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:36:41.860 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:41.860 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:41.860 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:41.860 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:41.860 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDUyNGJiNjdmMDgyODRjOGZjZmJlY2ZiYjE4YWE4MTTBLwZ6: 00:36:41.860 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjliM2Q2MWQwYzUwMTJlNDgxOWMzNjRlNjAyMTNiMWPZSZoL: 00:36:41.860 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:41.860 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:41.860 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:ZDUyNGJiNjdmMDgyODRjOGZjZmJlY2ZiYjE4YWE4MTTBLwZ6: 00:36:41.860 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjliM2Q2MWQwYzUwMTJlNDgxOWMzNjRlNjAyMTNiMWPZSZoL: ]] 00:36:41.860 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjliM2Q2MWQwYzUwMTJlNDgxOWMzNjRlNjAyMTNiMWPZSZoL: 00:36:41.860 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:36:41.860 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:41.860 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:41.860 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:41.860 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:41.860 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:41.860 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:41.860 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.860 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.860 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.860 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:41.860 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:41.860 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:41.860 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:41.860 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:41.860 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:41.860 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:41.860 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:41.860 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:41.860 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:41.860 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:41.860 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:41.860 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.860 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.432 nvme0n1 00:36:42.432 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:42.432 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:42.432 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:42.432 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:42.432 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.432 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:42.432 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:42.432 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:36:42.432 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:42.432 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.432 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:42.432 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:42.432 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:36:42.432 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:42.432 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:42.432 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:42.432 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:42.432 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzU1YzYyMGI5N2Q2ZmJlZjA4Yjc1MjI1NjhkYTZhOWY5YmQ3MDU0OTgxYTU4MWZmJv0Sxw==: 00:36:42.432 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY1NDYzOWRmMDViMjNiYzhmZmM3MDU2YWJjNzhkYmX1HK+l: 00:36:42.432 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:42.432 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:42.432 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzU1YzYyMGI5N2Q2ZmJlZjA4Yjc1MjI1NjhkYTZhOWY5YmQ3MDU0OTgxYTU4MWZmJv0Sxw==: 00:36:42.432 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmY1NDYzOWRmMDViMjNiYzhmZmM3MDU2YWJjNzhkYmX1HK+l: ]] 00:36:42.432 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmY1NDYzOWRmMDViMjNiYzhmZmM3MDU2YWJjNzhkYmX1HK+l: 00:36:42.432 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:36:42.432 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:42.432 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:42.432 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:42.432 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:42.432 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:42.432 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:42.432 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:42.432 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.432 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:42.432 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:42.432 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:42.432 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:42.432 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:42.432 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:42.432 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:42.432 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:42.432 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:42.432 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 
-- # ip=NVMF_INITIATOR_IP 00:36:42.432 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:42.432 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:42.433 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:42.433 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:42.433 15:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.375 nvme0n1 00:36:43.375 15:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:43.375 15:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:43.375 15:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:43.375 15:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:43.375 15:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.375 15:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:43.375 15:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:43.375 15:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:43.375 15:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:43.375 15:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.375 15:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:43.375 15:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:36:43.375 15:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:36:43.375 15:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:43.375 15:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:43.375 15:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:43.375 15:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:43.375 15:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzRjOWQ3OWYwMzExMDI1Njk4YWI1MzhhODMzZmZmN2MwMGNiNTJiMjE4MTkwNjFkOWI3NjI4OTlhMTA5NWI3ZWiMYEQ=: 00:36:43.375 15:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:43.375 15:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:43.375 15:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:43.375 15:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzRjOWQ3OWYwMzExMDI1Njk4YWI1MzhhODMzZmZmN2MwMGNiNTJiMjE4MTkwNjFkOWI3NjI4OTlhMTA5NWI3ZWiMYEQ=: 00:36:43.375 15:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:43.375 15:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:36:43.375 15:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:43.375 15:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:43.375 15:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:43.375 15:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:43.375 15:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:43.375 15:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:43.375 15:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:43.375 15:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.375 15:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:43.375 15:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:43.375 15:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:43.375 15:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:43.375 15:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:43.375 15:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:43.375 15:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:43.375 15:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:43.376 15:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:43.376 15:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:43.376 15:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:43.376 15:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:43.376 15:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:43.376 15:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:43.376 15:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:43.946 nvme0n1 00:36:43.946 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:43.946 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:43.946 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:43.946 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:43.946 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.946 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:43.946 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:43.946 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:43.946 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:43.946 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.946 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:43.946 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:36:43.946 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:43.946 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:43.946 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:36:43.946 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:43.946 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:43.946 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 
00:36:43.946 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:43.946 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTY0MWM4ZGMwMmZhN2EyOTZjMTA1YzE1ZWEzNGIxZTnG5rKp: 00:36:43.946 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yzg3OGRjZjg3ZjhjNjdjZWVjZTc5MDMzNTJiYmVhOWJiNDllZGRlNjRhMDUyNzJmM2M0NDk0MWUwZTc2ZDM4NTILcl4=: 00:36:43.946 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:43.946 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:43.946 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTY0MWM4ZGMwMmZhN2EyOTZjMTA1YzE1ZWEzNGIxZTnG5rKp: 00:36:43.946 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yzg3OGRjZjg3ZjhjNjdjZWVjZTc5MDMzNTJiYmVhOWJiNDllZGRlNjRhMDUyNzJmM2M0NDk0MWUwZTc2ZDM4NTILcl4=: ]] 00:36:43.946 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yzg3OGRjZjg3ZjhjNjdjZWVjZTc5MDMzNTJiYmVhOWJiNDllZGRlNjRhMDUyNzJmM2M0NDk0MWUwZTc2ZDM4NTILcl4=: 00:36:43.946 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:36:43.946 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:43.946 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:43.947 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:43.947 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:43.947 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:43.947 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:43.947 15:54:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:43.947 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.947 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:43.947 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:43.947 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:43.947 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:43.947 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:43.947 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:43.947 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:43.947 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:43.947 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:43.947 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:43.947 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:43.947 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:43.947 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:43.947 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:43.947 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.207 nvme0n1 00:36:44.207 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.207 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:44.207 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:44.207 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.207 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.207 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.207 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:44.207 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:44.207 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.207 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.207 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.207 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:44.207 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:36:44.207 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:44.207 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:44.207 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:44.207 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:44.207 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzc3NWE3ODIxMDFhOWQwNzUyNzUyYTE5YjEwZDI3OGM5YmMyZTRlZDM1NTkwZDA3x1flIw==: 00:36:44.207 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NDc1MWNlODI1NDg1MWQ0YzRiMjRjNWNiY2Q4ZjdhZDllMDYxY2VkZDY4ZGZhMTZmm79czg==: 00:36:44.207 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:44.207 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:44.207 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzc3NWE3ODIxMDFhOWQwNzUyNzUyYTE5YjEwZDI3OGM5YmMyZTRlZDM1NTkwZDA3x1flIw==: 00:36:44.207 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDc1MWNlODI1NDg1MWQ0YzRiMjRjNWNiY2Q4ZjdhZDllMDYxY2VkZDY4ZGZhMTZmm79czg==: ]] 00:36:44.207 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDc1MWNlODI1NDg1MWQ0YzRiMjRjNWNiY2Q4ZjdhZDllMDYxY2VkZDY4ZGZhMTZmm79czg==: 00:36:44.207 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:36:44.207 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:44.207 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:44.207 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:44.207 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:44.207 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:44.207 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:44.207 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.207 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.207 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.207 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:36:44.207 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:44.207 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:44.207 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:44.207 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:44.207 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:44.207 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:44.207 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:44.207 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:44.207 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:44.207 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:44.207 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:44.207 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.207 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.207 nvme0n1 00:36:44.207 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.207 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:44.207 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:44.207 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:36:44.207 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.468 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.468 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:44.468 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:44.468 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.468 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.468 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.468 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:44.468 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:36:44.468 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:44.468 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:44.468 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:44.468 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:44.468 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDUyNGJiNjdmMDgyODRjOGZjZmJlY2ZiYjE4YWE4MTTBLwZ6: 00:36:44.468 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjliM2Q2MWQwYzUwMTJlNDgxOWMzNjRlNjAyMTNiMWPZSZoL: 00:36:44.468 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:44.468 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:44.468 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:ZDUyNGJiNjdmMDgyODRjOGZjZmJlY2ZiYjE4YWE4MTTBLwZ6: 00:36:44.468 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjliM2Q2MWQwYzUwMTJlNDgxOWMzNjRlNjAyMTNiMWPZSZoL: ]] 00:36:44.468 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjliM2Q2MWQwYzUwMTJlNDgxOWMzNjRlNjAyMTNiMWPZSZoL: 00:36:44.468 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:36:44.468 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:44.468 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:44.468 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:44.468 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:44.468 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:44.468 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:44.468 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.468 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.468 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.468 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:44.468 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:44.468 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:44.468 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:44.468 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:44.468 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:44.468 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:44.468 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:44.468 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:44.468 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:44.468 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:44.468 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:44.468 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.468 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.468 nvme0n1 00:36:44.468 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.468 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:44.468 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.468 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:44.468 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.468 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.729 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:44.729 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:36:44.729 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.729 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.729 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.729 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:44.729 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:36:44.729 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:44.729 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:44.729 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:44.729 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:44.729 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzU1YzYyMGI5N2Q2ZmJlZjA4Yjc1MjI1NjhkYTZhOWY5YmQ3MDU0OTgxYTU4MWZmJv0Sxw==: 00:36:44.729 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY1NDYzOWRmMDViMjNiYzhmZmM3MDU2YWJjNzhkYmX1HK+l: 00:36:44.729 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:44.729 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:44.729 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzU1YzYyMGI5N2Q2ZmJlZjA4Yjc1MjI1NjhkYTZhOWY5YmQ3MDU0OTgxYTU4MWZmJv0Sxw==: 00:36:44.729 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmY1NDYzOWRmMDViMjNiYzhmZmM3MDU2YWJjNzhkYmX1HK+l: ]] 00:36:44.729 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmY1NDYzOWRmMDViMjNiYzhmZmM3MDU2YWJjNzhkYmX1HK+l: 00:36:44.729 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:36:44.729 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:44.729 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:44.729 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:44.729 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:44.729 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:44.729 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:44.729 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.729 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.729 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.729 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:44.729 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:44.729 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:44.729 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:44.729 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:44.729 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:44.729 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:44.729 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:44.729 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 
-- # ip=NVMF_INITIATOR_IP 00:36:44.729 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:44.729 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:44.729 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:44.729 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.729 15:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.729 nvme0n1 00:36:44.729 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.729 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:44.729 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:44.729 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.729 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.729 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.729 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:44.729 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:44.729 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.729 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.729 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.729 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:36:44.729 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:36:44.729 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:44.729 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:44.729 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:44.729 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:44.729 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzRjOWQ3OWYwMzExMDI1Njk4YWI1MzhhODMzZmZmN2MwMGNiNTJiMjE4MTkwNjFkOWI3NjI4OTlhMTA5NWI3ZWiMYEQ=: 00:36:44.729 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:44.729 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:44.729 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:44.729 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzRjOWQ3OWYwMzExMDI1Njk4YWI1MzhhODMzZmZmN2MwMGNiNTJiMjE4MTkwNjFkOWI3NjI4OTlhMTA5NWI3ZWiMYEQ=: 00:36:44.729 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:44.729 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:36:44.729 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:44.729 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:44.729 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:44.729 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:44.729 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:44.730 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:44.730 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.730 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.991 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.991 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:44.991 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:44.991 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:44.991 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:44.991 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:44.991 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:44.991 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:44.991 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:44.991 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:44.991 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:44.991 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:44.991 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:44.991 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.991 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:44.991 nvme0n1 00:36:44.991 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.991 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:44.991 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:44.991 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.991 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.991 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.991 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:44.991 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:44.991 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.991 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.991 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.991 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:44.991 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:44.991 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:36:44.991 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:44.991 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:44.991 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:44.991 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:44.991 15:54:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTY0MWM4ZGMwMmZhN2EyOTZjMTA1YzE1ZWEzNGIxZTnG5rKp: 00:36:44.992 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yzg3OGRjZjg3ZjhjNjdjZWVjZTc5MDMzNTJiYmVhOWJiNDllZGRlNjRhMDUyNzJmM2M0NDk0MWUwZTc2ZDM4NTILcl4=: 00:36:44.992 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:44.992 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:44.992 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTY0MWM4ZGMwMmZhN2EyOTZjMTA1YzE1ZWEzNGIxZTnG5rKp: 00:36:44.992 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yzg3OGRjZjg3ZjhjNjdjZWVjZTc5MDMzNTJiYmVhOWJiNDllZGRlNjRhMDUyNzJmM2M0NDk0MWUwZTc2ZDM4NTILcl4=: ]] 00:36:44.992 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yzg3OGRjZjg3ZjhjNjdjZWVjZTc5MDMzNTJiYmVhOWJiNDllZGRlNjRhMDUyNzJmM2M0NDk0MWUwZTc2ZDM4NTILcl4=: 00:36:44.992 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:36:44.992 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:44.992 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:44.992 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:44.992 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:44.992 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:44.992 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:44.992 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.992 15:54:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.992 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.992 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:44.992 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:44.992 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:44.992 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:44.992 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:44.992 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:44.992 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:44.992 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:44.992 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:44.992 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:44.992 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:44.992 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:44.992 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.992 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.253 nvme0n1 00:36:45.253 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:45.253 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:45.253 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:45.253 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:45.253 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.253 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:45.253 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:45.253 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:45.254 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:45.254 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.254 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:45.254 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:45.254 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:36:45.254 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:45.254 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:45.254 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:45.254 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:45.254 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzc3NWE3ODIxMDFhOWQwNzUyNzUyYTE5YjEwZDI3OGM5YmMyZTRlZDM1NTkwZDA3x1flIw==: 00:36:45.254 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDc1MWNlODI1NDg1MWQ0YzRiMjRjNWNiY2Q4ZjdhZDllMDYxY2VkZDY4ZGZhMTZmm79czg==: 00:36:45.254 15:54:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:45.254 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:45.254 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzc3NWE3ODIxMDFhOWQwNzUyNzUyYTE5YjEwZDI3OGM5YmMyZTRlZDM1NTkwZDA3x1flIw==: 00:36:45.254 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDc1MWNlODI1NDg1MWQ0YzRiMjRjNWNiY2Q4ZjdhZDllMDYxY2VkZDY4ZGZhMTZmm79czg==: ]] 00:36:45.254 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDc1MWNlODI1NDg1MWQ0YzRiMjRjNWNiY2Q4ZjdhZDllMDYxY2VkZDY4ZGZhMTZmm79czg==: 00:36:45.254 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:36:45.254 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:45.254 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:45.254 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:45.254 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:45.254 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:45.254 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:45.254 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:45.254 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.254 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:45.254 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:45.254 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 
00:36:45.254 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:45.254 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:45.254 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:45.254 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:45.254 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:45.254 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:45.254 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:45.254 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:45.254 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:45.254 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:45.254 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:45.254 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.516 nvme0n1 00:36:45.516 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:45.516 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:45.516 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:45.516 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:45.516 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.516 
15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:45.516 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:45.516 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:45.516 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:45.516 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.516 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:45.516 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:45.516 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:36:45.516 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:45.516 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:45.516 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:45.516 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:45.516 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDUyNGJiNjdmMDgyODRjOGZjZmJlY2ZiYjE4YWE4MTTBLwZ6: 00:36:45.516 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjliM2Q2MWQwYzUwMTJlNDgxOWMzNjRlNjAyMTNiMWPZSZoL: 00:36:45.516 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:45.516 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:45.516 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDUyNGJiNjdmMDgyODRjOGZjZmJlY2ZiYjE4YWE4MTTBLwZ6: 00:36:45.516 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:ZjliM2Q2MWQwYzUwMTJlNDgxOWMzNjRlNjAyMTNiMWPZSZoL: ]] 00:36:45.516 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjliM2Q2MWQwYzUwMTJlNDgxOWMzNjRlNjAyMTNiMWPZSZoL: 00:36:45.516 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:36:45.516 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:45.516 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:45.516 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:45.516 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:45.516 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:45.516 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:45.516 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:45.516 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.516 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:45.516 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:45.516 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:45.516 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:45.516 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:45.516 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:45.516 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:45.516 15:54:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:45.516 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:45.516 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:45.516 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:45.516 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:45.516 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:45.516 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:45.516 15:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.798 nvme0n1 00:36:45.798 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:45.798 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:45.798 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:45.798 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:45.798 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.798 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:45.798 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:45.798 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:45.798 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:45.798 15:54:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.798 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:45.798 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:45.798 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:36:45.798 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:45.798 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:45.798 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:45.798 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:45.798 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzU1YzYyMGI5N2Q2ZmJlZjA4Yjc1MjI1NjhkYTZhOWY5YmQ3MDU0OTgxYTU4MWZmJv0Sxw==: 00:36:45.798 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY1NDYzOWRmMDViMjNiYzhmZmM3MDU2YWJjNzhkYmX1HK+l: 00:36:45.798 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:45.798 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:45.798 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzU1YzYyMGI5N2Q2ZmJlZjA4Yjc1MjI1NjhkYTZhOWY5YmQ3MDU0OTgxYTU4MWZmJv0Sxw==: 00:36:45.798 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmY1NDYzOWRmMDViMjNiYzhmZmM3MDU2YWJjNzhkYmX1HK+l: ]] 00:36:45.798 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmY1NDYzOWRmMDViMjNiYzhmZmM3MDU2YWJjNzhkYmX1HK+l: 00:36:45.798 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:36:45.798 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:36:45.798 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:45.798 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:45.798 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:45.798 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:45.798 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:45.798 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:45.798 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.798 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:45.798 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:45.798 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:45.798 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:45.798 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:45.798 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:45.798 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:45.798 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:45.798 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:45.798 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:45.798 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:45.798 15:54:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:45.798 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:45.798 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:45.798 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.118 nvme0n1 00:36:46.118 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:46.118 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:46.118 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:46.118 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:46.118 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.118 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:46.118 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:46.118 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:46.118 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:46.118 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.118 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:46.118 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:46.118 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:36:46.118 15:54:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:46.118 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:46.118 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:46.118 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:46.118 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzRjOWQ3OWYwMzExMDI1Njk4YWI1MzhhODMzZmZmN2MwMGNiNTJiMjE4MTkwNjFkOWI3NjI4OTlhMTA5NWI3ZWiMYEQ=: 00:36:46.118 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:46.118 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:46.118 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:46.118 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzRjOWQ3OWYwMzExMDI1Njk4YWI1MzhhODMzZmZmN2MwMGNiNTJiMjE4MTkwNjFkOWI3NjI4OTlhMTA5NWI3ZWiMYEQ=: 00:36:46.118 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:46.118 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:36:46.118 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:46.118 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:46.118 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:46.118 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:46.118 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:46.118 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:46.118 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:36:46.118 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.118 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:46.118 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:46.118 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:46.118 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:46.118 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:46.118 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:46.118 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:46.118 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:46.118 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:46.118 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:46.118 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:46.118 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:46.118 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:46.118 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:46.118 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.421 nvme0n1 00:36:46.421 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:46.421 
15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:46.421 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:46.421 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:46.421 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.421 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:46.421 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:46.421 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:46.421 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:46.421 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.421 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:46.421 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:46.421 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:46.421 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:36:46.421 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:46.421 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:46.421 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:46.421 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:46.421 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTY0MWM4ZGMwMmZhN2EyOTZjMTA1YzE1ZWEzNGIxZTnG5rKp: 00:36:46.421 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:03:Yzg3OGRjZjg3ZjhjNjdjZWVjZTc5MDMzNTJiYmVhOWJiNDllZGRlNjRhMDUyNzJmM2M0NDk0MWUwZTc2ZDM4NTILcl4=: 00:36:46.421 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:46.421 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:46.421 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTY0MWM4ZGMwMmZhN2EyOTZjMTA1YzE1ZWEzNGIxZTnG5rKp: 00:36:46.421 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yzg3OGRjZjg3ZjhjNjdjZWVjZTc5MDMzNTJiYmVhOWJiNDllZGRlNjRhMDUyNzJmM2M0NDk0MWUwZTc2ZDM4NTILcl4=: ]] 00:36:46.421 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yzg3OGRjZjg3ZjhjNjdjZWVjZTc5MDMzNTJiYmVhOWJiNDllZGRlNjRhMDUyNzJmM2M0NDk0MWUwZTc2ZDM4NTILcl4=: 00:36:46.421 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:36:46.421 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:46.421 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:46.421 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:46.421 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:46.421 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:46.421 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:46.421 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:46.421 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.421 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:46.421 
15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:46.421 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:46.421 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:46.421 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:46.421 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:46.421 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:46.421 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:46.421 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:46.421 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:46.421 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:46.422 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:46.422 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:46.422 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:46.422 15:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.738 nvme0n1 00:36:46.738 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:46.738 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:46.738 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:46.738 15:54:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:46.738 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.738 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:46.738 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:46.738 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:46.738 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:46.738 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.738 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:46.738 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:46.738 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:36:46.738 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:46.738 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:46.738 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:46.738 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:46.738 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzc3NWE3ODIxMDFhOWQwNzUyNzUyYTE5YjEwZDI3OGM5YmMyZTRlZDM1NTkwZDA3x1flIw==: 00:36:46.738 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDc1MWNlODI1NDg1MWQ0YzRiMjRjNWNiY2Q4ZjdhZDllMDYxY2VkZDY4ZGZhMTZmm79czg==: 00:36:46.738 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:46.739 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 
00:36:46.739 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzc3NWE3ODIxMDFhOWQwNzUyNzUyYTE5YjEwZDI3OGM5YmMyZTRlZDM1NTkwZDA3x1flIw==: 00:36:46.739 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDc1MWNlODI1NDg1MWQ0YzRiMjRjNWNiY2Q4ZjdhZDllMDYxY2VkZDY4ZGZhMTZmm79czg==: ]] 00:36:46.739 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDc1MWNlODI1NDg1MWQ0YzRiMjRjNWNiY2Q4ZjdhZDllMDYxY2VkZDY4ZGZhMTZmm79czg==: 00:36:46.739 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:36:46.739 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:46.739 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:46.739 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:46.739 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:46.739 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:46.739 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:46.739 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:46.739 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.739 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:46.739 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:46.739 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:46.739 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:46.739 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@766 -- # local -A ip_candidates 00:36:46.739 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:46.739 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:46.739 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:46.739 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:46.739 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:46.739 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:46.739 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:46.739 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:46.739 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:46.739 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.002 nvme0n1 00:36:47.002 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.002 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:47.002 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:47.002 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.002 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.002 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.002 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:47.002 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:47.002 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.002 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.002 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.002 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:47.002 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:36:47.002 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:47.002 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:47.002 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:47.002 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:47.002 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDUyNGJiNjdmMDgyODRjOGZjZmJlY2ZiYjE4YWE4MTTBLwZ6: 00:36:47.002 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjliM2Q2MWQwYzUwMTJlNDgxOWMzNjRlNjAyMTNiMWPZSZoL: 00:36:47.002 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:47.002 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:47.002 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDUyNGJiNjdmMDgyODRjOGZjZmJlY2ZiYjE4YWE4MTTBLwZ6: 00:36:47.002 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjliM2Q2MWQwYzUwMTJlNDgxOWMzNjRlNjAyMTNiMWPZSZoL: ]] 00:36:47.002 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:ZjliM2Q2MWQwYzUwMTJlNDgxOWMzNjRlNjAyMTNiMWPZSZoL: 00:36:47.002 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:36:47.002 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:47.002 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:47.002 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:47.002 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:47.002 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:47.002 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:47.002 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.002 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.002 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.002 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:47.002 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:47.002 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:47.002 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:47.002 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:47.002 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:47.002 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:47.002 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:47.002 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:47.002 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:47.002 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:47.002 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:47.002 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.002 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.262 nvme0n1 00:36:47.262 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.262 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:47.262 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:47.262 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.262 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.262 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.522 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:47.522 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:47.522 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.522 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.522 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.522 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:47.522 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:36:47.522 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:47.522 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:47.522 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:47.522 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:47.522 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzU1YzYyMGI5N2Q2ZmJlZjA4Yjc1MjI1NjhkYTZhOWY5YmQ3MDU0OTgxYTU4MWZmJv0Sxw==: 00:36:47.522 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY1NDYzOWRmMDViMjNiYzhmZmM3MDU2YWJjNzhkYmX1HK+l: 00:36:47.522 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:47.522 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:47.522 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzU1YzYyMGI5N2Q2ZmJlZjA4Yjc1MjI1NjhkYTZhOWY5YmQ3MDU0OTgxYTU4MWZmJv0Sxw==: 00:36:47.522 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmY1NDYzOWRmMDViMjNiYzhmZmM3MDU2YWJjNzhkYmX1HK+l: ]] 00:36:47.523 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmY1NDYzOWRmMDViMjNiYzhmZmM3MDU2YWJjNzhkYmX1HK+l: 00:36:47.523 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:36:47.523 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:47.523 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:47.523 15:54:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:47.523 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:47.523 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:47.523 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:47.523 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.523 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.523 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.523 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:47.523 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:47.523 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:47.523 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:47.523 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:47.523 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:47.523 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:47.523 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:47.523 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:47.523 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:47.523 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:47.523 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:47.523 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.523 15:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.784 nvme0n1 00:36:47.784 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.784 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:47.784 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:47.784 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.784 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.784 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.784 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:47.784 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:47.784 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.784 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.784 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.784 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:47.784 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:36:47.784 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:47.784 15:54:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:47.784 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:47.784 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:47.784 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzRjOWQ3OWYwMzExMDI1Njk4YWI1MzhhODMzZmZmN2MwMGNiNTJiMjE4MTkwNjFkOWI3NjI4OTlhMTA5NWI3ZWiMYEQ=: 00:36:47.784 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:47.784 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:47.784 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:47.784 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzRjOWQ3OWYwMzExMDI1Njk4YWI1MzhhODMzZmZmN2MwMGNiNTJiMjE4MTkwNjFkOWI3NjI4OTlhMTA5NWI3ZWiMYEQ=: 00:36:47.784 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:47.784 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:36:47.784 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:47.784 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:47.784 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:47.784 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:47.784 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:47.784 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:47.784 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.784 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:47.784 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.784 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:47.784 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:47.784 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:47.784 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:47.784 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:47.784 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:47.784 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:47.784 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:47.784 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:47.784 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:47.784 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:47.784 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:47.784 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.784 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.044 nvme0n1 00:36:48.044 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:48.044 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:48.044 
15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:48.044 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:48.044 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.044 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:48.044 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:48.044 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:48.044 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:48.044 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.044 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:48.044 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:48.044 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:48.044 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:36:48.044 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:48.044 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:48.044 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:48.044 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:48.044 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTY0MWM4ZGMwMmZhN2EyOTZjMTA1YzE1ZWEzNGIxZTnG5rKp: 00:36:48.044 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:Yzg3OGRjZjg3ZjhjNjdjZWVjZTc5MDMzNTJiYmVhOWJiNDllZGRlNjRhMDUyNzJmM2M0NDk0MWUwZTc2ZDM4NTILcl4=: 00:36:48.044 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:48.044 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:48.044 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTY0MWM4ZGMwMmZhN2EyOTZjMTA1YzE1ZWEzNGIxZTnG5rKp: 00:36:48.044 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yzg3OGRjZjg3ZjhjNjdjZWVjZTc5MDMzNTJiYmVhOWJiNDllZGRlNjRhMDUyNzJmM2M0NDk0MWUwZTc2ZDM4NTILcl4=: ]] 00:36:48.044 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yzg3OGRjZjg3ZjhjNjdjZWVjZTc5MDMzNTJiYmVhOWJiNDllZGRlNjRhMDUyNzJmM2M0NDk0MWUwZTc2ZDM4NTILcl4=: 00:36:48.044 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:36:48.044 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:48.044 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:48.044 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:48.044 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:48.044 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:48.044 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:48.044 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:48.044 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.044 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:48.044 15:54:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:48.044 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:48.044 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:48.044 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:48.044 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:48.044 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:48.044 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:48.044 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:48.044 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:48.044 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:48.044 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:48.044 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:48.044 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:48.044 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.614 nvme0n1 00:36:48.614 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:48.614 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:48.614 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:48.614 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:36:48.614 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.614 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:48.614 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:48.614 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:48.614 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:48.614 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.614 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:48.614 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:48.614 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:36:48.614 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:48.614 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:48.614 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:48.614 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:48.614 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzc3NWE3ODIxMDFhOWQwNzUyNzUyYTE5YjEwZDI3OGM5YmMyZTRlZDM1NTkwZDA3x1flIw==: 00:36:48.614 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDc1MWNlODI1NDg1MWQ0YzRiMjRjNWNiY2Q4ZjdhZDllMDYxY2VkZDY4ZGZhMTZmm79czg==: 00:36:48.614 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:48.614 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:48.614 15:54:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzc3NWE3ODIxMDFhOWQwNzUyNzUyYTE5YjEwZDI3OGM5YmMyZTRlZDM1NTkwZDA3x1flIw==: 00:36:48.614 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDc1MWNlODI1NDg1MWQ0YzRiMjRjNWNiY2Q4ZjdhZDllMDYxY2VkZDY4ZGZhMTZmm79czg==: ]] 00:36:48.614 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDc1MWNlODI1NDg1MWQ0YzRiMjRjNWNiY2Q4ZjdhZDllMDYxY2VkZDY4ZGZhMTZmm79czg==: 00:36:48.614 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:36:48.614 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:48.614 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:48.614 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:48.614 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:48.614 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:48.614 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:48.614 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:48.614 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.614 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:48.614 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:48.614 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:48.614 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:48.614 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A 
ip_candidates 00:36:48.614 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:48.614 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:48.614 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:48.614 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:48.614 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:48.614 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:48.614 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:48.614 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:48.614 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:48.614 15:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.185 nvme0n1 00:36:49.185 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:49.185 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:49.185 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:49.185 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:49.185 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.185 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:49.186 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:36:49.186 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:49.186 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:49.186 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.186 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:49.186 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:49.186 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:36:49.186 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:49.186 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:49.186 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:49.186 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:49.186 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDUyNGJiNjdmMDgyODRjOGZjZmJlY2ZiYjE4YWE4MTTBLwZ6: 00:36:49.186 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjliM2Q2MWQwYzUwMTJlNDgxOWMzNjRlNjAyMTNiMWPZSZoL: 00:36:49.186 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:49.186 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:49.186 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDUyNGJiNjdmMDgyODRjOGZjZmJlY2ZiYjE4YWE4MTTBLwZ6: 00:36:49.186 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjliM2Q2MWQwYzUwMTJlNDgxOWMzNjRlNjAyMTNiMWPZSZoL: ]] 00:36:49.186 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjliM2Q2MWQwYzUwMTJlNDgxOWMzNjRlNjAyMTNiMWPZSZoL: 00:36:49.186 
15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:36:49.186 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:49.186 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:49.186 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:49.186 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:49.186 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:49.186 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:49.186 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:49.186 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.186 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:49.186 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:49.186 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:49.186 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:49.186 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:49.186 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:49.186 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:49.186 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:49.186 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:49.186 15:54:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:49.186 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:49.186 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:49.186 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:49.186 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:49.186 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.446 nvme0n1 00:36:49.446 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:49.446 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:49.446 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:49.446 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:49.446 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.446 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:49.705 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:49.705 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:49.705 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:49.705 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.705 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:49.705 15:54:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:49.705 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:36:49.705 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:49.705 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:49.705 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:49.705 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:49.705 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzU1YzYyMGI5N2Q2ZmJlZjA4Yjc1MjI1NjhkYTZhOWY5YmQ3MDU0OTgxYTU4MWZmJv0Sxw==: 00:36:49.705 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY1NDYzOWRmMDViMjNiYzhmZmM3MDU2YWJjNzhkYmX1HK+l: 00:36:49.705 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:49.705 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:49.705 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzU1YzYyMGI5N2Q2ZmJlZjA4Yjc1MjI1NjhkYTZhOWY5YmQ3MDU0OTgxYTU4MWZmJv0Sxw==: 00:36:49.705 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmY1NDYzOWRmMDViMjNiYzhmZmM3MDU2YWJjNzhkYmX1HK+l: ]] 00:36:49.705 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmY1NDYzOWRmMDViMjNiYzhmZmM3MDU2YWJjNzhkYmX1HK+l: 00:36:49.705 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:36:49.705 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:49.705 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:49.705 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 
00:36:49.705 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:49.705 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:49.705 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:49.705 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:49.705 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.705 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:49.705 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:49.705 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:49.705 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:49.705 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:49.705 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:49.705 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:49.705 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:49.705 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:49.705 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:49.705 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:49.705 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:49.705 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:49.705 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:49.705 15:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.964 nvme0n1 00:36:49.964 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:49.964 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:49.964 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:49.964 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:49.964 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.964 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:49.964 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:49.965 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:49.965 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:49.965 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.225 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:50.225 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:50.225 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:36:50.225 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:50.225 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:50.225 15:54:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:50.225 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:50.225 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzRjOWQ3OWYwMzExMDI1Njk4YWI1MzhhODMzZmZmN2MwMGNiNTJiMjE4MTkwNjFkOWI3NjI4OTlhMTA5NWI3ZWiMYEQ=: 00:36:50.225 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:50.225 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:50.225 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:50.225 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzRjOWQ3OWYwMzExMDI1Njk4YWI1MzhhODMzZmZmN2MwMGNiNTJiMjE4MTkwNjFkOWI3NjI4OTlhMTA5NWI3ZWiMYEQ=: 00:36:50.225 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:50.225 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:36:50.225 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:50.225 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:50.225 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:50.225 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:50.225 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:50.225 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:50.225 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:50.225 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.225 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:50.225 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:50.225 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:50.225 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:50.225 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:50.225 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:50.225 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:50.225 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:50.225 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:50.225 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:50.225 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:50.225 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:50.225 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:50.225 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:50.225 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.485 nvme0n1 00:36:50.485 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:50.485 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:50.485 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:50.485 
15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:50.485 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.485 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:50.485 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:50.485 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:50.485 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:50.485 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.485 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:50.485 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:50.485 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:50.485 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:36:50.485 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:50.485 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:50.485 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:50.485 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:50.485 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTY0MWM4ZGMwMmZhN2EyOTZjMTA1YzE1ZWEzNGIxZTnG5rKp: 00:36:50.485 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yzg3OGRjZjg3ZjhjNjdjZWVjZTc5MDMzNTJiYmVhOWJiNDllZGRlNjRhMDUyNzJmM2M0NDk0MWUwZTc2ZDM4NTILcl4=: 00:36:50.485 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:36:50.485 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:50.485 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTY0MWM4ZGMwMmZhN2EyOTZjMTA1YzE1ZWEzNGIxZTnG5rKp: 00:36:50.485 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yzg3OGRjZjg3ZjhjNjdjZWVjZTc5MDMzNTJiYmVhOWJiNDllZGRlNjRhMDUyNzJmM2M0NDk0MWUwZTc2ZDM4NTILcl4=: ]] 00:36:50.485 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yzg3OGRjZjg3ZjhjNjdjZWVjZTc5MDMzNTJiYmVhOWJiNDllZGRlNjRhMDUyNzJmM2M0NDk0MWUwZTc2ZDM4NTILcl4=: 00:36:50.485 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:36:50.485 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:50.485 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:50.485 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:50.485 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:50.485 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:50.485 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:50.485 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:50.485 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.485 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:50.485 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:50.485 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:50.485 15:54:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:50.485 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:50.485 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:50.485 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:50.485 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:50.485 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:50.485 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:50.485 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:50.485 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:50.485 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:50.485 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:50.485 15:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.425 nvme0n1 00:36:51.425 15:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:51.425 15:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:51.425 15:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:51.425 15:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:51.425 15:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.425 15:54:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:51.425 15:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:51.425 15:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:51.425 15:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:51.425 15:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.425 15:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:51.425 15:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:51.425 15:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:36:51.425 15:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:51.425 15:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:51.425 15:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:51.425 15:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:51.425 15:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzc3NWE3ODIxMDFhOWQwNzUyNzUyYTE5YjEwZDI3OGM5YmMyZTRlZDM1NTkwZDA3x1flIw==: 00:36:51.425 15:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDc1MWNlODI1NDg1MWQ0YzRiMjRjNWNiY2Q4ZjdhZDllMDYxY2VkZDY4ZGZhMTZmm79czg==: 00:36:51.425 15:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:51.425 15:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:51.425 15:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzc3NWE3ODIxMDFhOWQwNzUyNzUyYTE5YjEwZDI3OGM5YmMyZTRlZDM1NTkwZDA3x1flIw==: 00:36:51.425 15:54:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDc1MWNlODI1NDg1MWQ0YzRiMjRjNWNiY2Q4ZjdhZDllMDYxY2VkZDY4ZGZhMTZmm79czg==: ]] 00:36:51.425 15:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDc1MWNlODI1NDg1MWQ0YzRiMjRjNWNiY2Q4ZjdhZDllMDYxY2VkZDY4ZGZhMTZmm79czg==: 00:36:51.425 15:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:36:51.425 15:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:51.425 15:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:51.425 15:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:51.425 15:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:51.425 15:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:51.425 15:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:51.425 15:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:51.425 15:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.425 15:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:51.425 15:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:51.425 15:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:51.425 15:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:51.425 15:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:51.425 15:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:51.425 15:54:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:51.425 15:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:51.425 15:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:51.425 15:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:51.425 15:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:51.425 15:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:51.425 15:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:51.425 15:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:51.425 15:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.998 nvme0n1 00:36:51.998 15:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:51.998 15:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:51.998 15:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:51.998 15:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:51.998 15:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.998 15:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:51.998 15:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:51.998 15:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:51.998 15:54:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:51.998 15:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.998 15:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:51.998 15:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:51.998 15:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:36:51.998 15:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:51.998 15:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:51.998 15:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:51.998 15:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:51.998 15:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDUyNGJiNjdmMDgyODRjOGZjZmJlY2ZiYjE4YWE4MTTBLwZ6: 00:36:51.998 15:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjliM2Q2MWQwYzUwMTJlNDgxOWMzNjRlNjAyMTNiMWPZSZoL: 00:36:51.998 15:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:51.998 15:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:51.998 15:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDUyNGJiNjdmMDgyODRjOGZjZmJlY2ZiYjE4YWE4MTTBLwZ6: 00:36:51.998 15:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjliM2Q2MWQwYzUwMTJlNDgxOWMzNjRlNjAyMTNiMWPZSZoL: ]] 00:36:51.998 15:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjliM2Q2MWQwYzUwMTJlNDgxOWMzNjRlNjAyMTNiMWPZSZoL: 00:36:51.998 15:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:36:51.998 15:54:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:51.998 15:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:51.998 15:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:51.998 15:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:51.998 15:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:51.998 15:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:51.998 15:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:51.998 15:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.998 15:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:51.998 15:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:51.998 15:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:51.998 15:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:51.998 15:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:51.998 15:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:51.998 15:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:51.998 15:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:51.998 15:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:51.998 15:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:51.998 15:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:51.998 15:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:51.998 15:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:51.998 15:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:51.998 15:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.568 nvme0n1 00:36:52.829 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:52.829 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:52.829 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:52.829 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:52.829 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.830 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:52.830 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:52.830 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:52.830 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:52.830 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.830 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:52.830 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:52.830 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:36:52.830 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:52.830 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:52.830 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:52.830 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:52.830 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzU1YzYyMGI5N2Q2ZmJlZjA4Yjc1MjI1NjhkYTZhOWY5YmQ3MDU0OTgxYTU4MWZmJv0Sxw==: 00:36:52.830 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY1NDYzOWRmMDViMjNiYzhmZmM3MDU2YWJjNzhkYmX1HK+l: 00:36:52.830 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:52.830 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:52.830 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzU1YzYyMGI5N2Q2ZmJlZjA4Yjc1MjI1NjhkYTZhOWY5YmQ3MDU0OTgxYTU4MWZmJv0Sxw==: 00:36:52.830 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmY1NDYzOWRmMDViMjNiYzhmZmM3MDU2YWJjNzhkYmX1HK+l: ]] 00:36:52.830 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmY1NDYzOWRmMDViMjNiYzhmZmM3MDU2YWJjNzhkYmX1HK+l: 00:36:52.830 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:36:52.830 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:52.830 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:52.830 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:52.830 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:52.830 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:52.830 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:52.830 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:52.830 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.830 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:52.830 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:52.830 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:52.830 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:52.830 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:52.830 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:52.830 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:52.830 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:52.830 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:52.830 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:52.830 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:52.830 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:52.830 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:52.830 15:54:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:52.830 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.401 nvme0n1 00:36:53.401 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:53.401 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:53.401 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:53.401 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:53.401 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.401 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:53.401 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:53.401 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:53.401 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:53.401 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.401 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:53.401 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:53.401 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:36:53.401 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:53.401 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:53.401 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:53.401 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:36:53.401 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzRjOWQ3OWYwMzExMDI1Njk4YWI1MzhhODMzZmZmN2MwMGNiNTJiMjE4MTkwNjFkOWI3NjI4OTlhMTA5NWI3ZWiMYEQ=: 00:36:53.401 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:53.401 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:53.402 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:53.402 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzRjOWQ3OWYwMzExMDI1Njk4YWI1MzhhODMzZmZmN2MwMGNiNTJiMjE4MTkwNjFkOWI3NjI4OTlhMTA5NWI3ZWiMYEQ=: 00:36:53.402 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:53.402 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:36:53.402 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:53.402 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:53.402 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:53.402 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:53.402 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:53.402 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:53.402 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:53.402 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.402 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:53.402 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:53.402 
15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:53.402 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:53.402 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:53.402 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:53.402 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:53.402 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:53.402 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:53.402 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:53.402 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:53.402 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:53.402 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:53.402 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:53.402 15:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.345 nvme0n1 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzc3NWE3ODIxMDFhOWQwNzUyNzUyYTE5YjEwZDI3OGM5YmMyZTRlZDM1NTkwZDA3x1flIw==: 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDc1MWNlODI1NDg1MWQ0YzRiMjRjNWNiY2Q4ZjdhZDllMDYxY2VkZDY4ZGZhMTZmm79czg==: 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzc3NWE3ODIxMDFhOWQwNzUyNzUyYTE5YjEwZDI3OGM5YmMyZTRlZDM1NTkwZDA3x1flIw==: 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:NDc1MWNlODI1NDg1MWQ0YzRiMjRjNWNiY2Q4ZjdhZDllMDYxY2VkZDY4ZGZhMTZmm79czg==: ]] 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDc1MWNlODI1NDg1MWQ0YzRiMjRjNWNiY2Q4ZjdhZDllMDYxY2VkZDY4ZGZhMTZmm79czg==: 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.345 request: 00:36:54.345 { 00:36:54.345 "name": "nvme0", 00:36:54.345 "trtype": "tcp", 00:36:54.345 "traddr": "10.0.0.1", 00:36:54.345 "adrfam": "ipv4", 00:36:54.345 "trsvcid": "4420", 00:36:54.345 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:54.345 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:54.345 "prchk_reftag": false, 00:36:54.345 "prchk_guard": false, 00:36:54.345 "hdgst": false, 00:36:54.345 "ddgst": false, 00:36:54.345 "allow_unrecognized_csi": false, 00:36:54.345 "method": "bdev_nvme_attach_controller", 00:36:54.345 "req_id": 1 00:36:54.345 } 00:36:54.345 Got JSON-RPC error 
response 00:36:54.345 response: 00:36:54.345 { 00:36:54.345 "code": -5, 00:36:54.345 "message": "Input/output error" 00:36:54.345 } 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 
-- # [[ -z tcp ]] 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:54.345 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.345 request: 
00:36:54.345 { 00:36:54.345 "name": "nvme0", 00:36:54.345 "trtype": "tcp", 00:36:54.345 "traddr": "10.0.0.1", 00:36:54.345 "adrfam": "ipv4", 00:36:54.345 "trsvcid": "4420", 00:36:54.345 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:54.345 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:54.345 "prchk_reftag": false, 00:36:54.345 "prchk_guard": false, 00:36:54.345 "hdgst": false, 00:36:54.345 "ddgst": false, 00:36:54.345 "dhchap_key": "key2", 00:36:54.345 "allow_unrecognized_csi": false, 00:36:54.346 "method": "bdev_nvme_attach_controller", 00:36:54.346 "req_id": 1 00:36:54.346 } 00:36:54.346 Got JSON-RPC error response 00:36:54.346 response: 00:36:54.346 { 00:36:54.346 "code": -5, 00:36:54.346 "message": "Input/output error" 00:36:54.346 } 00:36:54.346 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:54.346 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:36:54.346 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:54.346 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:54.346 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:54.346 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:36:54.346 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:36:54.346 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:54.346 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.346 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:54.346 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:36:54.606 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 
00:36:54.606 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:54.606 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:54.606 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:54.606 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:54.606 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:54.606 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:54.606 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:54.606 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:54.606 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:54.606 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:54.606 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:54.606 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:36:54.606 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:54.606 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:36:54.606 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:54.606 15:54:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:36:54.606 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:54.606 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:54.606 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:54.606 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.606 request: 00:36:54.606 { 00:36:54.606 "name": "nvme0", 00:36:54.606 "trtype": "tcp", 00:36:54.606 "traddr": "10.0.0.1", 00:36:54.606 "adrfam": "ipv4", 00:36:54.606 "trsvcid": "4420", 00:36:54.606 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:54.606 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:54.606 "prchk_reftag": false, 00:36:54.606 "prchk_guard": false, 00:36:54.606 "hdgst": false, 00:36:54.606 "ddgst": false, 00:36:54.606 "dhchap_key": "key1", 00:36:54.606 "dhchap_ctrlr_key": "ckey2", 00:36:54.606 "allow_unrecognized_csi": false, 00:36:54.606 "method": "bdev_nvme_attach_controller", 00:36:54.606 "req_id": 1 00:36:54.606 } 00:36:54.606 Got JSON-RPC error response 00:36:54.606 response: 00:36:54.606 { 00:36:54.606 "code": -5, 00:36:54.606 "message": "Input/output error" 00:36:54.606 } 00:36:54.606 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:54.606 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:36:54.606 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:54.606 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:54.606 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:54.606 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:36:54.606 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:54.606 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:54.606 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:54.606 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:54.606 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:54.606 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:54.606 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:54.606 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:54.606 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:54.606 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:54.606 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:36:54.606 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:54.606 15:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.606 nvme0n1 00:36:54.606 15:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:54.606 15:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:36:54.606 15:54:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:54.606 15:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:54.606 15:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:54.606 15:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:54.606 15:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDUyNGJiNjdmMDgyODRjOGZjZmJlY2ZiYjE4YWE4MTTBLwZ6: 00:36:54.606 15:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjliM2Q2MWQwYzUwMTJlNDgxOWMzNjRlNjAyMTNiMWPZSZoL: 00:36:54.606 15:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:54.606 15:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:54.606 15:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDUyNGJiNjdmMDgyODRjOGZjZmJlY2ZiYjE4YWE4MTTBLwZ6: 00:36:54.606 15:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjliM2Q2MWQwYzUwMTJlNDgxOWMzNjRlNjAyMTNiMWPZSZoL: ]] 00:36:54.606 15:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjliM2Q2MWQwYzUwMTJlNDgxOWMzNjRlNjAyMTNiMWPZSZoL: 00:36:54.606 15:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:54.606 15:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:54.606 15:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.868 15:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:54.868 15:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:36:54.868 15:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:36:54.868 
15:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:54.868 15:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.868 15:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:54.868 15:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:54.868 15:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:54.868 15:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:36:54.868 15:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:54.868 15:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:36:54.868 15:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:54.868 15:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:36:54.868 15:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:54.868 15:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:54.868 15:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:54.868 15:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.868 request: 00:36:54.868 { 00:36:54.868 "name": "nvme0", 00:36:54.868 "dhchap_key": "key1", 00:36:54.868 "dhchap_ctrlr_key": "ckey2", 00:36:54.868 "method": "bdev_nvme_set_keys", 00:36:54.868 "req_id": 1 00:36:54.868 } 00:36:54.868 Got JSON-RPC error response 00:36:54.868 response: 
00:36:54.868 { 00:36:54.868 "code": -13, 00:36:54.868 "message": "Permission denied" 00:36:54.868 } 00:36:54.868 15:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:54.868 15:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:36:54.868 15:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:54.868 15:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:54.868 15:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:54.868 15:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:54.868 15:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:54.868 15:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:54.868 15:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.868 15:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:54.868 15:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:36:54.868 15:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:36:56.252 15:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:56.252 15:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:56.252 15:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:56.252 15:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:56.252 15:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:56.252 15:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:36:56.252 15:54:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzc3NWE3ODIxMDFhOWQwNzUyNzUyYTE5YjEwZDI3OGM5YmMyZTRlZDM1NTkwZDA3x1flIw==: 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDc1MWNlODI1NDg1MWQ0YzRiMjRjNWNiY2Q4ZjdhZDllMDYxY2VkZDY4ZGZhMTZmm79czg==: 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzc3NWE3ODIxMDFhOWQwNzUyNzUyYTE5YjEwZDI3OGM5YmMyZTRlZDM1NTkwZDA3x1flIw==: 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:NDc1MWNlODI1NDg1MWQ0YzRiMjRjNWNiY2Q4ZjdhZDllMDYxY2VkZDY4ZGZhMTZmm79czg==: ]] 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDc1MWNlODI1NDg1MWQ0YzRiMjRjNWNiY2Q4ZjdhZDllMDYxY2VkZDY4ZGZhMTZmm79czg==: 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.195 nvme0n1 00:36:57.195 15:54:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDUyNGJiNjdmMDgyODRjOGZjZmJlY2ZiYjE4YWE4MTTBLwZ6: 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjliM2Q2MWQwYzUwMTJlNDgxOWMzNjRlNjAyMTNiMWPZSZoL: 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDUyNGJiNjdmMDgyODRjOGZjZmJlY2ZiYjE4YWE4MTTBLwZ6: 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjliM2Q2MWQwYzUwMTJlNDgxOWMzNjRlNjAyMTNiMWPZSZoL: ]] 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjliM2Q2MWQwYzUwMTJlNDgxOWMzNjRlNjAyMTNiMWPZSZoL: 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:57.195 15:54:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.195 request: 00:36:57.195 { 00:36:57.195 "name": "nvme0", 00:36:57.195 "dhchap_key": "key2", 00:36:57.195 "dhchap_ctrlr_key": "ckey1", 00:36:57.195 "method": "bdev_nvme_set_keys", 00:36:57.195 "req_id": 1 00:36:57.195 } 00:36:57.195 Got JSON-RPC error response 00:36:57.195 response: 00:36:57.195 { 00:36:57.195 "code": -13, 00:36:57.195 "message": "Permission denied" 00:36:57.195 } 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:36:57.195 15:54:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:36:57.195 15:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:36:58.581 15:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:36:58.581 15:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:36:58.581 15:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:58.582 15:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:58.582 15:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:58.582 15:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:36:58.582 15:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:36:58.582 15:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:36:58.582 15:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:36:58.582 15:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # nvmfcleanup 00:36:58.582 15:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:36:58.582 15:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:58.582 15:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:36:58.582 15:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:58.582 15:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:58.582 rmmod nvme_tcp 
00:36:58.582 rmmod nvme_fabrics 00:36:58.582 15:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:58.582 15:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:36:58.582 15:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:36:58.582 15:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@513 -- # '[' -n 3375088 ']' 00:36:58.582 15:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # killprocess 3375088 00:36:58.582 15:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 3375088 ']' 00:36:58.582 15:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 3375088 00:36:58.582 15:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:36:58.582 15:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:58.582 15:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3375088 00:36:58.582 15:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:58.582 15:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:58.582 15:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3375088' 00:36:58.582 killing process with pid 3375088 00:36:58.582 15:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 3375088 00:36:58.582 15:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 3375088 00:36:58.582 15:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:36:58.582 15:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:36:58.582 15:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@520 -- # nvmf_tcp_fini 00:36:58.582 15:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:36:58.582 15:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # iptables-save 00:36:58.582 15:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:36:58.582 15:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # iptables-restore 00:36:58.582 15:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:58.582 15:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:58.582 15:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:58.582 15:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:58.582 15:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:01.127 15:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:01.127 15:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:37:01.127 15:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:37:01.127 15:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:37:01.127 15:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:37:01.127 15:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # echo 0 00:37:01.127 15:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:37:01.127 15:54:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:37:01.127 15:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:37:01.127 15:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:37:01.127 15:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:37:01.127 15:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:37:01.127 15:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@722 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:04.432 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:04.432 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:04.432 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:04.432 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:04.433 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:04.433 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:04.433 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:04.433 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:04.433 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:04.433 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:04.433 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:04.433 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:04.433 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:04.433 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:04.433 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:04.433 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:04.433 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:37:05.005 15:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.tXo /tmp/spdk.key-null.LL6 /tmp/spdk.key-sha256.MKD /tmp/spdk.key-sha384.j9r 
/tmp/spdk.key-sha512.3EI /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:37:05.005 15:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:09.214 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:37:09.214 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:37:09.214 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:37:09.214 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:37:09.214 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:37:09.214 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:37:09.214 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:37:09.214 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:37:09.214 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:37:09.214 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:37:09.214 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:37:09.214 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:37:09.214 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:37:09.215 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:37:09.215 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:37:09.215 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:37:09.215 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:37:09.215 00:37:09.215 real 1m0.970s 00:37:09.215 user 0m54.248s 00:37:09.215 sys 0m16.699s 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:09.215 ************************************ 00:37:09.215 END TEST nvmf_auth_host 00:37:09.215 ************************************ 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # 
[[ tcp == \t\c\p ]] 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:09.215 ************************************ 00:37:09.215 START TEST nvmf_digest 00:37:09.215 ************************************ 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:37:09.215 * Looking for test storage... 00:37:09.215 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lcov --version 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:37:09.215 15:54:48 
nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 
-- # (( ver1[v] < ver2[v] )) 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:09.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:09.215 --rc genhtml_branch_coverage=1 00:37:09.215 --rc genhtml_function_coverage=1 00:37:09.215 --rc genhtml_legend=1 00:37:09.215 --rc geninfo_all_blocks=1 00:37:09.215 --rc geninfo_unexecuted_blocks=1 00:37:09.215 00:37:09.215 ' 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:09.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:09.215 --rc genhtml_branch_coverage=1 00:37:09.215 --rc genhtml_function_coverage=1 00:37:09.215 --rc genhtml_legend=1 00:37:09.215 --rc geninfo_all_blocks=1 00:37:09.215 --rc geninfo_unexecuted_blocks=1 00:37:09.215 00:37:09.215 ' 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:09.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:09.215 --rc genhtml_branch_coverage=1 00:37:09.215 --rc genhtml_function_coverage=1 00:37:09.215 --rc genhtml_legend=1 00:37:09.215 --rc geninfo_all_blocks=1 00:37:09.215 --rc geninfo_unexecuted_blocks=1 00:37:09.215 00:37:09.215 ' 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:09.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:09.215 --rc genhtml_branch_coverage=1 00:37:09.215 --rc genhtml_function_coverage=1 00:37:09.215 --rc genhtml_legend=1 00:37:09.215 --rc geninfo_all_blocks=1 00:37:09.215 --rc geninfo_unexecuted_blocks=1 00:37:09.215 00:37:09.215 ' 00:37:09.215 15:54:48 
nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:09.215 
15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:09.215 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:09.216 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:09.216 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:09.216 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:37:09.216 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:09.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:09.216 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:09.216 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:09.216 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:09.216 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:37:09.216 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:37:09.216 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:37:09.216 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:37:09.216 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:37:09.216 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:37:09.216 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:09.216 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # prepare_net_devs 00:37:09.216 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@434 -- # local -g is_hw=no 00:37:09.216 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # remove_spdk_ns 00:37:09.216 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:09.216 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:09.216 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:09.216 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:37:09.216 15:54:48 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:37:09.216 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:37:09.216 15:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:17.356 15:54:55 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:37:17.356 Found 0000:31:00.0 (0x8086 - 0x159b) 00:37:17.356 15:54:55 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:37:17.356 Found 0000:31:00.1 (0x8086 - 0x159b) 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@413 -- # 
for net_dev in "${!pci_net_devs[@]}" 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:37:17.356 Found net devices under 0000:31:00.0: cvl_0_0 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:37:17.356 Found net devices under 0000:31:00.1: cvl_0_1 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # is_hw=yes 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ yes == yes ]] 
00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:17.356 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:17.357 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:17.357 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:17.357 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:17.357 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:17.357 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:17.357 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:17.357 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:17.357 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:17.357 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:17.357 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:17.357 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:17.357 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:17.357 15:54:55 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:17.357 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:17.357 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:17.357 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:17.357 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:17.357 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:17.357 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:17.357 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:17.357 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.681 ms 00:37:17.357 00:37:17.357 --- 10.0.0.2 ping statistics --- 00:37:17.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:17.357 rtt min/avg/max/mdev = 0.681/0.681/0.681/0.000 ms 00:37:17.357 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:17.357 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:17.357 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:37:17.357 00:37:17.357 --- 10.0.0.1 ping statistics --- 00:37:17.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:17.357 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:37:17.357 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:17.357 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # return 0 00:37:17.357 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:37:17.357 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:17.357 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:37:17.357 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:37:17.357 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:17.357 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:37:17.357 15:54:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:37:17.357 15:54:56 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:37:17.357 15:54:56 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:37:17.357 15:54:56 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:37:17.357 15:54:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:17.357 15:54:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:17.357 15:54:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:17.357 ************************************ 00:37:17.357 START TEST nvmf_digest_clean 00:37:17.357 ************************************ 00:37:17.357 
15:54:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:37:17.357 15:54:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:37:17.357 15:54:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:37:17.357 15:54:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:37:17.357 15:54:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:37:17.357 15:54:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:37:17.357 15:54:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:37:17.357 15:54:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:17.357 15:54:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:17.357 15:54:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # nvmfpid=3392587 00:37:17.357 15:54:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # waitforlisten 3392587 00:37:17.357 15:54:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:37:17.357 15:54:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3392587 ']' 00:37:17.357 15:54:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:17.357 15:54:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:17.357 15:54:56 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:17.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:17.357 15:54:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:17.357 15:54:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:17.357 [2024-10-01 15:54:56.116172] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:37:17.357 [2024-10-01 15:54:56.116233] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:17.357 [2024-10-01 15:54:56.157660] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:37:17.357 [2024-10-01 15:54:56.207193] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:17.357 [2024-10-01 15:54:56.252847] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:17.357 [2024-10-01 15:54:56.252907] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:17.357 [2024-10-01 15:54:56.252922] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:17.357 [2024-10-01 15:54:56.252932] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:17.357 [2024-10-01 15:54:56.252939] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:17.357 [2024-10-01 15:54:56.252967] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:37:17.617 15:54:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:17.617 15:54:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:37:17.617 15:54:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:37:17.617 15:54:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:17.617 15:54:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:17.617 15:54:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:17.617 15:54:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:37:17.617 15:54:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:37:17.617 15:54:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:37:17.617 15:54:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.617 15:54:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:17.617 null0 00:37:17.617 [2024-10-01 15:54:57.039487] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:17.617 [2024-10-01 15:54:57.063698] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:17.617 15:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.617 15:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:37:17.617 15:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:17.617 15:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:17.617 15:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:37:17.617 15:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:37:17.617 15:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:37:17.617 15:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:17.877 15:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3392813 00:37:17.877 15:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3392813 /var/tmp/bperf.sock 00:37:17.877 15:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3392813 ']' 00:37:17.877 15:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:37:17.877 15:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:17.877 15:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:17.877 15:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:17.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:37:17.877 15:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:17.877 15:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:17.877 [2024-10-01 15:54:57.118689] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:37:17.877 [2024-10-01 15:54:57.118736] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3392813 ] 00:37:17.877 [2024-10-01 15:54:57.148734] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:37:17.877 [2024-10-01 15:54:57.198028] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:17.877 [2024-10-01 15:54:57.229972] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:18.817 15:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:18.817 15:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:37:18.817 15:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:18.817 15:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:18.817 15:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:18.817 15:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:18.817 15:54:58 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:19.078 nvme0n1 00:37:19.078 15:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:19.078 15:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:19.078 Running I/O for 2 seconds... 00:37:21.403 19019.00 IOPS, 74.29 MiB/s 20399.50 IOPS, 79.69 MiB/s 00:37:21.403 Latency(us) 00:37:21.403 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:21.403 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:21.403 nvme0n1 : 2.00 20424.37 79.78 0.00 0.00 6259.58 2402.99 22937.60 00:37:21.403 =================================================================================================================== 00:37:21.403 Total : 20424.37 79.78 0.00 0.00 6259.58 2402.99 22937.60 00:37:21.403 { 00:37:21.403 "results": [ 00:37:21.403 { 00:37:21.403 "job": "nvme0n1", 00:37:21.403 "core_mask": "0x2", 00:37:21.403 "workload": "randread", 00:37:21.403 "status": "finished", 00:37:21.403 "queue_depth": 128, 00:37:21.403 "io_size": 4096, 00:37:21.403 "runtime": 2.003832, 00:37:21.403 "iops": 20424.3669129947, 00:37:21.403 "mibps": 79.78268325388555, 00:37:21.403 "io_failed": 0, 00:37:21.403 "io_timeout": 0, 00:37:21.403 "avg_latency_us": 6259.5787843395965, 00:37:21.403 "min_latency_us": 2402.9866666666667, 00:37:21.403 "max_latency_us": 22937.6 00:37:21.403 } 00:37:21.403 ], 00:37:21.403 "core_count": 1 00:37:21.403 } 00:37:21.403 15:55:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 
00:37:21.403 15:55:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:37:21.403 15:55:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:21.403 15:55:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:21.403 15:55:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:21.403 | select(.opcode=="crc32c") 00:37:21.403 | "\(.module_name) \(.executed)"' 00:37:21.403 15:55:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:21.403 15:55:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:21.403 15:55:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:21.403 15:55:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:37:21.403 15:55:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3392813 00:37:21.403 15:55:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3392813 ']' 00:37:21.403 15:55:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3392813 00:37:21.403 15:55:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:37:21.403 15:55:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:21.403 15:55:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3392813 00:37:21.403 15:55:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # 
process_name=reactor_1 00:37:21.403 15:55:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:21.403 15:55:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3392813' 00:37:21.403 killing process with pid 3392813 00:37:21.403 15:55:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3392813 00:37:21.403 Received shutdown signal, test time was about 2.000000 seconds 00:37:21.403 00:37:21.403 Latency(us) 00:37:21.403 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:21.403 =================================================================================================================== 00:37:21.403 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:21.403 15:55:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3392813 00:37:21.663 15:55:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:37:21.663 15:55:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:21.663 15:55:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:21.663 15:55:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:37:21.663 15:55:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:37:21.663 15:55:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:37:21.663 15:55:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:21.663 15:55:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3393496 00:37:21.663 15:55:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@84 -- # waitforlisten 3393496 /var/tmp/bperf.sock 00:37:21.663 15:55:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3393496 ']' 00:37:21.663 15:55:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:37:21.663 15:55:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:21.663 15:55:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:21.663 15:55:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:21.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:21.663 15:55:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:21.663 15:55:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:21.663 [2024-10-01 15:55:00.977041] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:37:21.663 [2024-10-01 15:55:00.977110] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3393496 ] 00:37:21.663 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:21.663 Zero copy mechanism will not be used. 00:37:21.663 [2024-10-01 15:55:01.008316] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:37:21.663 [2024-10-01 15:55:01.057375] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:21.663 [2024-10-01 15:55:01.083588] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:22.606 15:55:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:22.606 15:55:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:37:22.606 15:55:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:22.606 15:55:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:22.606 15:55:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:22.606 15:55:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:22.606 15:55:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:22.867 nvme0n1 00:37:22.867 15:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:22.867 15:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:23.128 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:23.128 Zero copy mechanism will not be used. 00:37:23.128 Running I/O for 2 seconds... 
00:37:25.012 2941.00 IOPS, 367.62 MiB/s 3626.00 IOPS, 453.25 MiB/s 00:37:25.012 Latency(us) 00:37:25.012 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:25.012 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:37:25.012 nvme0n1 : 2.01 3622.47 452.81 0.00 0.00 4414.10 761.17 12397.23 00:37:25.012 =================================================================================================================== 00:37:25.012 Total : 3622.47 452.81 0.00 0.00 4414.10 761.17 12397.23 00:37:25.012 { 00:37:25.012 "results": [ 00:37:25.012 { 00:37:25.012 "job": "nvme0n1", 00:37:25.012 "core_mask": "0x2", 00:37:25.012 "workload": "randread", 00:37:25.012 "status": "finished", 00:37:25.012 "queue_depth": 16, 00:37:25.012 "io_size": 131072, 00:37:25.012 "runtime": 2.006364, 00:37:25.012 "iops": 3622.473289991248, 00:37:25.012 "mibps": 452.809161248906, 00:37:25.012 "io_failed": 0, 00:37:25.012 "io_timeout": 0, 00:37:25.012 "avg_latency_us": 4414.102190423775, 00:37:25.012 "min_latency_us": 761.1733333333333, 00:37:25.012 "max_latency_us": 12397.226666666667 00:37:25.012 } 00:37:25.012 ], 00:37:25.012 "core_count": 1 00:37:25.012 } 00:37:25.012 15:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:25.012 15:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:37:25.012 15:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:25.012 15:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:25.012 | select(.opcode=="crc32c") 00:37:25.012 | "\(.module_name) \(.executed)"' 00:37:25.012 15:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:25.273 15:55:04 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:25.273 15:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:25.273 15:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:25.273 15:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:37:25.273 15:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3393496 00:37:25.273 15:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3393496 ']' 00:37:25.273 15:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3393496 00:37:25.273 15:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:37:25.273 15:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:25.273 15:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3393496 00:37:25.273 15:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:25.273 15:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:25.273 15:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3393496' 00:37:25.273 killing process with pid 3393496 00:37:25.273 15:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3393496 00:37:25.273 Received shutdown signal, test time was about 2.000000 seconds 00:37:25.273 00:37:25.273 Latency(us) 00:37:25.273 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:37:25.273 =================================================================================================================== 00:37:25.273 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:25.273 15:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3393496 00:37:25.535 15:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:37:25.535 15:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:25.535 15:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:25.535 15:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:37:25.535 15:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:37:25.535 15:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:37:25.535 15:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:25.535 15:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3394188 00:37:25.535 15:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3394188 /var/tmp/bperf.sock 00:37:25.535 15:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3394188 ']' 00:37:25.535 15:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:37:25.535 15:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:25.535 15:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:37:25.535 15:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:25.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:25.535 15:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:25.535 15:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:25.535 [2024-10-01 15:55:04.812740] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:37:25.535 [2024-10-01 15:55:04.812794] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3394188 ] 00:37:25.535 [2024-10-01 15:55:04.843091] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:37:25.535 [2024-10-01 15:55:04.890053] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:25.535 [2024-10-01 15:55:04.918014] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:25.535 15:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:25.535 15:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:37:25.535 15:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:25.535 15:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:25.535 15:55:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:25.796 15:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:25.796 15:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:26.368 nvme0n1 00:37:26.368 15:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:26.368 15:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:26.368 Running I/O for 2 seconds... 
00:37:28.253 30520.00 IOPS, 119.22 MiB/s 30612.00 IOPS, 119.58 MiB/s 00:37:28.253 Latency(us) 00:37:28.253 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:28.253 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:28.253 nvme0n1 : 2.00 30630.70 119.65 0.00 0.00 4174.99 2143.57 13544.11 00:37:28.253 =================================================================================================================== 00:37:28.253 Total : 30630.70 119.65 0.00 0.00 4174.99 2143.57 13544.11 00:37:28.253 { 00:37:28.253 "results": [ 00:37:28.253 { 00:37:28.253 "job": "nvme0n1", 00:37:28.253 "core_mask": "0x2", 00:37:28.253 "workload": "randwrite", 00:37:28.253 "status": "finished", 00:37:28.253 "queue_depth": 128, 00:37:28.253 "io_size": 4096, 00:37:28.253 "runtime": 2.002958, 00:37:28.253 "iops": 30630.69719884291, 00:37:28.253 "mibps": 119.65116093298012, 00:37:28.253 "io_failed": 0, 00:37:28.253 "io_timeout": 0, 00:37:28.253 "avg_latency_us": 4174.9861311774675, 00:37:28.253 "min_latency_us": 2143.5733333333333, 00:37:28.253 "max_latency_us": 13544.106666666667 00:37:28.253 } 00:37:28.253 ], 00:37:28.253 "core_count": 1 00:37:28.253 } 00:37:28.253 15:55:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:28.253 15:55:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:37:28.253 15:55:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:28.253 15:55:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:28.253 | select(.opcode=="crc32c") 00:37:28.253 | "\(.module_name) \(.executed)"' 00:37:28.253 15:55:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:28.514 
15:55:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:28.514 15:55:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:28.514 15:55:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:28.514 15:55:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:37:28.514 15:55:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3394188 00:37:28.514 15:55:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3394188 ']' 00:37:28.514 15:55:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3394188 00:37:28.514 15:55:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:37:28.514 15:55:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:28.514 15:55:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3394188 00:37:28.514 15:55:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:28.514 15:55:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:28.514 15:55:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3394188' 00:37:28.514 killing process with pid 3394188 00:37:28.514 15:55:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3394188 00:37:28.514 Received shutdown signal, test time was about 2.000000 seconds 00:37:28.514 00:37:28.514 Latency(us) 00:37:28.514 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:37:28.514 =================================================================================================================== 00:37:28.514 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:28.514 15:55:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3394188 00:37:28.775 15:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:37:28.775 15:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:28.775 15:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:28.775 15:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:37:28.775 15:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:37:28.775 15:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:37:28.775 15:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:28.775 15:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3394860 00:37:28.775 15:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3394860 /var/tmp/bperf.sock 00:37:28.775 15:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3394860 ']' 00:37:28.775 15:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:37:28.775 15:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:28.775 15:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:37:28.775 15:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:28.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:28.775 15:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:28.775 15:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:28.775 [2024-10-01 15:55:08.116290] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:37:28.775 [2024-10-01 15:55:08.116377] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3394860 ] 00:37:28.775 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:28.775 Zero copy mechanism will not be used. 00:37:28.776 [2024-10-01 15:55:08.148131] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:37:28.776 [2024-10-01 15:55:08.196545] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:28.776 [2024-10-01 15:55:08.224168] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:29.718 15:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:29.718 15:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:37:29.718 15:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:29.718 15:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:29.718 15:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:29.718 15:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:29.718 15:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:29.979 nvme0n1 00:37:29.979 15:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:29.979 15:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:29.979 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:29.979 Zero copy mechanism will not be used. 00:37:29.979 Running I/O for 2 seconds... 
00:37:32.308 3599.00 IOPS, 449.88 MiB/s 3864.50 IOPS, 483.06 MiB/s 00:37:32.308 Latency(us) 00:37:32.308 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:32.308 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:37:32.308 nvme0n1 : 2.01 3864.50 483.06 0.00 0.00 4134.28 1201.49 8355.84 00:37:32.308 =================================================================================================================== 00:37:32.308 Total : 3864.50 483.06 0.00 0.00 4134.28 1201.49 8355.84 00:37:32.308 { 00:37:32.308 "results": [ 00:37:32.308 { 00:37:32.308 "job": "nvme0n1", 00:37:32.308 "core_mask": "0x2", 00:37:32.308 "workload": "randwrite", 00:37:32.308 "status": "finished", 00:37:32.308 "queue_depth": 16, 00:37:32.308 "io_size": 131072, 00:37:32.308 "runtime": 2.005175, 00:37:32.308 "iops": 3864.5006046853764, 00:37:32.308 "mibps": 483.06257558567205, 00:37:32.308 "io_failed": 0, 00:37:32.308 "io_timeout": 0, 00:37:32.308 "avg_latency_us": 4134.279636942401, 00:37:32.308 "min_latency_us": 1201.4933333333333, 00:37:32.308 "max_latency_us": 8355.84 00:37:32.308 } 00:37:32.308 ], 00:37:32.308 "core_count": 1 00:37:32.308 } 00:37:32.308 15:55:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:32.308 15:55:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:37:32.308 15:55:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:32.308 15:55:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:32.308 | select(.opcode=="crc32c") 00:37:32.308 | "\(.module_name) \(.executed)"' 00:37:32.308 15:55:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:32.308 15:55:11 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:32.309 15:55:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:32.309 15:55:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:32.309 15:55:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:37:32.309 15:55:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3394860 00:37:32.309 15:55:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3394860 ']' 00:37:32.309 15:55:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3394860 00:37:32.309 15:55:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:37:32.309 15:55:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:32.309 15:55:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3394860 00:37:32.309 15:55:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:32.309 15:55:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:32.309 15:55:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3394860' 00:37:32.309 killing process with pid 3394860 00:37:32.309 15:55:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3394860 00:37:32.309 Received shutdown signal, test time was about 2.000000 seconds 00:37:32.309 00:37:32.309 Latency(us) 00:37:32.309 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:37:32.309 =================================================================================================================== 00:37:32.309 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:32.309 15:55:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3394860 00:37:32.570 15:55:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3392587 00:37:32.570 15:55:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3392587 ']' 00:37:32.570 15:55:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3392587 00:37:32.570 15:55:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:37:32.570 15:55:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:32.570 15:55:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3392587 00:37:32.570 15:55:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:32.570 15:55:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:32.570 15:55:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3392587' 00:37:32.570 killing process with pid 3392587 00:37:32.570 15:55:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3392587 00:37:32.570 15:55:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3392587 00:37:32.570 00:37:32.570 real 0m15.953s 00:37:32.570 user 0m31.580s 00:37:32.570 sys 0m3.508s 00:37:32.570 15:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:37:32.570 15:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:32.570 ************************************ 00:37:32.570 END TEST nvmf_digest_clean 00:37:32.570 ************************************ 00:37:32.831 15:55:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:37:32.831 15:55:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:32.831 15:55:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:32.831 15:55:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:32.831 ************************************ 00:37:32.831 START TEST nvmf_digest_error 00:37:32.831 ************************************ 00:37:32.831 15:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:37:32.831 15:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:37:32.831 15:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:37:32.831 15:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:32.831 15:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:32.831 15:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # nvmfpid=3395603 00:37:32.831 15:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # waitforlisten 3395603 00:37:32.831 15:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:37:32.831 15:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@831 -- # '[' -z 3395603 ']' 00:37:32.831 15:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:32.831 15:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:32.831 15:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:32.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:32.831 15:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:32.831 15:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:32.831 [2024-10-01 15:55:12.142619] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:37:32.831 [2024-10-01 15:55:12.142668] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:32.831 [2024-10-01 15:55:12.178395] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:37:32.831 [2024-10-01 15:55:12.224830] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:32.831 [2024-10-01 15:55:12.252646] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:32.831 [2024-10-01 15:55:12.252677] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:37:32.831 [2024-10-01 15:55:12.252686] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:32.831 [2024-10-01 15:55:12.252692] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:32.831 [2024-10-01 15:55:12.252697] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:32.831 [2024-10-01 15:55:12.252716] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:37:33.772 15:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:33.772 15:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:37:33.772 15:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:37:33.772 15:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:33.772 15:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:33.772 15:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:33.772 15:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:37:33.772 15:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:33.772 15:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:33.772 [2024-10-01 15:55:12.966721] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:37:33.772 15:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:33.772 15:55:12 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:37:33.772 15:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:37:33.772 15:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:33.772 15:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:33.772 null0 00:37:33.772 [2024-10-01 15:55:13.038805] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:33.772 [2024-10-01 15:55:13.063012] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:33.772 15:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:33.772 15:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:37:33.772 15:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:37:33.772 15:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:37:33.772 15:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:37:33.772 15:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:37:33.772 15:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3395918 00:37:33.772 15:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3395918 /var/tmp/bperf.sock 00:37:33.772 15:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3395918 ']' 00:37:33.772 15:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
00:37:33.772 15:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:33.772 15:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:33.772 15:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:33.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:33.772 15:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:33.772 15:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:33.772 [2024-10-01 15:55:13.129259] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:37:33.772 [2024-10-01 15:55:13.129307] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3395918 ] 00:37:33.772 [2024-10-01 15:55:13.159589] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:37:33.772 [2024-10-01 15:55:13.205338] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:34.031 [2024-10-01 15:55:13.233819] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:34.602 15:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:34.602 15:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:37:34.602 15:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:34.602 15:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:34.862 15:55:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:37:34.862 15:55:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:34.862 15:55:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:34.862 15:55:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:34.862 15:55:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:34.862 15:55:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:35.122 nvme0n1 00:37:35.122 15:55:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:37:35.122 15:55:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.122 15:55:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:35.122 15:55:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.122 15:55:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:37:35.122 15:55:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:35.122 Running I/O for 2 seconds... 00:37:35.123 [2024-10-01 15:55:14.486609] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.123 [2024-10-01 15:55:14.486640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.123 [2024-10-01 15:55:14.486649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.123 [2024-10-01 15:55:14.499120] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.123 [2024-10-01 15:55:14.499141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:6127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.123 [2024-10-01 15:55:14.499149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.123 [2024-10-01 15:55:14.510435] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xb849d0) 00:37:35.123 [2024-10-01 15:55:14.510454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.123 [2024-10-01 15:55:14.510462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.123 [2024-10-01 15:55:14.518413] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.123 [2024-10-01 15:55:14.518432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:16382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.123 [2024-10-01 15:55:14.518440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.123 [2024-10-01 15:55:14.528010] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.123 [2024-10-01 15:55:14.528028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:17780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.123 [2024-10-01 15:55:14.528035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.123 [2024-10-01 15:55:14.536749] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.123 [2024-10-01 15:55:14.536769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:8859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.123 [2024-10-01 15:55:14.536780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.123 [2024-10-01 15:55:14.545176] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.123 [2024-10-01 15:55:14.545195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.123 [2024-10-01 15:55:14.545201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.123 [2024-10-01 15:55:14.554090] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.123 [2024-10-01 15:55:14.554108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:8718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.123 [2024-10-01 15:55:14.554115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.123 [2024-10-01 15:55:14.563573] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.123 [2024-10-01 15:55:14.563590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:13104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.123 [2024-10-01 15:55:14.563597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.123 [2024-10-01 15:55:14.572847] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.123 [2024-10-01 15:55:14.572864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:18376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.123 [2024-10-01 15:55:14.572871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:37:35.384 [2024-10-01 15:55:14.582201] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.384 [2024-10-01 15:55:14.582219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.384 [2024-10-01 15:55:14.582225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.384 [2024-10-01 15:55:14.590427] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.384 [2024-10-01 15:55:14.590444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:6157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.384 [2024-10-01 15:55:14.590451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.384 [2024-10-01 15:55:14.599522] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.384 [2024-10-01 15:55:14.599539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:21999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.384 [2024-10-01 15:55:14.599546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.384 [2024-10-01 15:55:14.607960] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.384 [2024-10-01 15:55:14.607978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:16555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.384 [2024-10-01 15:55:14.607984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.384 [2024-10-01 15:55:14.617638] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.384 [2024-10-01 15:55:14.617659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:6315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.384 [2024-10-01 15:55:14.617666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.384 [2024-10-01 15:55:14.625400] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.384 [2024-10-01 15:55:14.625417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:18087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.384 [2024-10-01 15:55:14.625424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.384 [2024-10-01 15:55:14.634554] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.384 [2024-10-01 15:55:14.634571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:11819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.384 [2024-10-01 15:55:14.634578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.384 [2024-10-01 15:55:14.644050] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.384 [2024-10-01 15:55:14.644068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:21227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.384 [2024-10-01 15:55:14.644074] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.384 [2024-10-01 15:55:14.654712] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.385 [2024-10-01 15:55:14.654729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:15072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.385 [2024-10-01 15:55:14.654735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.385 [2024-10-01 15:55:14.662978] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.385 [2024-10-01 15:55:14.662996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:17913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.385 [2024-10-01 15:55:14.663002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.385 [2024-10-01 15:55:14.672073] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.385 [2024-10-01 15:55:14.672091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.385 [2024-10-01 15:55:14.672099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.385 [2024-10-01 15:55:14.679869] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.385 [2024-10-01 15:55:14.679886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17052 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:37:35.385 [2024-10-01 15:55:14.679897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.385 [2024-10-01 15:55:14.689345] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.385 [2024-10-01 15:55:14.689362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:10283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.385 [2024-10-01 15:55:14.689369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.385 [2024-10-01 15:55:14.698186] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.385 [2024-10-01 15:55:14.698203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:5011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.385 [2024-10-01 15:55:14.698209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.385 [2024-10-01 15:55:14.706372] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.385 [2024-10-01 15:55:14.706390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:25003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.385 [2024-10-01 15:55:14.706396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.385 [2024-10-01 15:55:14.717728] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.385 [2024-10-01 15:55:14.717746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:98 nsid:1 lba:14404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.385 [2024-10-01 15:55:14.717752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.385 [2024-10-01 15:55:14.729623] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.385 [2024-10-01 15:55:14.729640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:18843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.385 [2024-10-01 15:55:14.729647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.385 [2024-10-01 15:55:14.738639] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.385 [2024-10-01 15:55:14.738657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:23663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.385 [2024-10-01 15:55:14.738663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.385 [2024-10-01 15:55:14.746610] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.385 [2024-10-01 15:55:14.746630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:22676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.385 [2024-10-01 15:55:14.746638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.385 [2024-10-01 15:55:14.755452] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.385 [2024-10-01 15:55:14.755469] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:17113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.385 [2024-10-01 15:55:14.755475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.385 [2024-10-01 15:55:14.764840] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.385 [2024-10-01 15:55:14.764857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:16946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.385 [2024-10-01 15:55:14.764864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.385 [2024-10-01 15:55:14.773710] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.385 [2024-10-01 15:55:14.773731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:25341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.385 [2024-10-01 15:55:14.773738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.385 [2024-10-01 15:55:14.781756] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.385 [2024-10-01 15:55:14.781774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:20542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.385 [2024-10-01 15:55:14.781780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.385 [2024-10-01 15:55:14.791036] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xb849d0) 00:37:35.385 [2024-10-01 15:55:14.791053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:18879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.385 [2024-10-01 15:55:14.791060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.385 [2024-10-01 15:55:14.800049] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.385 [2024-10-01 15:55:14.800067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:11490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.385 [2024-10-01 15:55:14.800073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.385 [2024-10-01 15:55:14.810100] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.385 [2024-10-01 15:55:14.810117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:17689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.385 [2024-10-01 15:55:14.810124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.385 [2024-10-01 15:55:14.817564] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.385 [2024-10-01 15:55:14.817581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.385 [2024-10-01 15:55:14.817588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.385 [2024-10-01 15:55:14.827302] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.385 [2024-10-01 15:55:14.827319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:9933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.385 [2024-10-01 15:55:14.827325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.385 [2024-10-01 15:55:14.836411] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.385 [2024-10-01 15:55:14.836428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:25229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.385 [2024-10-01 15:55:14.836434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.648 [2024-10-01 15:55:14.845010] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.648 [2024-10-01 15:55:14.845027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:7698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.648 [2024-10-01 15:55:14.845033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.648 [2024-10-01 15:55:14.854230] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.648 [2024-10-01 15:55:14.854248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:7024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.648 [2024-10-01 15:55:14.854254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:37:35.648 [2024-10-01 15:55:14.863710] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.648 [2024-10-01 15:55:14.863727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:10313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.648 [2024-10-01 15:55:14.863734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.648 [2024-10-01 15:55:14.872595] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.648 [2024-10-01 15:55:14.872612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:24580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.648 [2024-10-01 15:55:14.872618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.648 [2024-10-01 15:55:14.881261] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.648 [2024-10-01 15:55:14.881278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:7332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.648 [2024-10-01 15:55:14.881284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.648 [2024-10-01 15:55:14.889810] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.648 [2024-10-01 15:55:14.889826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:6682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.648 [2024-10-01 15:55:14.889833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.648 [2024-10-01 15:55:14.898518] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.648 [2024-10-01 15:55:14.898535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:25318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.648 [2024-10-01 15:55:14.898542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.648 [2024-10-01 15:55:14.907965] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.648 [2024-10-01 15:55:14.907982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:1031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.648 [2024-10-01 15:55:14.907988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.648 [2024-10-01 15:55:14.916181] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.648 [2024-10-01 15:55:14.916197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:6398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.648 [2024-10-01 15:55:14.916204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.648 [2024-10-01 15:55:14.925800] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.648 [2024-10-01 15:55:14.925817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:14284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.648 [2024-10-01 15:55:14.925828] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.648 [2024-10-01 15:55:14.933704] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.648 [2024-10-01 15:55:14.933721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:23499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.648 [2024-10-01 15:55:14.933727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.648 [2024-10-01 15:55:14.942855] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.648 [2024-10-01 15:55:14.942872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:10570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.648 [2024-10-01 15:55:14.942878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.648 [2024-10-01 15:55:14.952311] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.649 [2024-10-01 15:55:14.952328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.649 [2024-10-01 15:55:14.952334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.649 [2024-10-01 15:55:14.961267] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.649 [2024-10-01 15:55:14.961284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:25455 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:37:35.649 [2024-10-01 15:55:14.961290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.649 [2024-10-01 15:55:14.971217] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.649 [2024-10-01 15:55:14.971234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:4750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.649 [2024-10-01 15:55:14.971240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.649 [2024-10-01 15:55:14.980076] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.649 [2024-10-01 15:55:14.980093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:11373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.649 [2024-10-01 15:55:14.980099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.649 [2024-10-01 15:55:14.988662] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.649 [2024-10-01 15:55:14.988679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:10990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.649 [2024-10-01 15:55:14.988685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.649 [2024-10-01 15:55:14.998507] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.649 [2024-10-01 15:55:14.998524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:72 nsid:1 lba:14558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.649 [2024-10-01 15:55:14.998530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.649 [2024-10-01 15:55:15.008241] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.649 [2024-10-01 15:55:15.008261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:5940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.649 [2024-10-01 15:55:15.008268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.649 [2024-10-01 15:55:15.016561] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.649 [2024-10-01 15:55:15.016578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:23041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.649 [2024-10-01 15:55:15.016584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.649 [2024-10-01 15:55:15.025078] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.649 [2024-10-01 15:55:15.025095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:7695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.649 [2024-10-01 15:55:15.025102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.649 [2024-10-01 15:55:15.034006] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.649 [2024-10-01 15:55:15.034023] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:12002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.649 [2024-10-01 15:55:15.034029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.649 [2024-10-01 15:55:15.043129] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.649 [2024-10-01 15:55:15.043145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:19506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.649 [2024-10-01 15:55:15.043152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.649 [2024-10-01 15:55:15.052539] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.649 [2024-10-01 15:55:15.052556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.649 [2024-10-01 15:55:15.052562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.649 [2024-10-01 15:55:15.060993] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.649 [2024-10-01 15:55:15.061010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:12812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.649 [2024-10-01 15:55:15.061016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.649 [2024-10-01 15:55:15.070398] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xb849d0) 00:37:35.649 [2024-10-01 15:55:15.070415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:23859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.649 [2024-10-01 15:55:15.070421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.649 [2024-10-01 15:55:15.077827] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.649 [2024-10-01 15:55:15.077844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:18542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.649 [2024-10-01 15:55:15.077850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.649 [2024-10-01 15:55:15.088024] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.649 [2024-10-01 15:55:15.088041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:18039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.649 [2024-10-01 15:55:15.088048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.649 [2024-10-01 15:55:15.097925] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.649 [2024-10-01 15:55:15.097942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:18501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.649 [2024-10-01 15:55:15.097948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.918 [2024-10-01 15:55:15.106615] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.918 [2024-10-01 15:55:15.106632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:8828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.918 [2024-10-01 15:55:15.106639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.918 [2024-10-01 15:55:15.115218] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.918 [2024-10-01 15:55:15.115235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.918 [2024-10-01 15:55:15.115241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.918 [2024-10-01 15:55:15.125976] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.918 [2024-10-01 15:55:15.125994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:3115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.918 [2024-10-01 15:55:15.126001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.918 [2024-10-01 15:55:15.135849] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.918 [2024-10-01 15:55:15.135865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.918 [2024-10-01 15:55:15.135872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:37:35.918 [2024-10-01 15:55:15.144622] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.918 [2024-10-01 15:55:15.144639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.918 [2024-10-01 15:55:15.144646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.918 [2024-10-01 15:55:15.154364] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.918 [2024-10-01 15:55:15.154381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:9319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.918 [2024-10-01 15:55:15.154387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.918 [2024-10-01 15:55:15.162378] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.918 [2024-10-01 15:55:15.162395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.918 [2024-10-01 15:55:15.162405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.918 [2024-10-01 15:55:15.171298] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.918 [2024-10-01 15:55:15.171315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:6667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.918 [2024-10-01 15:55:15.171321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.918 [2024-10-01 15:55:15.180802] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.918 [2024-10-01 15:55:15.180818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:18251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.918 [2024-10-01 15:55:15.180825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.918 [2024-10-01 15:55:15.190528] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.918 [2024-10-01 15:55:15.190545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:17951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.918 [2024-10-01 15:55:15.190551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.918 [2024-10-01 15:55:15.198736] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.918 [2024-10-01 15:55:15.198752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:3610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.918 [2024-10-01 15:55:15.198759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.918 [2024-10-01 15:55:15.209033] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.918 [2024-10-01 15:55:15.209050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:18309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.918 [2024-10-01 15:55:15.209057] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.918 [2024-10-01 15:55:15.217381] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.918 [2024-10-01 15:55:15.217398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:13524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.918 [2024-10-01 15:55:15.217405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.918 [2024-10-01 15:55:15.227395] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.918 [2024-10-01 15:55:15.227412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:18186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.918 [2024-10-01 15:55:15.227419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.918 [2024-10-01 15:55:15.235619] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.918 [2024-10-01 15:55:15.235636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:20170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.918 [2024-10-01 15:55:15.235642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.918 [2024-10-01 15:55:15.244523] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.918 [2024-10-01 15:55:15.244544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:24336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:35.918 [2024-10-01 15:55:15.244550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.918 [2024-10-01 15:55:15.254128] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.918 [2024-10-01 15:55:15.254147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:23899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.918 [2024-10-01 15:55:15.254153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.918 [2024-10-01 15:55:15.262765] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.918 [2024-10-01 15:55:15.262783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.919 [2024-10-01 15:55:15.262789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.919 [2024-10-01 15:55:15.271498] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.919 [2024-10-01 15:55:15.271515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:13350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.919 [2024-10-01 15:55:15.271522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.919 [2024-10-01 15:55:15.281320] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.919 [2024-10-01 15:55:15.281337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 
nsid:1 lba:7480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.919 [2024-10-01 15:55:15.281344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.919 [2024-10-01 15:55:15.288497] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.919 [2024-10-01 15:55:15.288514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:57 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.919 [2024-10-01 15:55:15.288520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.919 [2024-10-01 15:55:15.298364] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.919 [2024-10-01 15:55:15.298381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.919 [2024-10-01 15:55:15.298388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.919 [2024-10-01 15:55:15.307870] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.919 [2024-10-01 15:55:15.307886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.919 [2024-10-01 15:55:15.307896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.919 [2024-10-01 15:55:15.317807] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.919 [2024-10-01 15:55:15.317824] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.919 [2024-10-01 15:55:15.317830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.919 [2024-10-01 15:55:15.325318] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.919 [2024-10-01 15:55:15.325335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:1975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.919 [2024-10-01 15:55:15.325341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.919 [2024-10-01 15:55:15.334475] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.919 [2024-10-01 15:55:15.334491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:11 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.919 [2024-10-01 15:55:15.334498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.919 [2024-10-01 15:55:15.343348] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.919 [2024-10-01 15:55:15.343364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:24344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.919 [2024-10-01 15:55:15.343370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.919 [2024-10-01 15:55:15.352667] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 
00:37:35.919 [2024-10-01 15:55:15.352684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:25290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.919 [2024-10-01 15:55:15.352690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.919 [2024-10-01 15:55:15.360420] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.919 [2024-10-01 15:55:15.360437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:16143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.919 [2024-10-01 15:55:15.360443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.919 [2024-10-01 15:55:15.371210] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:35.919 [2024-10-01 15:55:15.371227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:19197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.919 [2024-10-01 15:55:15.371233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.180 [2024-10-01 15:55:15.381249] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.180 [2024-10-01 15:55:15.381267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:14499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.180 [2024-10-01 15:55:15.381273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.180 [2024-10-01 15:55:15.389538] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.180 [2024-10-01 15:55:15.389555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:3198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.180 [2024-10-01 15:55:15.389562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.180 [2024-10-01 15:55:15.399129] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.180 [2024-10-01 15:55:15.399146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:16491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.180 [2024-10-01 15:55:15.399156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.180 [2024-10-01 15:55:15.409130] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.180 [2024-10-01 15:55:15.409146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.180 [2024-10-01 15:55:15.409153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.180 [2024-10-01 15:55:15.417022] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.180 [2024-10-01 15:55:15.417039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:17087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.180 [2024-10-01 15:55:15.417045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:37:36.180 [2024-10-01 15:55:15.426233] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.180 [2024-10-01 15:55:15.426250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:15719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.180 [2024-10-01 15:55:15.426256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.180 [2024-10-01 15:55:15.435283] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.180 [2024-10-01 15:55:15.435299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:24293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.180 [2024-10-01 15:55:15.435305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.180 [2024-10-01 15:55:15.444311] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.180 [2024-10-01 15:55:15.444327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:8997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.180 [2024-10-01 15:55:15.444333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.180 [2024-10-01 15:55:15.452850] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.180 [2024-10-01 15:55:15.452867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:18751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.180 [2024-10-01 15:55:15.452873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.180 [2024-10-01 15:55:15.461303] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.180 [2024-10-01 15:55:15.461319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:16193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.180 [2024-10-01 15:55:15.461326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.180 [2024-10-01 15:55:15.470168] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.180 [2024-10-01 15:55:15.470185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:20612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.180 [2024-10-01 15:55:15.470191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.180 27797.00 IOPS, 108.58 MiB/s [2024-10-01 15:55:15.479592] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.180 [2024-10-01 15:55:15.479609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.180 [2024-10-01 15:55:15.479615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.180 [2024-10-01 15:55:15.489267] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.180 [2024-10-01 15:55:15.489284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.180 
[2024-10-01 15:55:15.489290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.180 [2024-10-01 15:55:15.498639] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.180 [2024-10-01 15:55:15.498655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.180 [2024-10-01 15:55:15.498661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.180 [2024-10-01 15:55:15.507745] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.180 [2024-10-01 15:55:15.507762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:18499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.180 [2024-10-01 15:55:15.507769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.180 [2024-10-01 15:55:15.515751] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.180 [2024-10-01 15:55:15.515768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.180 [2024-10-01 15:55:15.515774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.180 [2024-10-01 15:55:15.525696] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.180 [2024-10-01 15:55:15.525713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12897 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.180 [2024-10-01 15:55:15.525720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.180 [2024-10-01 15:55:15.533814] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.180 [2024-10-01 15:55:15.533831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:18929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.180 [2024-10-01 15:55:15.533837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.180 [2024-10-01 15:55:15.542325] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.180 [2024-10-01 15:55:15.542342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:14349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.180 [2024-10-01 15:55:15.542349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.180 [2024-10-01 15:55:15.550957] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.180 [2024-10-01 15:55:15.550974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:9390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.180 [2024-10-01 15:55:15.550984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.180 [2024-10-01 15:55:15.560001] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.180 [2024-10-01 15:55:15.560018] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:6742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.180 [2024-10-01 15:55:15.560025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.181 [2024-10-01 15:55:15.569223] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.181 [2024-10-01 15:55:15.569240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:12657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.181 [2024-10-01 15:55:15.569246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.181 [2024-10-01 15:55:15.578751] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.181 [2024-10-01 15:55:15.578768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:12985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.181 [2024-10-01 15:55:15.578775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.181 [2024-10-01 15:55:15.586659] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.181 [2024-10-01 15:55:15.586676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.181 [2024-10-01 15:55:15.586683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.181 [2024-10-01 15:55:15.595542] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 
00:37:36.181 [2024-10-01 15:55:15.595560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:23892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.181 [2024-10-01 15:55:15.595566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.181 [2024-10-01 15:55:15.605004] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.181 [2024-10-01 15:55:15.605021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:11065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.181 [2024-10-01 15:55:15.605028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.181 [2024-10-01 15:55:15.613223] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.181 [2024-10-01 15:55:15.613240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:17939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.181 [2024-10-01 15:55:15.613247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.181 [2024-10-01 15:55:15.622279] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.181 [2024-10-01 15:55:15.622296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:1378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.181 [2024-10-01 15:55:15.622302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.181 [2024-10-01 15:55:15.631229] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.181 [2024-10-01 15:55:15.631249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:4318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.181 [2024-10-01 15:55:15.631255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.441 [2024-10-01 15:55:15.640030] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.441 [2024-10-01 15:55:15.640047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:24631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.441 [2024-10-01 15:55:15.640054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.441 [2024-10-01 15:55:15.648853] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.441 [2024-10-01 15:55:15.648870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:3282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.441 [2024-10-01 15:55:15.648876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.441 [2024-10-01 15:55:15.658294] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.441 [2024-10-01 15:55:15.658311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:12395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.441 [2024-10-01 15:55:15.658317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:37:36.441 [2024-10-01 15:55:15.666231] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.441 [2024-10-01 15:55:15.666248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:10853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.441 [2024-10-01 15:55:15.666254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.441 [2024-10-01 15:55:15.676248] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.441 [2024-10-01 15:55:15.676265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:6342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.441 [2024-10-01 15:55:15.676271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.441 [2024-10-01 15:55:15.684793] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.442 [2024-10-01 15:55:15.684810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.442 [2024-10-01 15:55:15.684817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.442 [2024-10-01 15:55:15.693330] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.442 [2024-10-01 15:55:15.693346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:1246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.442 [2024-10-01 15:55:15.693353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.442 [2024-10-01 15:55:15.703105] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.442 [2024-10-01 15:55:15.703122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.442 [2024-10-01 15:55:15.703128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.442 [2024-10-01 15:55:15.711581] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.442 [2024-10-01 15:55:15.711598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.442 [2024-10-01 15:55:15.711604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.442 [2024-10-01 15:55:15.720324] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.442 [2024-10-01 15:55:15.720342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:5145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.442 [2024-10-01 15:55:15.720348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.442 [2024-10-01 15:55:15.729609] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.442 [2024-10-01 15:55:15.729625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:7042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.442 [2024-10-01 15:55:15.729632] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.442 [2024-10-01 15:55:15.738500] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.442 [2024-10-01 15:55:15.738517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:24799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.442 [2024-10-01 15:55:15.738523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.442 [2024-10-01 15:55:15.747511] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.442 [2024-10-01 15:55:15.747529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:9029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.442 [2024-10-01 15:55:15.747535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.442 [2024-10-01 15:55:15.755110] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.442 [2024-10-01 15:55:15.755127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:17603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.442 [2024-10-01 15:55:15.755134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.442 [2024-10-01 15:55:15.764532] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.442 [2024-10-01 15:55:15.764550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:1933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:36.442 [2024-10-01 15:55:15.764556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.442 [2024-10-01 15:55:15.773944] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.442 [2024-10-01 15:55:15.773961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:3686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.442 [2024-10-01 15:55:15.773968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.442 [2024-10-01 15:55:15.783181] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.442 [2024-10-01 15:55:15.783198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.442 [2024-10-01 15:55:15.783208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.442 [2024-10-01 15:55:15.792263] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.442 [2024-10-01 15:55:15.792280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.442 [2024-10-01 15:55:15.792286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.442 [2024-10-01 15:55:15.799951] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.442 [2024-10-01 15:55:15.799968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 
lba:1188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.442 [2024-10-01 15:55:15.799974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.442 [2024-10-01 15:55:15.810087] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.442 [2024-10-01 15:55:15.810105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:19215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.442 [2024-10-01 15:55:15.810111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.442 [2024-10-01 15:55:15.818995] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.442 [2024-10-01 15:55:15.819012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.442 [2024-10-01 15:55:15.819018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.442 [2024-10-01 15:55:15.827906] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.442 [2024-10-01 15:55:15.827923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:16006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.442 [2024-10-01 15:55:15.827929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.442 [2024-10-01 15:55:15.837106] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.442 [2024-10-01 15:55:15.837123] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.442 [2024-10-01 15:55:15.837129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.442 [2024-10-01 15:55:15.846063] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.442 [2024-10-01 15:55:15.846080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.442 [2024-10-01 15:55:15.846086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.442 [2024-10-01 15:55:15.855052] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.442 [2024-10-01 15:55:15.855070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:18074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.442 [2024-10-01 15:55:15.855076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.442 [2024-10-01 15:55:15.862619] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.442 [2024-10-01 15:55:15.862636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:10081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.442 [2024-10-01 15:55:15.862642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.442 [2024-10-01 15:55:15.872050] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 
00:37:36.442 [2024-10-01 15:55:15.872067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:24253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.442 [2024-10-01 15:55:15.872073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.442 [2024-10-01 15:55:15.880671] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.442 [2024-10-01 15:55:15.880688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:15937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.442 [2024-10-01 15:55:15.880695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.442 [2024-10-01 15:55:15.889887] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.442 [2024-10-01 15:55:15.889908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.442 [2024-10-01 15:55:15.889914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.703 [2024-10-01 15:55:15.898798] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.703 [2024-10-01 15:55:15.898815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:8437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.703 [2024-10-01 15:55:15.898821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.703 [2024-10-01 15:55:15.908132] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.703 [2024-10-01 15:55:15.908148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:2872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.703 [2024-10-01 15:55:15.908155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.703 [2024-10-01 15:55:15.917278] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.703 [2024-10-01 15:55:15.917295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:23772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.703 [2024-10-01 15:55:15.917301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.703 [2024-10-01 15:55:15.925511] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.703 [2024-10-01 15:55:15.925529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.703 [2024-10-01 15:55:15.925535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.703 [2024-10-01 15:55:15.934382] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.703 [2024-10-01 15:55:15.934400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:7547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.703 [2024-10-01 15:55:15.934410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:37:36.703 [2024-10-01 15:55:15.943902] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.703 [2024-10-01 15:55:15.943919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:4022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.704 [2024-10-01 15:55:15.943925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.704 [2024-10-01 15:55:15.952152] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.704 [2024-10-01 15:55:15.952169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.704 [2024-10-01 15:55:15.952175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.704 [2024-10-01 15:55:15.960698] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.704 [2024-10-01 15:55:15.960715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:6174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.704 [2024-10-01 15:55:15.960721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.704 [2024-10-01 15:55:15.969985] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.704 [2024-10-01 15:55:15.970002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:25118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.704 [2024-10-01 15:55:15.970009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.704 [2024-10-01 15:55:15.980183] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.704 [2024-10-01 15:55:15.980201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.704 [2024-10-01 15:55:15.980208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.704 [2024-10-01 15:55:15.988812] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.704 [2024-10-01 15:55:15.988829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:24269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.704 [2024-10-01 15:55:15.988835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.704 [2024-10-01 15:55:15.997024] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.704 [2024-10-01 15:55:15.997042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:15991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.704 [2024-10-01 15:55:15.997048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.704 [2024-10-01 15:55:16.006305] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.704 [2024-10-01 15:55:16.006322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.704 [2024-10-01 15:55:16.006329] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.704 [2024-10-01 15:55:16.015143] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.704 [2024-10-01 15:55:16.015164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.704 [2024-10-01 15:55:16.015171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.704 [2024-10-01 15:55:16.025337] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.704 [2024-10-01 15:55:16.025354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:13271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.704 [2024-10-01 15:55:16.025360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.704 [2024-10-01 15:55:16.034323] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.704 [2024-10-01 15:55:16.034340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:19737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.704 [2024-10-01 15:55:16.034346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.704 [2024-10-01 15:55:16.043368] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.704 [2024-10-01 15:55:16.043386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:3849 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:37:36.704 [2024-10-01 15:55:16.043392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.704 [2024-10-01 15:55:16.051854] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.704 [2024-10-01 15:55:16.051872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:19395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.704 [2024-10-01 15:55:16.051878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.704 [2024-10-01 15:55:16.060386] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.704 [2024-10-01 15:55:16.060403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.704 [2024-10-01 15:55:16.060409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.704 [2024-10-01 15:55:16.069070] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.704 [2024-10-01 15:55:16.069086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:12850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.704 [2024-10-01 15:55:16.069093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.704 [2024-10-01 15:55:16.078350] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.704 [2024-10-01 15:55:16.078367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:33 nsid:1 lba:11863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.704 [2024-10-01 15:55:16.078374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.704 [2024-10-01 15:55:16.087392] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.704 [2024-10-01 15:55:16.087409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:18689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.704 [2024-10-01 15:55:16.087415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.704 [2024-10-01 15:55:16.096393] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.704 [2024-10-01 15:55:16.096410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:15316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.704 [2024-10-01 15:55:16.096417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.704 [2024-10-01 15:55:16.104639] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.704 [2024-10-01 15:55:16.104656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:9056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.704 [2024-10-01 15:55:16.104662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.704 [2024-10-01 15:55:16.114313] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.704 [2024-10-01 15:55:16.114330] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:11206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.704 [2024-10-01 15:55:16.114337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.704 [2024-10-01 15:55:16.124199] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.704 [2024-10-01 15:55:16.124215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:11723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.704 [2024-10-01 15:55:16.124222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.704 [2024-10-01 15:55:16.131811] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.704 [2024-10-01 15:55:16.131827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.704 [2024-10-01 15:55:16.131834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.704 [2024-10-01 15:55:16.140490] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.704 [2024-10-01 15:55:16.140507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.705 [2024-10-01 15:55:16.140513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.705 [2024-10-01 15:55:16.150024] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xb849d0) 00:37:36.705 [2024-10-01 15:55:16.150041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.705 [2024-10-01 15:55:16.150047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.965 [2024-10-01 15:55:16.159085] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.965 [2024-10-01 15:55:16.159103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:19496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.965 [2024-10-01 15:55:16.159110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.965 [2024-10-01 15:55:16.167586] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.965 [2024-10-01 15:55:16.167603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:13315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.965 [2024-10-01 15:55:16.167613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.965 [2024-10-01 15:55:16.177378] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.965 [2024-10-01 15:55:16.177395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:22944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.965 [2024-10-01 15:55:16.177402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.965 [2024-10-01 15:55:16.186794] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.965 [2024-10-01 15:55:16.186810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.965 [2024-10-01 15:55:16.186817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.965 [2024-10-01 15:55:16.194030] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.965 [2024-10-01 15:55:16.194047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.965 [2024-10-01 15:55:16.194053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.965 [2024-10-01 15:55:16.203464] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.965 [2024-10-01 15:55:16.203481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.965 [2024-10-01 15:55:16.203487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.965 [2024-10-01 15:55:16.212754] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.965 [2024-10-01 15:55:16.212771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:1309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.965 [2024-10-01 15:55:16.212777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:37:36.965 [2024-10-01 15:55:16.222035] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.965 [2024-10-01 15:55:16.222053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:21516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.965 [2024-10-01 15:55:16.222059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.965 [2024-10-01 15:55:16.230385] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.965 [2024-10-01 15:55:16.230402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:13282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.965 [2024-10-01 15:55:16.230409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.965 [2024-10-01 15:55:16.238987] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.965 [2024-10-01 15:55:16.239004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.965 [2024-10-01 15:55:16.239011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.965 [2024-10-01 15:55:16.247976] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.965 [2024-10-01 15:55:16.247995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.965 [2024-10-01 15:55:16.248002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.965 [2024-10-01 15:55:16.256596] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.965 [2024-10-01 15:55:16.256614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:4065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.965 [2024-10-01 15:55:16.256622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.965 [2024-10-01 15:55:16.265299] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.965 [2024-10-01 15:55:16.265317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.965 [2024-10-01 15:55:16.265323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.965 [2024-10-01 15:55:16.274475] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.965 [2024-10-01 15:55:16.274493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.965 [2024-10-01 15:55:16.274500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.965 [2024-10-01 15:55:16.285940] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.965 [2024-10-01 15:55:16.285957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:7078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.965 [2024-10-01 15:55:16.285963] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.965 [2024-10-01 15:55:16.295280] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.965 [2024-10-01 15:55:16.295298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:13431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.965 [2024-10-01 15:55:16.295304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.965 [2024-10-01 15:55:16.304209] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.965 [2024-10-01 15:55:16.304226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.965 [2024-10-01 15:55:16.304232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.965 [2024-10-01 15:55:16.315901] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.965 [2024-10-01 15:55:16.315918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.965 [2024-10-01 15:55:16.315924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.965 [2024-10-01 15:55:16.324661] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.965 [2024-10-01 15:55:16.324678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:4027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:36.965 [2024-10-01 15:55:16.324686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.965 [2024-10-01 15:55:16.333746] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.965 [2024-10-01 15:55:16.333762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:20055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.965 [2024-10-01 15:55:16.333770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.965 [2024-10-01 15:55:16.345108] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.965 [2024-10-01 15:55:16.345125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.965 [2024-10-01 15:55:16.345132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.965 [2024-10-01 15:55:16.354317] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.965 [2024-10-01 15:55:16.354334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:22925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.965 [2024-10-01 15:55:16.354340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.965 [2024-10-01 15:55:16.363462] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.965 [2024-10-01 15:55:16.363479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 
nsid:1 lba:7029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.965 [2024-10-01 15:55:16.363485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.965 [2024-10-01 15:55:16.372698] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.965 [2024-10-01 15:55:16.372715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:15099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.965 [2024-10-01 15:55:16.372722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.965 [2024-10-01 15:55:16.381147] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.965 [2024-10-01 15:55:16.381163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:12219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.965 [2024-10-01 15:55:16.381170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.965 [2024-10-01 15:55:16.390938] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.965 [2024-10-01 15:55:16.390955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:24497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.965 [2024-10-01 15:55:16.390961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.965 [2024-10-01 15:55:16.399862] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.965 [2024-10-01 15:55:16.399878] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:25582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.965 [2024-10-01 15:55:16.399885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.965 [2024-10-01 15:55:16.408180] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.965 [2024-10-01 15:55:16.408196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:14029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.965 [2024-10-01 15:55:16.408206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.965 [2024-10-01 15:55:16.416676] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:36.965 [2024-10-01 15:55:16.416692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.965 [2024-10-01 15:55:16.416699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:37.225 [2024-10-01 15:55:16.426162] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:37.225 [2024-10-01 15:55:16.426178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:11696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:37.226 [2024-10-01 15:55:16.426184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:37.226 [2024-10-01 15:55:16.434858] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 
00:37:37.226 [2024-10-01 15:55:16.434874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:11190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:37.226 [2024-10-01 15:55:16.434881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:37.226 [2024-10-01 15:55:16.444105] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:37.226 [2024-10-01 15:55:16.444121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:1830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:37.226 [2024-10-01 15:55:16.444128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:37.226 [2024-10-01 15:55:16.451560] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:37.226 [2024-10-01 15:55:16.451577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:9907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:37.226 [2024-10-01 15:55:16.451583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:37.226 [2024-10-01 15:55:16.461491] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:37.226 [2024-10-01 15:55:16.461508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:37.226 [2024-10-01 15:55:16.461514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:37.226 [2024-10-01 15:55:16.469855] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb849d0) 00:37:37.226 [2024-10-01 15:55:16.469872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:37.226 [2024-10-01 15:55:16.469878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:37.226 28091.00 IOPS, 109.73 MiB/s 00:37:37.226 Latency(us) 00:37:37.226 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:37.226 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:37.226 nvme0n1 : 2.00 28105.56 109.79 0.00 0.00 4549.74 2321.07 15510.19 00:37:37.226 =================================================================================================================== 00:37:37.226 Total : 28105.56 109.79 0.00 0.00 4549.74 2321.07 15510.19 00:37:37.226 { 00:37:37.226 "results": [ 00:37:37.226 { 00:37:37.226 "job": "nvme0n1", 00:37:37.226 "core_mask": "0x2", 00:37:37.226 "workload": "randread", 00:37:37.226 "status": "finished", 00:37:37.226 "queue_depth": 128, 00:37:37.226 "io_size": 4096, 00:37:37.226 "runtime": 2.003518, 00:37:37.226 "iops": 28105.562315886356, 00:37:37.226 "mibps": 109.78735279643108, 00:37:37.226 "io_failed": 0, 00:37:37.226 "io_timeout": 0, 00:37:37.226 "avg_latency_us": 4549.740685964601, 00:37:37.226 "min_latency_us": 2321.0666666666666, 00:37:37.226 "max_latency_us": 15510.186666666666 00:37:37.226 } 00:37:37.226 ], 00:37:37.226 "core_count": 1 00:37:37.226 } 00:37:37.226 15:55:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:37:37.226 15:55:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:37:37.226 15:55:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 
-- # jq -r '.bdevs[0] 00:37:37.226 | .driver_specific 00:37:37.226 | .nvme_error 00:37:37.226 | .status_code 00:37:37.226 | .command_transient_transport_error' 00:37:37.226 15:55:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:37:37.226 15:55:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 220 > 0 )) 00:37:37.226 15:55:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3395918 00:37:37.226 15:55:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3395918 ']' 00:37:37.226 15:55:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3395918 00:37:37.226 15:55:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:37:37.486 15:55:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:37.486 15:55:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3395918 00:37:37.486 15:55:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:37.486 15:55:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:37.486 15:55:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3395918' 00:37:37.486 killing process with pid 3395918 00:37:37.486 15:55:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3395918 00:37:37.486 Received shutdown signal, test time was about 2.000000 seconds 00:37:37.486 00:37:37.486 Latency(us) 00:37:37.486 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:37:37.486 =================================================================================================================== 00:37:37.486 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:37.486 15:55:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3395918 00:37:37.486 15:55:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:37:37.486 15:55:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:37:37.486 15:55:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:37:37.486 15:55:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:37:37.486 15:55:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:37:37.486 15:55:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3396602 00:37:37.486 15:55:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3396602 /var/tmp/bperf.sock 00:37:37.486 15:55:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3396602 ']' 00:37:37.486 15:55:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:37:37.486 15:55:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:37.486 15:55:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:37.486 15:55:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
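The summary line and the results JSON above are internally consistent: at the 4096-byte I/O size, the reported IOPS implies the reported MiB/s. A quick arithmetic check, with the numbers copied from the JSON:

```python
# Values copied from the "results" JSON above.
iops = 28105.562315886356
io_size = 4096  # bytes per I/O, per the job parameters

# Bytes per second converted to MiB/s; 4096 / 1048576 == 1/256.
mibps = iops * io_size / (1024 * 1024)
print(round(mibps, 2))  # matches the reported 109.79 MiB/s
```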
00:37:37.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:37.486 15:55:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:37.486 15:55:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:37.486 [2024-10-01 15:55:16.902757] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:37:37.486 [2024-10-01 15:55:16.902815] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3396602 ] 00:37:37.486 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:37.486 Zero copy mechanism will not be used. 00:37:37.486 [2024-10-01 15:55:16.932574] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
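The `get_transient_errcount` check above pipes `bdev_get_iostat -b nvme0n1` through the jq filter `.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error`, then asserts the count is positive (`(( 220 > 0 ))`). The same key path can be sketched in Python; note the sample document below is hypothetical, shaped only from the jq path in the log, not a verbatim `bdev_get_iostat` reply:

```python
# Hypothetical iostat reply; only the key path is taken from the log's
# jq filter, the surrounding structure is an assumption.
iostat = {
    "bdevs": [
        {
            "name": "nvme0n1",
            "driver_specific": {
                "nvme_error": {
                    "status_code": {"command_transient_transport_error": 220}
                }
            },
        }
    ]
}

# Equivalent of: jq -r '.bdevs[0] | .driver_specific | .nvme_error
#                       | .status_code | .command_transient_transport_error'
count = iostat["bdevs"][0]["driver_specific"]["nvme_error"]["status_code"][
    "command_transient_transport_error"
]
print(count)
```

The test then only requires `count > 0`, i.e. that at least one injected digest corruption surfaced as a transient transport error.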
00:37:37.747 [2024-10-01 15:55:16.978047] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:37.747 [2024-10-01 15:55:17.006271] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:38.318 15:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:38.318 15:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:37:38.318 15:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:38.318 15:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:38.578 15:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:37:38.578 15:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:38.578 15:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:38.578 15:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:38.578 15:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:38.579 15:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:38.839 nvme0n1 00:37:38.839 15:55:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:37:38.839 15:55:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:38.839 15:55:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:38.839 15:55:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:38.839 15:55:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:37:38.839 15:55:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:38.839 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:38.839 Zero copy mechanism will not be used. 00:37:38.839 Running I/O for 2 seconds... 00:37:38.839 [2024-10-01 15:55:18.193070] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:38.839 [2024-10-01 15:55:18.193103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.839 [2024-10-01 15:55:18.193113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:38.839 [2024-10-01 15:55:18.205587] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:38.839 [2024-10-01 15:55:18.205609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.839 [2024-10-01 15:55:18.205616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:38.839 
[2024-10-01 15:55:18.217234] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:38.839 [2024-10-01 15:55:18.217252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.839 [2024-10-01 15:55:18.217260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:38.839 [2024-10-01 15:55:18.227763] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:38.839 [2024-10-01 15:55:18.227780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.839 [2024-10-01 15:55:18.227787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:38.839 [2024-10-01 15:55:18.238779] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:38.839 [2024-10-01 15:55:18.238797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.839 [2024-10-01 15:55:18.238804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:38.839 [2024-10-01 15:55:18.250214] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:38.839 [2024-10-01 15:55:18.250231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.839 [2024-10-01 15:55:18.250238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:38.839 [2024-10-01 15:55:18.260982] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:38.839 [2024-10-01 15:55:18.260999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.839 [2024-10-01 15:55:18.261007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:38.839 [2024-10-01 15:55:18.271556] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:38.839 [2024-10-01 15:55:18.271574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.839 [2024-10-01 15:55:18.271582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:38.839 [2024-10-01 15:55:18.282685] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:38.839 [2024-10-01 15:55:18.282708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.839 [2024-10-01 15:55:18.282715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:39.101 [2024-10-01 15:55:18.295069] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.101 [2024-10-01 15:55:18.295087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.101 [2024-10-01 15:55:18.295093] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:39.101 [2024-10-01 15:55:18.307187] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.101 [2024-10-01 15:55:18.307205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.101 [2024-10-01 15:55:18.307211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:39.101 [2024-10-01 15:55:18.319386] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.101 [2024-10-01 15:55:18.319403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.101 [2024-10-01 15:55:18.319410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.101 [2024-10-01 15:55:18.332539] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.101 [2024-10-01 15:55:18.332557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.101 [2024-10-01 15:55:18.332563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:39.101 [2024-10-01 15:55:18.342212] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.101 [2024-10-01 15:55:18.342229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:37:39.101 [2024-10-01 15:55:18.342235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:39.101 [2024-10-01 15:55:18.353132] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.101 [2024-10-01 15:55:18.353149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.101 [2024-10-01 15:55:18.353155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:39.101 [2024-10-01 15:55:18.364627] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.101 [2024-10-01 15:55:18.364644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.101 [2024-10-01 15:55:18.364651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.101 [2024-10-01 15:55:18.376991] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.101 [2024-10-01 15:55:18.377008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.101 [2024-10-01 15:55:18.377014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:39.101 [2024-10-01 15:55:18.388445] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.101 [2024-10-01 15:55:18.388462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:14 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.101 [2024-10-01 15:55:18.388468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:39.101 [2024-10-01 15:55:18.401784] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.101 [2024-10-01 15:55:18.401801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.101 [2024-10-01 15:55:18.401808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:39.101 [2024-10-01 15:55:18.413552] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.101 [2024-10-01 15:55:18.413569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.101 [2024-10-01 15:55:18.413575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.101 [2024-10-01 15:55:18.425129] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.101 [2024-10-01 15:55:18.425146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.101 [2024-10-01 15:55:18.425153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:39.101 [2024-10-01 15:55:18.438190] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.101 [2024-10-01 15:55:18.438207] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.101 [2024-10-01 15:55:18.438214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:39.101 [2024-10-01 15:55:18.447829] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.101 [2024-10-01 15:55:18.447846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.101 [2024-10-01 15:55:18.447853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:39.101 [2024-10-01 15:55:18.458302] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.101 [2024-10-01 15:55:18.458320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.101 [2024-10-01 15:55:18.458327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.101 [2024-10-01 15:55:18.470383] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.101 [2024-10-01 15:55:18.470401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.101 [2024-10-01 15:55:18.470408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:39.101 [2024-10-01 15:55:18.482364] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x1e475f0) 00:37:39.101 [2024-10-01 15:55:18.482387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.101 [2024-10-01 15:55:18.482394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:39.101 [2024-10-01 15:55:18.494607] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.101 [2024-10-01 15:55:18.494624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.101 [2024-10-01 15:55:18.494630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:39.101 [2024-10-01 15:55:18.507196] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.101 [2024-10-01 15:55:18.507212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.101 [2024-10-01 15:55:18.507219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.102 [2024-10-01 15:55:18.519302] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.102 [2024-10-01 15:55:18.519319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.102 [2024-10-01 15:55:18.519327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:39.102 [2024-10-01 15:55:18.531883] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.102 [2024-10-01 15:55:18.531905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.102 [2024-10-01 15:55:18.531911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:39.102 [2024-10-01 15:55:18.543133] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.102 [2024-10-01 15:55:18.543150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.102 [2024-10-01 15:55:18.543157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:39.362 [2024-10-01 15:55:18.556338] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.362 [2024-10-01 15:55:18.556356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.362 [2024-10-01 15:55:18.556362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.363 [2024-10-01 15:55:18.565607] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.363 [2024-10-01 15:55:18.565625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.363 [2024-10-01 15:55:18.565631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:37:39.363 [2024-10-01 15:55:18.578696] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.363 [2024-10-01 15:55:18.578713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.363 [2024-10-01 15:55:18.578720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:39.363 [2024-10-01 15:55:18.590064] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.363 [2024-10-01 15:55:18.590081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.363 [2024-10-01 15:55:18.590087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:39.363 [2024-10-01 15:55:18.599374] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.363 [2024-10-01 15:55:18.599390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.363 [2024-10-01 15:55:18.599397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.363 [2024-10-01 15:55:18.610089] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.363 [2024-10-01 15:55:18.610106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.363 [2024-10-01 15:55:18.610113] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:39.363 [2024-10-01 15:55:18.620783] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.363 [2024-10-01 15:55:18.620800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.363 [2024-10-01 15:55:18.620807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:39.363 [2024-10-01 15:55:18.632317] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.363 [2024-10-01 15:55:18.632334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.363 [2024-10-01 15:55:18.632340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:39.363 [2024-10-01 15:55:18.642848] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.363 [2024-10-01 15:55:18.642865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.363 [2024-10-01 15:55:18.642872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.363 [2024-10-01 15:55:18.651453] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.363 [2024-10-01 15:55:18.651470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.363 
[2024-10-01 15:55:18.651477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:39.363 [2024-10-01 15:55:18.661033] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.363 [2024-10-01 15:55:18.661051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.363 [2024-10-01 15:55:18.661057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:39.363 [2024-10-01 15:55:18.672712] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.363 [2024-10-01 15:55:18.672730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.363 [2024-10-01 15:55:18.672740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:39.363 [2024-10-01 15:55:18.684005] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.363 [2024-10-01 15:55:18.684022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.363 [2024-10-01 15:55:18.684028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.363 [2024-10-01 15:55:18.695595] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.363 [2024-10-01 15:55:18.695614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6432 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.363 [2024-10-01 15:55:18.695620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:39.363 [2024-10-01 15:55:18.708291] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.363 [2024-10-01 15:55:18.708310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.363 [2024-10-01 15:55:18.708316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:39.363 [2024-10-01 15:55:18.720746] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.363 [2024-10-01 15:55:18.720767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.363 [2024-10-01 15:55:18.720775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:39.363 [2024-10-01 15:55:18.732636] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.363 [2024-10-01 15:55:18.732655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.363 [2024-10-01 15:55:18.732661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.363 [2024-10-01 15:55:18.743587] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.363 [2024-10-01 15:55:18.743606] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.363 [2024-10-01 15:55:18.743613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:39.363 [2024-10-01 15:55:18.754504] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.363 [2024-10-01 15:55:18.754523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.363 [2024-10-01 15:55:18.754530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:39.363 [2024-10-01 15:55:18.765460] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.363 [2024-10-01 15:55:18.765478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.363 [2024-10-01 15:55:18.765484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:39.363 [2024-10-01 15:55:18.776929] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.363 [2024-10-01 15:55:18.776951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.363 [2024-10-01 15:55:18.776958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.363 [2024-10-01 15:55:18.789373] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 
00:37:39.363 [2024-10-01 15:55:18.789391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.363 [2024-10-01 15:55:18.789399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:39.363 [2024-10-01 15:55:18.801365] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.363 [2024-10-01 15:55:18.801383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.363 [2024-10-01 15:55:18.801390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:39.363 [2024-10-01 15:55:18.813094] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.363 [2024-10-01 15:55:18.813112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.363 [2024-10-01 15:55:18.813119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:39.625 [2024-10-01 15:55:18.825614] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.625 [2024-10-01 15:55:18.825633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.625 [2024-10-01 15:55:18.825639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.625 [2024-10-01 15:55:18.837347] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.625 [2024-10-01 15:55:18.837365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.625 [2024-10-01 15:55:18.837372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:39.625 [2024-10-01 15:55:18.850408] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.625 [2024-10-01 15:55:18.850427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.625 [2024-10-01 15:55:18.850433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:39.625 [2024-10-01 15:55:18.863025] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.625 [2024-10-01 15:55:18.863044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.625 [2024-10-01 15:55:18.863050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:39.625 [2024-10-01 15:55:18.873878] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.625 [2024-10-01 15:55:18.873902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.625 [2024-10-01 15:55:18.873909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:37:39.625 [2024-10-01 15:55:18.885106] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.625 [2024-10-01 15:55:18.885124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.625 [2024-10-01 15:55:18.885131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:39.625 [2024-10-01 15:55:18.892145] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.625 [2024-10-01 15:55:18.892164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.626 [2024-10-01 15:55:18.892170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:39.626 [2024-10-01 15:55:18.902263] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.626 [2024-10-01 15:55:18.902281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.626 [2024-10-01 15:55:18.902287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:39.626 [2024-10-01 15:55:18.909153] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.626 [2024-10-01 15:55:18.909171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.626 [2024-10-01 15:55:18.909177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.626 [2024-10-01 15:55:18.920080] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.626 [2024-10-01 15:55:18.920098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.626 [2024-10-01 15:55:18.920105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:39.626 [2024-10-01 15:55:18.931765] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.626 [2024-10-01 15:55:18.931783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.626 [2024-10-01 15:55:18.931789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:39.626 [2024-10-01 15:55:18.942264] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.626 [2024-10-01 15:55:18.942282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.626 [2024-10-01 15:55:18.942288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:39.626 [2024-10-01 15:55:18.952689] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.626 [2024-10-01 15:55:18.952707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.626 [2024-10-01 15:55:18.952714] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.626 [2024-10-01 15:55:18.962859] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.626 [2024-10-01 15:55:18.962877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.626 [2024-10-01 15:55:18.962887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:39.626 [2024-10-01 15:55:18.972506] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.626 [2024-10-01 15:55:18.972525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.626 [2024-10-01 15:55:18.972532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:39.626 [2024-10-01 15:55:18.980582] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.626 [2024-10-01 15:55:18.980600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.626 [2024-10-01 15:55:18.980607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:39.626 [2024-10-01 15:55:18.987415] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.626 [2024-10-01 15:55:18.987434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:37:39.626 [2024-10-01 15:55:18.987441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.626 [2024-10-01 15:55:18.999337] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.626 [2024-10-01 15:55:18.999356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.626 [2024-10-01 15:55:18.999362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:39.626 [2024-10-01 15:55:19.010597] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.626 [2024-10-01 15:55:19.010616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.626 [2024-10-01 15:55:19.010622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:39.626 [2024-10-01 15:55:19.023041] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.626 [2024-10-01 15:55:19.023060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.626 [2024-10-01 15:55:19.023066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:39.626 [2024-10-01 15:55:19.035931] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.626 [2024-10-01 15:55:19.035950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:4 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.626 [2024-10-01 15:55:19.035956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.626 [2024-10-01 15:55:19.048061] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.626 [2024-10-01 15:55:19.048079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.626 [2024-10-01 15:55:19.048085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:39.626 [2024-10-01 15:55:19.056865] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.626 [2024-10-01 15:55:19.056884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.626 [2024-10-01 15:55:19.056891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:39.626 [2024-10-01 15:55:19.067278] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.626 [2024-10-01 15:55:19.067296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.626 [2024-10-01 15:55:19.067303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:39.626 [2024-10-01 15:55:19.078655] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.626 [2024-10-01 15:55:19.078674] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.626 [2024-10-01 15:55:19.078680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.887 [2024-10-01 15:55:19.090470] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.887 [2024-10-01 15:55:19.090488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.887 [2024-10-01 15:55:19.090495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:39.887 [2024-10-01 15:55:19.099365] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.887 [2024-10-01 15:55:19.099383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.887 [2024-10-01 15:55:19.099389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:39.887 [2024-10-01 15:55:19.103934] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.887 [2024-10-01 15:55:19.103952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.887 [2024-10-01 15:55:19.103958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:39.887 [2024-10-01 15:55:19.114578] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1e475f0) 00:37:39.887 [2024-10-01 15:55:19.114597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.887 [2024-10-01 15:55:19.114604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.887 [2024-10-01 15:55:19.125670] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.887 [2024-10-01 15:55:19.125688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.887 [2024-10-01 15:55:19.125695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:39.887 [2024-10-01 15:55:19.129993] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.887 [2024-10-01 15:55:19.130010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.887 [2024-10-01 15:55:19.130020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:39.887 [2024-10-01 15:55:19.139554] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.887 [2024-10-01 15:55:19.139573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.887 [2024-10-01 15:55:19.139579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:39.887 [2024-10-01 15:55:19.143839] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.887 [2024-10-01 15:55:19.143858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.887 [2024-10-01 15:55:19.143864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.887 [2024-10-01 15:55:19.148928] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.887 [2024-10-01 15:55:19.148947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.887 [2024-10-01 15:55:19.148953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:39.887 [2024-10-01 15:55:19.157739] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.887 [2024-10-01 15:55:19.157757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.887 [2024-10-01 15:55:19.157763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:39.887 [2024-10-01 15:55:19.163934] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.887 [2024-10-01 15:55:19.163953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.887 [2024-10-01 15:55:19.163959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:37:39.887 2850.00 IOPS, 356.25 MiB/s [2024-10-01 15:55:19.175470] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.887 [2024-10-01 15:55:19.175489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.887 [2024-10-01 15:55:19.175495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.887 [2024-10-01 15:55:19.184019] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.887 [2024-10-01 15:55:19.184038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.887 [2024-10-01 15:55:19.184044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:39.888 [2024-10-01 15:55:19.189793] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.888 [2024-10-01 15:55:19.189813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.888 [2024-10-01 15:55:19.189819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:39.888 [2024-10-01 15:55:19.194376] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.888 [2024-10-01 15:55:19.194398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.888 [2024-10-01 15:55:19.194404] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:39.888 [2024-10-01 15:55:19.199131] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.888 [2024-10-01 15:55:19.199149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.888 [2024-10-01 15:55:19.199156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.888 [2024-10-01 15:55:19.203785] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.888 [2024-10-01 15:55:19.203804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.888 [2024-10-01 15:55:19.203811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:39.888 [2024-10-01 15:55:19.211435] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.888 [2024-10-01 15:55:19.211453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.888 [2024-10-01 15:55:19.211459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:39.888 [2024-10-01 15:55:19.219255] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:39.888 [2024-10-01 15:55:19.219275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:39.888 [2024-10-01 15:55:19.219281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:37:39.888 [2024-10-01 15:55:19.229680] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:39.888 [2024-10-01 15:55:19.229700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:39.888 [2024-10-01 15:55:19.229708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:39.888 [2024-10-01 15:55:19.237392] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:39.888 [2024-10-01 15:55:19.237411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:39.888 [2024-10-01 15:55:19.237419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:37:39.888 [2024-10-01 15:55:19.244616] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:39.888 [2024-10-01 15:55:19.244635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:39.888 [2024-10-01 15:55:19.244642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:37:39.888 [2024-10-01 15:55:19.250911] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:39.888 [2024-10-01 15:55:19.250929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:39.888 [2024-10-01 15:55:19.250936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:37:39.888 [2024-10-01 15:55:19.256167] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:39.888 [2024-10-01 15:55:19.256185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:39.888 [2024-10-01 15:55:19.256192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:39.888 [2024-10-01 15:55:19.261996] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:39.888 [2024-10-01 15:55:19.262015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:39.888 [2024-10-01 15:55:19.262021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:37:39.888 [2024-10-01 15:55:19.273598] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:39.888 [2024-10-01 15:55:19.273617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:39.888 [2024-10-01 15:55:19.273624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:37:39.888 [2024-10-01 15:55:19.284359] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:39.888 [2024-10-01 15:55:19.284377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:39.888 [2024-10-01 15:55:19.284384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:37:39.888 [2024-10-01 15:55:19.291839] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:39.888 [2024-10-01 15:55:19.291858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:39.888 [2024-10-01 15:55:19.291864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:39.888 [2024-10-01 15:55:19.302347] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:39.888 [2024-10-01 15:55:19.302366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:39.888 [2024-10-01 15:55:19.302372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:37:39.888 [2024-10-01 15:55:19.313110] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:39.888 [2024-10-01 15:55:19.313128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:39.888 [2024-10-01 15:55:19.313135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:37:39.888 [2024-10-01 15:55:19.320971] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:39.888 [2024-10-01 15:55:19.320989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:39.888 [2024-10-01 15:55:19.320996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:37:39.888 [2024-10-01 15:55:19.325977] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:39.888 [2024-10-01 15:55:19.325995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:39.888 [2024-10-01 15:55:19.326005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:39.888 [2024-10-01 15:55:19.333343] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:39.888 [2024-10-01 15:55:19.333361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:39.888 [2024-10-01 15:55:19.333367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:37:39.888 [2024-10-01 15:55:19.337938] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:39.888 [2024-10-01 15:55:19.337960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:39.888 [2024-10-01 15:55:19.337967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:37:40.149 [2024-10-01 15:55:19.348619] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.149 [2024-10-01 15:55:19.348638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.149 [2024-10-01 15:55:19.348644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:37:40.149 [2024-10-01 15:55:19.355684] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.149 [2024-10-01 15:55:19.355702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.149 [2024-10-01 15:55:19.355708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:40.149 [2024-10-01 15:55:19.361597] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.149 [2024-10-01 15:55:19.361615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.149 [2024-10-01 15:55:19.361621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:37:40.149 [2024-10-01 15:55:19.370325] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.149 [2024-10-01 15:55:19.370344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.149 [2024-10-01 15:55:19.370351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:37:40.149 [2024-10-01 15:55:19.379980] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.149 [2024-10-01 15:55:19.379998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.149 [2024-10-01 15:55:19.380004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:37:40.149 [2024-10-01 15:55:19.387762] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.149 [2024-10-01 15:55:19.387780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.149 [2024-10-01 15:55:19.387786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:40.149 [2024-10-01 15:55:19.395220] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.149 [2024-10-01 15:55:19.395237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.149 [2024-10-01 15:55:19.395243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:37:40.149 [2024-10-01 15:55:19.401371] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.149 [2024-10-01 15:55:19.401389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.149 [2024-10-01 15:55:19.401395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:37:40.149 [2024-10-01 15:55:19.403983] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.149 [2024-10-01 15:55:19.404000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.149 [2024-10-01 15:55:19.404006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:37:40.149 [2024-10-01 15:55:19.410345] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.149 [2024-10-01 15:55:19.410363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.149 [2024-10-01 15:55:19.410369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:40.149 [2024-10-01 15:55:19.415857] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.149 [2024-10-01 15:55:19.415875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.149 [2024-10-01 15:55:19.415881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:37:40.149 [2024-10-01 15:55:19.420719] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.149 [2024-10-01 15:55:19.420736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.149 [2024-10-01 15:55:19.420743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:37:40.149 [2024-10-01 15:55:19.426453] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.149 [2024-10-01 15:55:19.426471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.149 [2024-10-01 15:55:19.426478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:37:40.149 [2024-10-01 15:55:19.432813] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.149 [2024-10-01 15:55:19.432831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.149 [2024-10-01 15:55:19.432837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:40.149 [2024-10-01 15:55:19.440302] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.149 [2024-10-01 15:55:19.440320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.149 [2024-10-01 15:55:19.440330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:37:40.149 [2024-10-01 15:55:19.446911] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.149 [2024-10-01 15:55:19.446929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.149 [2024-10-01 15:55:19.446935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:37:40.149 [2024-10-01 15:55:19.453680] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.149 [2024-10-01 15:55:19.453698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.149 [2024-10-01 15:55:19.453705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:37:40.149 [2024-10-01 15:55:19.462885] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.149 [2024-10-01 15:55:19.462910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.149 [2024-10-01 15:55:19.462917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:40.149 [2024-10-01 15:55:19.472626] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.149 [2024-10-01 15:55:19.472645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.149 [2024-10-01 15:55:19.472651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:37:40.149 [2024-10-01 15:55:19.483523] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.149 [2024-10-01 15:55:19.483541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.149 [2024-10-01 15:55:19.483548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:37:40.149 [2024-10-01 15:55:19.491775] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.149 [2024-10-01 15:55:19.491793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.149 [2024-10-01 15:55:19.491799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:37:40.149 [2024-10-01 15:55:19.496134] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.149 [2024-10-01 15:55:19.496152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.149 [2024-10-01 15:55:19.496158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:40.149 [2024-10-01 15:55:19.504126] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.149 [2024-10-01 15:55:19.504144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.149 [2024-10-01 15:55:19.504151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:37:40.149 [2024-10-01 15:55:19.511072] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.149 [2024-10-01 15:55:19.511094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.149 [2024-10-01 15:55:19.511100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:37:40.149 [2024-10-01 15:55:19.520964] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.149 [2024-10-01 15:55:19.520983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.149 [2024-10-01 15:55:19.520989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:37:40.149 [2024-10-01 15:55:19.529477] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.149 [2024-10-01 15:55:19.529495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.149 [2024-10-01 15:55:19.529502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:40.149 [2024-10-01 15:55:19.541254] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.149 [2024-10-01 15:55:19.541273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.149 [2024-10-01 15:55:19.541279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:37:40.149 [2024-10-01 15:55:19.553127] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.149 [2024-10-01 15:55:19.553146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.149 [2024-10-01 15:55:19.553152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:37:40.149 [2024-10-01 15:55:19.565449] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.149 [2024-10-01 15:55:19.565467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.149 [2024-10-01 15:55:19.565474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:37:40.149 [2024-10-01 15:55:19.571123] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.149 [2024-10-01 15:55:19.571141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.149 [2024-10-01 15:55:19.571148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:40.149 [2024-10-01 15:55:19.577157] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.149 [2024-10-01 15:55:19.577176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.149 [2024-10-01 15:55:19.577182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:37:40.149 [2024-10-01 15:55:19.585408] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.149 [2024-10-01 15:55:19.585427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.149 [2024-10-01 15:55:19.585434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:37:40.149 [2024-10-01 15:55:19.591166] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.149 [2024-10-01 15:55:19.591185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.149 [2024-10-01 15:55:19.591191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:37:40.149 [2024-10-01 15:55:19.598051] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.149 [2024-10-01 15:55:19.598069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.149 [2024-10-01 15:55:19.598076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:40.412 [2024-10-01 15:55:19.605813] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.412 [2024-10-01 15:55:19.605831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.412 [2024-10-01 15:55:19.605837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:37:40.412 [2024-10-01 15:55:19.613937] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.412 [2024-10-01 15:55:19.613955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.412 [2024-10-01 15:55:19.613961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:37:40.412 [2024-10-01 15:55:19.623093] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.412 [2024-10-01 15:55:19.623111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.412 [2024-10-01 15:55:19.623118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:37:40.412 [2024-10-01 15:55:19.630561] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.412 [2024-10-01 15:55:19.630580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.412 [2024-10-01 15:55:19.630586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:40.412 [2024-10-01 15:55:19.635670] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.412 [2024-10-01 15:55:19.635688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.412 [2024-10-01 15:55:19.635695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:37:40.412 [2024-10-01 15:55:19.642799] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.412 [2024-10-01 15:55:19.642818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.412 [2024-10-01 15:55:19.642824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:37:40.412 [2024-10-01 15:55:19.653073] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.412 [2024-10-01 15:55:19.653091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.412 [2024-10-01 15:55:19.653104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:37:40.412 [2024-10-01 15:55:19.660768] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.412 [2024-10-01 15:55:19.660787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.412 [2024-10-01 15:55:19.660793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:40.412 [2024-10-01 15:55:19.668796] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.412 [2024-10-01 15:55:19.668814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.412 [2024-10-01 15:55:19.668820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:37:40.412 [2024-10-01 15:55:19.676371] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.412 [2024-10-01 15:55:19.676389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.412 [2024-10-01 15:55:19.676396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:37:40.412 [2024-10-01 15:55:19.681179] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.412 [2024-10-01 15:55:19.681197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.412 [2024-10-01 15:55:19.681203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:37:40.412 [2024-10-01 15:55:19.685919] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.412 [2024-10-01 15:55:19.685937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.412 [2024-10-01 15:55:19.685944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:40.412 [2024-10-01 15:55:19.695745] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.412 [2024-10-01 15:55:19.695762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.412 [2024-10-01 15:55:19.695769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:37:40.412 [2024-10-01 15:55:19.701942] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.412 [2024-10-01 15:55:19.701960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.412 [2024-10-01 15:55:19.701966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:37:40.412 [2024-10-01 15:55:19.710564] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.412 [2024-10-01 15:55:19.710581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.412 [2024-10-01 15:55:19.710588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:37:40.412 [2024-10-01 15:55:19.714998] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.412 [2024-10-01 15:55:19.715021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.412 [2024-10-01 15:55:19.715027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:40.412 [2024-10-01 15:55:19.721612] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.412 [2024-10-01 15:55:19.721631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.412 [2024-10-01 15:55:19.721637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:37:40.412 [2024-10-01 15:55:19.732339] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.412 [2024-10-01 15:55:19.732357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.412 [2024-10-01 15:55:19.732364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:37:40.412 [2024-10-01 15:55:19.742331] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.412 [2024-10-01 15:55:19.742352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.412 [2024-10-01 15:55:19.742360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:37:40.412 [2024-10-01 15:55:19.749310] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.412 [2024-10-01 15:55:19.749328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.412 [2024-10-01 15:55:19.749335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:40.412 [2024-10-01 15:55:19.759383] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.412 [2024-10-01 15:55:19.759402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.412 [2024-10-01 15:55:19.759408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:37:40.412 [2024-10-01 15:55:19.764132] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.412 [2024-10-01 15:55:19.764150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.412 [2024-10-01 15:55:19.764157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:37:40.412 [2024-10-01 15:55:19.770445] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.412 [2024-10-01 15:55:19.770464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.412 [2024-10-01 15:55:19.770470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:37:40.412 [2024-10-01 15:55:19.775035] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.413 [2024-10-01 15:55:19.775052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.413 [2024-10-01 15:55:19.775059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:40.413 [2024-10-01 15:55:19.782655] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.413 [2024-10-01 15:55:19.782674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.413 [2024-10-01 15:55:19.782680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:37:40.413 [2024-10-01 15:55:19.787969] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.413 [2024-10-01 15:55:19.787987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.413 [2024-10-01 15:55:19.787994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:37:40.413 [2024-10-01 15:55:19.799067] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.413 [2024-10-01 15:55:19.799086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.413 [2024-10-01 15:55:19.799092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:37:40.413 [2024-10-01 15:55:19.808980] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.413 [2024-10-01 15:55:19.808998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.413 [2024-10-01 15:55:19.809005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:40.413 [2024-10-01 15:55:19.817510] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.413 [2024-10-01 15:55:19.817529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.413 [2024-10-01 15:55:19.817535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:37:40.413 [2024-10-01 15:55:19.821880] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.413 [2024-10-01 15:55:19.821904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.413 [2024-10-01 15:55:19.821911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:37:40.413 [2024-10-01 15:55:19.831961] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.413 [2024-10-01 15:55:19.831979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.413 [2024-10-01 15:55:19.831985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:37:40.413 [2024-10-01 15:55:19.837599] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0)
00:37:40.413 [2024-10-01 15:55:19.837618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.413 [2024-10-01 15:55:19.837624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.413 [2024-10-01 15:55:19.849278] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:40.413 [2024-10-01 15:55:19.849298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.413 [2024-10-01 15:55:19.849307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:40.413 [2024-10-01 15:55:19.857637] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:40.413 [2024-10-01 15:55:19.857655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.413 [2024-10-01 15:55:19.857661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:40.676 [2024-10-01 15:55:19.866674] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:40.676 [2024-10-01 15:55:19.866693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.676 [2024-10-01 15:55:19.866699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:40.676 [2024-10-01 15:55:19.871741] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:40.676 [2024-10-01 15:55:19.871760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.676 [2024-10-01 15:55:19.871766] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.676 [2024-10-01 15:55:19.879703] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:40.676 [2024-10-01 15:55:19.879721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.676 [2024-10-01 15:55:19.879727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:40.676 [2024-10-01 15:55:19.887859] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:40.676 [2024-10-01 15:55:19.887878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.676 [2024-10-01 15:55:19.887884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:40.676 [2024-10-01 15:55:19.894879] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:40.676 [2024-10-01 15:55:19.894902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.676 [2024-10-01 15:55:19.894908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:40.676 [2024-10-01 15:55:19.901443] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:40.676 [2024-10-01 15:55:19.901462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:37:40.676 [2024-10-01 15:55:19.901469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.676 [2024-10-01 15:55:19.912963] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:40.676 [2024-10-01 15:55:19.912981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.676 [2024-10-01 15:55:19.912988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:40.676 [2024-10-01 15:55:19.921130] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:40.676 [2024-10-01 15:55:19.921150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.676 [2024-10-01 15:55:19.921156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:40.676 [2024-10-01 15:55:19.930173] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:40.676 [2024-10-01 15:55:19.930192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.676 [2024-10-01 15:55:19.930198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:40.676 [2024-10-01 15:55:19.934027] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:40.676 [2024-10-01 15:55:19.934045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:4 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.676 [2024-10-01 15:55:19.934052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.676 [2024-10-01 15:55:19.942954] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:40.676 [2024-10-01 15:55:19.942973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.676 [2024-10-01 15:55:19.942980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:40.676 [2024-10-01 15:55:19.948820] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:40.676 [2024-10-01 15:55:19.948838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.676 [2024-10-01 15:55:19.948844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:40.676 [2024-10-01 15:55:19.954557] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:40.676 [2024-10-01 15:55:19.954575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.676 [2024-10-01 15:55:19.954581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:40.676 [2024-10-01 15:55:19.958899] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:40.676 [2024-10-01 15:55:19.958917] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.676 [2024-10-01 15:55:19.958923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.676 [2024-10-01 15:55:19.961193] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:40.676 [2024-10-01 15:55:19.961210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.676 [2024-10-01 15:55:19.961217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:40.676 [2024-10-01 15:55:19.971529] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:40.676 [2024-10-01 15:55:19.971546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.677 [2024-10-01 15:55:19.971555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:40.677 [2024-10-01 15:55:19.978408] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:40.677 [2024-10-01 15:55:19.978427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.677 [2024-10-01 15:55:19.978433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:40.677 [2024-10-01 15:55:19.983233] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1e475f0) 00:37:40.677 [2024-10-01 15:55:19.983251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.677 [2024-10-01 15:55:19.983257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.677 [2024-10-01 15:55:19.992685] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:40.677 [2024-10-01 15:55:19.992704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.677 [2024-10-01 15:55:19.992711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:40.677 [2024-10-01 15:55:20.002230] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:40.677 [2024-10-01 15:55:20.002251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.677 [2024-10-01 15:55:20.002259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:40.677 [2024-10-01 15:55:20.009273] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:40.677 [2024-10-01 15:55:20.009292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.677 [2024-10-01 15:55:20.009298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:40.677 [2024-10-01 15:55:20.013944] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:40.677 [2024-10-01 15:55:20.013963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.677 [2024-10-01 15:55:20.013970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.677 [2024-10-01 15:55:20.018137] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:40.677 [2024-10-01 15:55:20.018157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.677 [2024-10-01 15:55:20.018166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:40.677 [2024-10-01 15:55:20.028508] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:40.677 [2024-10-01 15:55:20.028528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.677 [2024-10-01 15:55:20.028535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:40.677 [2024-10-01 15:55:20.036274] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:40.677 [2024-10-01 15:55:20.036296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.677 [2024-10-01 15:55:20.036302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:37:40.677 [2024-10-01 15:55:20.042538] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:40.677 [2024-10-01 15:55:20.042557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.677 [2024-10-01 15:55:20.042563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.677 [2024-10-01 15:55:20.047795] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:40.677 [2024-10-01 15:55:20.047815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.677 [2024-10-01 15:55:20.047822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:40.677 [2024-10-01 15:55:20.055773] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:40.677 [2024-10-01 15:55:20.055793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.677 [2024-10-01 15:55:20.055800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:40.677 [2024-10-01 15:55:20.065443] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:40.677 [2024-10-01 15:55:20.065464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.677 [2024-10-01 15:55:20.065471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:40.677 [2024-10-01 15:55:20.071651] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:40.677 [2024-10-01 15:55:20.071670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.677 [2024-10-01 15:55:20.071676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.677 [2024-10-01 15:55:20.077262] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:40.677 [2024-10-01 15:55:20.077281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.677 [2024-10-01 15:55:20.077287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:40.677 [2024-10-01 15:55:20.085522] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:40.677 [2024-10-01 15:55:20.085541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.677 [2024-10-01 15:55:20.085548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:40.677 [2024-10-01 15:55:20.094249] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:40.677 [2024-10-01 15:55:20.094268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.677 [2024-10-01 
15:55:20.094274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:40.677 [2024-10-01 15:55:20.099661] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:40.677 [2024-10-01 15:55:20.099684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.677 [2024-10-01 15:55:20.099693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.677 [2024-10-01 15:55:20.104464] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:40.677 [2024-10-01 15:55:20.104484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.677 [2024-10-01 15:55:20.104491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:40.677 [2024-10-01 15:55:20.108891] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:40.677 [2024-10-01 15:55:20.108924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.677 [2024-10-01 15:55:20.108931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:40.677 [2024-10-01 15:55:20.116779] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:40.677 [2024-10-01 15:55:20.116798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24928 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.677 [2024-10-01 15:55:20.116805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:40.677 [2024-10-01 15:55:20.121538] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:40.677 [2024-10-01 15:55:20.121556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.677 [2024-10-01 15:55:20.121563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.677 [2024-10-01 15:55:20.125869] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:40.677 [2024-10-01 15:55:20.125887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.677 [2024-10-01 15:55:20.125898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:40.939 [2024-10-01 15:55:20.130541] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:40.939 [2024-10-01 15:55:20.130559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.939 [2024-10-01 15:55:20.130566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:40.939 [2024-10-01 15:55:20.135543] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:40.939 [2024-10-01 15:55:20.135561] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.939 [2024-10-01 15:55:20.135568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:40.939 [2024-10-01 15:55:20.144225] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:40.939 [2024-10-01 15:55:20.144243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.939 [2024-10-01 15:55:20.144254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.939 [2024-10-01 15:55:20.147811] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:40.939 [2024-10-01 15:55:20.147830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.939 [2024-10-01 15:55:20.147836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:40.939 [2024-10-01 15:55:20.152073] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:40.939 [2024-10-01 15:55:20.152092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.939 [2024-10-01 15:55:20.152098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:40.939 [2024-10-01 15:55:20.161533] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 
00:37:40.939 [2024-10-01 15:55:20.161552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.939 [2024-10-01 15:55:20.161558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:40.939 [2024-10-01 15:55:20.169619] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:40.939 [2024-10-01 15:55:20.169637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.939 [2024-10-01 15:55:20.169644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.939 [2024-10-01 15:55:20.175030] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e475f0) 00:37:40.939 [2024-10-01 15:55:20.175048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.939 [2024-10-01 15:55:20.175054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:40.939 3553.50 IOPS, 444.19 MiB/s 00:37:40.939 Latency(us) 00:37:40.939 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:40.939 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:37:40.939 nvme0n1 : 2.01 3552.49 444.06 0.00 0.00 4500.77 361.81 15400.96 00:37:40.939 =================================================================================================================== 00:37:40.939 Total : 3552.49 444.06 0.00 0.00 4500.77 361.81 15400.96 00:37:40.939 { 00:37:40.939 "results": [ 00:37:40.939 { 00:37:40.939 "job": "nvme0n1", 
00:37:40.939 "core_mask": "0x2", 00:37:40.939 "workload": "randread", 00:37:40.939 "status": "finished", 00:37:40.939 "queue_depth": 16, 00:37:40.939 "io_size": 131072, 00:37:40.939 "runtime": 2.005072, 00:37:40.939 "iops": 3552.4908831204066, 00:37:40.939 "mibps": 444.0613603900508, 00:37:40.939 "io_failed": 0, 00:37:40.939 "io_timeout": 0, 00:37:40.939 "avg_latency_us": 4500.771712293509, 00:37:40.939 "min_latency_us": 361.81333333333333, 00:37:40.939 "max_latency_us": 15400.96 00:37:40.939 } 00:37:40.939 ], 00:37:40.939 "core_count": 1 00:37:40.939 } 00:37:40.939 15:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:37:40.939 15:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:37:40.939 15:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:37:40.939 | .driver_specific 00:37:40.939 | .nvme_error 00:37:40.939 | .status_code 00:37:40.939 | .command_transient_transport_error' 00:37:40.939 15:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:37:41.200 15:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 229 > 0 )) 00:37:41.200 15:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3396602 00:37:41.200 15:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3396602 ']' 00:37:41.200 15:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3396602 00:37:41.200 15:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:37:41.200 15:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' 
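The `get_transient_errcount` step in the trace above pipes `bdev_get_iostat -b nvme0n1` through the jq filter `.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error`. A minimal Python equivalent is sketched below; the nested payload shape is inferred from that jq path (the field names come from the filter, the surrounding structure and the `name` key are assumptions), and the count of 229 is taken from the `(( 229 > 0 ))` check later in this log:

```python
# Hypothetical shape of the `bdev_get_iostat -b nvme0n1` JSON-RPC reply;
# the key path mirrors the jq filter used by the test script, and the
# count value (229) is copied from the shell trace in this log.
iostat = {
    "bdevs": [
        {
            "name": "nvme0n1",  # assumed key, not shown in the jq path
            "driver_specific": {
                "nvme_error": {
                    "status_code": {
                        "command_transient_transport_error": 229
                    }
                }
            }
        }
    ]
}

# Equivalent of:
#   jq -r '.bdevs[0] | .driver_specific | .nvme_error
#          | .status_code | .command_transient_transport_error'
count = (
    iostat["bdevs"][0]["driver_specific"]["nvme_error"]
    ["status_code"]["command_transient_transport_error"]
)

# The test passes when at least one transient transport error was counted,
# mirroring the shell check `(( 229 > 0 ))`.
assert count > 0
print(count)
```

Each injected digest failure is completed by the target with a transient transport error status rather than an I/O failure, which is why `io_failed` is 0 in the results JSON while this per-status counter is nonzero.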
Linux = Linux ']' 00:37:41.200 15:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3396602 00:37:41.200 15:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:41.200 15:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:41.200 15:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3396602' 00:37:41.200 killing process with pid 3396602 00:37:41.200 15:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3396602 00:37:41.200 Received shutdown signal, test time was about 2.000000 seconds 00:37:41.200 00:37:41.200 Latency(us) 00:37:41.200 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:41.200 =================================================================================================================== 00:37:41.200 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:41.200 15:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3396602 00:37:41.200 15:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:37:41.200 15:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:37:41.200 15:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:37:41.200 15:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:37:41.200 15:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:37:41.200 15:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3397285 00:37:41.200 15:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@60 -- # waitforlisten 3397285 /var/tmp/bperf.sock 00:37:41.200 15:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3397285 ']' 00:37:41.200 15:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:37:41.200 15:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:41.200 15:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:41.200 15:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:41.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:41.200 15:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:41.200 15:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:41.200 [2024-10-01 15:55:20.631313] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:37:41.200 [2024-10-01 15:55:20.631371] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3397285 ] 00:37:41.462 [2024-10-01 15:55:20.661744] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
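The `get_transient_errcount` check recorded above pipes `bdev_get_iostat` output through a jq filter and compares the result against zero (`(( 229 > 0 ))`). The filter can be exercised standalone against a sample payload; note the JSON shape below is an assumption constructed to match the fields the logged filter reads, not a verbatim RPC response:

```shell
#!/bin/sh
# Sample bdev_get_iostat-style payload (shape assumed for illustration only);
# the nesting mirrors the fields the host/digest.sh filter dereferences.
payload='{"bdevs":[{"name":"nvme0n1","driver_specific":{"nvme_error":{"status_code":{"command_transient_transport_error":229}}}}]}'

# Same jq filter as logged by get_transient_errcount: extract the
# COMMAND TRANSIENT TRANSPORT ERROR counter for the first bdev.
count=$(printf '%s' "$payload" | jq -r '.bdevs[0]
  | .driver_specific
  | .nvme_error
  | .status_code
  | .command_transient_transport_error')

echo "$count"
```

With this sample input the filter yields 229, the same value the test compares in its `(( 229 > 0 ))` success check.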
00:37:41.462 [2024-10-01 15:55:20.707704] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:41.462 [2024-10-01 15:55:20.734201] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:42.034 15:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:42.034 15:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:37:42.034 15:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:42.034 15:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:42.294 15:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:37:42.294 15:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:42.294 15:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:42.294 15:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:42.294 15:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:42.294 15:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:42.554 nvme0n1 00:37:42.554 15:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:37:42.554 15:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:42.554 15:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:42.554 15:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:42.554 15:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:37:42.554 15:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:42.554 Running I/O for 2 seconds... 00:37:42.554 [2024-10-01 15:55:21.983560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e95a0 00:37:42.554 [2024-10-01 15:55:21.984446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:20342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.554 [2024-10-01 15:55:21.984474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:42.554 [2024-10-01 15:55:21.992074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e84c0 00:37:42.554 [2024-10-01 15:55:21.992932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:13406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.554 [2024-10-01 15:55:21.992950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:42.554 [2024-10-01 15:55:22.000556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e73e0 
00:37:42.554 [2024-10-01 15:55:22.001385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:12139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.554 [2024-10-01 15:55:22.001402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:42.814 [2024-10-01 15:55:22.009040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e3060 00:37:42.814 [2024-10-01 15:55:22.009918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:15233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.814 [2024-10-01 15:55:22.009934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:42.814 [2024-10-01 15:55:22.017519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e4140 00:37:42.814 [2024-10-01 15:55:22.018398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:9841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.814 [2024-10-01 15:55:22.018415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:42.814 [2024-10-01 15:55:22.026003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e5220 00:37:42.814 [2024-10-01 15:55:22.026878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:8016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.814 [2024-10-01 15:55:22.026896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:42.814 [2024-10-01 15:55:22.034449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xe9c2d0) with pdu=0x2000198e6300 00:37:42.814 [2024-10-01 15:55:22.035279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:19577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.814 [2024-10-01 15:55:22.035296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:42.814 [2024-10-01 15:55:22.042910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198f31b8 00:37:42.814 [2024-10-01 15:55:22.043764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.814 [2024-10-01 15:55:22.043780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:42.814 [2024-10-01 15:55:22.051368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198f20d8 00:37:42.814 [2024-10-01 15:55:22.052229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:21432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.814 [2024-10-01 15:55:22.052245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:42.815 [2024-10-01 15:55:22.059810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198f0ff8 00:37:42.815 [2024-10-01 15:55:22.060629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:6748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.815 [2024-10-01 15:55:22.060645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:42.815 [2024-10-01 15:55:22.068251] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198eff18 00:37:42.815 [2024-10-01 15:55:22.069109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.815 [2024-10-01 15:55:22.069125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:42.815 [2024-10-01 15:55:22.076707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198eee38 00:37:42.815 [2024-10-01 15:55:22.077569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.815 [2024-10-01 15:55:22.077585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:42.815 [2024-10-01 15:55:22.085141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198edd58 00:37:42.815 [2024-10-01 15:55:22.085963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.815 [2024-10-01 15:55:22.085979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:42.815 [2024-10-01 15:55:22.093567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198ecc78 00:37:42.815 [2024-10-01 15:55:22.094427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:14856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.815 [2024-10-01 15:55:22.094443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005d p:0 m:0 dnr:0 
00:37:42.815 [2024-10-01 15:55:22.101990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198ebb98 00:37:42.815 [2024-10-01 15:55:22.102860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:21734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.815 [2024-10-01 15:55:22.102876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:42.815 [2024-10-01 15:55:22.110405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198eaab8 00:37:42.815 [2024-10-01 15:55:22.111229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.815 [2024-10-01 15:55:22.111244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:42.815 [2024-10-01 15:55:22.118823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e99d8 00:37:42.815 [2024-10-01 15:55:22.119661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:11413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.815 [2024-10-01 15:55:22.119676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:42.815 [2024-10-01 15:55:22.127290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e88f8 00:37:42.815 [2024-10-01 15:55:22.128134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:20108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.815 [2024-10-01 15:55:22.128150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:18 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:42.815 [2024-10-01 15:55:22.135733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e7818 00:37:42.815 [2024-10-01 15:55:22.136607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:21691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.815 [2024-10-01 15:55:22.136623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:42.815 [2024-10-01 15:55:22.144302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e2c28 00:37:42.815 [2024-10-01 15:55:22.145162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:16589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.815 [2024-10-01 15:55:22.145184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:42.815 [2024-10-01 15:55:22.152733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e3d08 00:37:42.815 [2024-10-01 15:55:22.153594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.815 [2024-10-01 15:55:22.153610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:42.815 [2024-10-01 15:55:22.161167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e4de8 00:37:42.815 [2024-10-01 15:55:22.162026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:24016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.815 [2024-10-01 15:55:22.162042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:42.815 [2024-10-01 15:55:22.169623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e5ec8 00:37:42.815 [2024-10-01 15:55:22.170483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:7321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.815 [2024-10-01 15:55:22.170500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:42.815 [2024-10-01 15:55:22.178060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e6fa8 00:37:42.815 [2024-10-01 15:55:22.178928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.815 [2024-10-01 15:55:22.178943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:42.815 [2024-10-01 15:55:22.186484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198f2d80 00:37:42.815 [2024-10-01 15:55:22.187360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:12966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.815 [2024-10-01 15:55:22.187376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:42.815 [2024-10-01 15:55:22.194924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198f1ca0 00:37:42.815 [2024-10-01 15:55:22.195783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.815 [2024-10-01 15:55:22.195799] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:42.815 [2024-10-01 15:55:22.203344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198f0bc0 00:37:42.815 [2024-10-01 15:55:22.204174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.815 [2024-10-01 15:55:22.204190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:42.815 [2024-10-01 15:55:22.211788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198efae0 00:37:42.815 [2024-10-01 15:55:22.212646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:3358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.815 [2024-10-01 15:55:22.212662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:42.815 [2024-10-01 15:55:22.220224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198eea00 00:37:42.815 [2024-10-01 15:55:22.221104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.815 [2024-10-01 15:55:22.221120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:42.815 [2024-10-01 15:55:22.228663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198ed920 00:37:42.815 [2024-10-01 15:55:22.229521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:4263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:37:42.815 [2024-10-01 15:55:22.229537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:42.815 [2024-10-01 15:55:22.237090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198ec840 00:37:42.815 [2024-10-01 15:55:22.237940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.815 [2024-10-01 15:55:22.237958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:42.815 [2024-10-01 15:55:22.245515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198eb760 00:37:42.815 [2024-10-01 15:55:22.246384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.815 [2024-10-01 15:55:22.246402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:42.815 [2024-10-01 15:55:22.253965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198ea680 00:37:42.815 [2024-10-01 15:55:22.254819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.815 [2024-10-01 15:55:22.254835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:42.815 [2024-10-01 15:55:22.262406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e95a0 00:37:42.815 [2024-10-01 15:55:22.263244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 
lba:1655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.815 [2024-10-01 15:55:22.263260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.077 [2024-10-01 15:55:22.270861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e84c0 00:37:43.077 [2024-10-01 15:55:22.271730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.077 [2024-10-01 15:55:22.271745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.077 [2024-10-01 15:55:22.279309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e73e0 00:37:43.077 [2024-10-01 15:55:22.280139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.077 [2024-10-01 15:55:22.280155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.077 [2024-10-01 15:55:22.287725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e3060 00:37:43.077 [2024-10-01 15:55:22.288581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.077 [2024-10-01 15:55:22.288596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.077 [2024-10-01 15:55:22.296168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e4140 00:37:43.077 [2024-10-01 15:55:22.297035] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.077 [2024-10-01 15:55:22.297050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.077 [2024-10-01 15:55:22.304596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e5220 00:37:43.077 [2024-10-01 15:55:22.305469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:16452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.077 [2024-10-01 15:55:22.305485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.077 [2024-10-01 15:55:22.313042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e6300 00:37:43.077 [2024-10-01 15:55:22.313889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.077 [2024-10-01 15:55:22.313908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.077 [2024-10-01 15:55:22.321464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198f31b8 00:37:43.077 [2024-10-01 15:55:22.322320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.077 [2024-10-01 15:55:22.322336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.077 [2024-10-01 15:55:22.329897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198f20d8 00:37:43.077 
[2024-10-01 15:55:22.330758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:13353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.077 [2024-10-01 15:55:22.330774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.077 [2024-10-01 15:55:22.338319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198f0ff8 00:37:43.077 [2024-10-01 15:55:22.339142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:15893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.077 [2024-10-01 15:55:22.339158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.077 [2024-10-01 15:55:22.346743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198eff18 00:37:43.077 [2024-10-01 15:55:22.347605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:3651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.077 [2024-10-01 15:55:22.347620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.077 [2024-10-01 15:55:22.355182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198eee38 00:37:43.077 [2024-10-01 15:55:22.356059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.077 [2024-10-01 15:55:22.356074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.077 [2024-10-01 15:55:22.363623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xe9c2d0) with pdu=0x2000198edd58 00:37:43.077 [2024-10-01 15:55:22.364483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:9834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.077 [2024-10-01 15:55:22.364501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.077 [2024-10-01 15:55:22.372045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198ecc78 00:37:43.077 [2024-10-01 15:55:22.372918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:25331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.077 [2024-10-01 15:55:22.372934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.077 [2024-10-01 15:55:22.380466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198ebb98 00:37:43.077 [2024-10-01 15:55:22.381283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:24432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.077 [2024-10-01 15:55:22.381299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.077 [2024-10-01 15:55:22.388877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198eaab8 00:37:43.077 [2024-10-01 15:55:22.389755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:22325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.077 [2024-10-01 15:55:22.389770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.077 [2024-10-01 15:55:22.397332] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e99d8 00:37:43.077 [2024-10-01 15:55:22.398192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.077 [2024-10-01 15:55:22.398208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.077 [2024-10-01 15:55:22.405768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e88f8 00:37:43.077 [2024-10-01 15:55:22.406640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.077 [2024-10-01 15:55:22.406655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.077 [2024-10-01 15:55:22.414251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e7818 00:37:43.077 [2024-10-01 15:55:22.415095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:14478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.078 [2024-10-01 15:55:22.415111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.078 [2024-10-01 15:55:22.422676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e2c28 00:37:43.078 [2024-10-01 15:55:22.423554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.078 [2024-10-01 15:55:22.423570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005d p:0 m:0 dnr:0 
00:37:43.078 [2024-10-01 15:55:22.431094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e3d08 00:37:43.078 [2024-10-01 15:55:22.431964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:14415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.078 [2024-10-01 15:55:22.431980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.078 [2024-10-01 15:55:22.439511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e4de8 00:37:43.078 [2024-10-01 15:55:22.440351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.078 [2024-10-01 15:55:22.440367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.078 [2024-10-01 15:55:22.447960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e5ec8 00:37:43.078 [2024-10-01 15:55:22.448830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:22814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.078 [2024-10-01 15:55:22.448845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.078 [2024-10-01 15:55:22.456411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e6fa8 00:37:43.078 [2024-10-01 15:55:22.457282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:11316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.078 [2024-10-01 15:55:22.457298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:21 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.078 [2024-10-01 15:55:22.464835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198f2d80 00:37:43.078 [2024-10-01 15:55:22.465697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:6771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.078 [2024-10-01 15:55:22.465713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.078 [2024-10-01 15:55:22.473248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198f1ca0 00:37:43.078 [2024-10-01 15:55:22.474119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.078 [2024-10-01 15:55:22.474135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.078 [2024-10-01 15:55:22.481674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198f0bc0 00:37:43.078 [2024-10-01 15:55:22.482536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.078 [2024-10-01 15:55:22.482551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.078 [2024-10-01 15:55:22.490293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198efae0 00:37:43.078 [2024-10-01 15:55:22.491161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:3221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.078 [2024-10-01 15:55:22.491176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.078 [2024-10-01 15:55:22.498739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198eea00 00:37:43.078 [2024-10-01 15:55:22.499596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:10034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.078 [2024-10-01 15:55:22.499613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.078 [2024-10-01 15:55:22.507174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198ed920 00:37:43.078 [2024-10-01 15:55:22.508025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:12114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.078 [2024-10-01 15:55:22.508041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.078 [2024-10-01 15:55:22.515596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198ec840 00:37:43.078 [2024-10-01 15:55:22.516452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:3586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.078 [2024-10-01 15:55:22.516468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.078 [2024-10-01 15:55:22.524015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198eb760 00:37:43.078 [2024-10-01 15:55:22.524862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:14966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.078 [2024-10-01 15:55:22.524877] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.339 [2024-10-01 15:55:22.532450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198ea680 00:37:43.339 [2024-10-01 15:55:22.533303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:21250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.339 [2024-10-01 15:55:22.533319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.339 [2024-10-01 15:55:22.540883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e95a0 00:37:43.339 [2024-10-01 15:55:22.541760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:19996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.339 [2024-10-01 15:55:22.541776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.339 [2024-10-01 15:55:22.549341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e84c0 00:37:43.339 [2024-10-01 15:55:22.550181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.339 [2024-10-01 15:55:22.550197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.339 [2024-10-01 15:55:22.557767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e73e0 00:37:43.339 [2024-10-01 15:55:22.558624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:25283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:37:43.339 [2024-10-01 15:55:22.558640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.339 [2024-10-01 15:55:22.566178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e3060 00:37:43.339 [2024-10-01 15:55:22.566989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.339 [2024-10-01 15:55:22.567004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.339 [2024-10-01 15:55:22.574612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e4140 00:37:43.339 [2024-10-01 15:55:22.575490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.339 [2024-10-01 15:55:22.575506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.339 [2024-10-01 15:55:22.583053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e5220 00:37:43.339 [2024-10-01 15:55:22.583912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:25557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.339 [2024-10-01 15:55:22.583930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.339 [2024-10-01 15:55:22.591491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e6300 00:37:43.339 [2024-10-01 15:55:22.592349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:5270 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.339 [2024-10-01 15:55:22.592365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.339 [2024-10-01 15:55:22.599921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198f31b8 00:37:43.339 [2024-10-01 15:55:22.600786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.339 [2024-10-01 15:55:22.600802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.339 [2024-10-01 15:55:22.608359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198f20d8 00:37:43.339 [2024-10-01 15:55:22.609220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.339 [2024-10-01 15:55:22.609235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.339 [2024-10-01 15:55:22.616800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198f0ff8 00:37:43.339 [2024-10-01 15:55:22.617662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.339 [2024-10-01 15:55:22.617677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.339 [2024-10-01 15:55:22.625238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198eff18 00:37:43.339 [2024-10-01 15:55:22.626062] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:13115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.339 [2024-10-01 15:55:22.626077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.339 [2024-10-01 15:55:22.633667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198eee38 00:37:43.339 [2024-10-01 15:55:22.634578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:17118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.339 [2024-10-01 15:55:22.634594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.339 [2024-10-01 15:55:22.642105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198edd58 00:37:43.339 [2024-10-01 15:55:22.642973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:6252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.339 [2024-10-01 15:55:22.642989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.339 [2024-10-01 15:55:22.650534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198ecc78 00:37:43.339 [2024-10-01 15:55:22.651363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:21333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.339 [2024-10-01 15:55:22.651378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.339 [2024-10-01 15:55:22.658959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198ebb98 00:37:43.339 [2024-10-01 15:55:22.659821] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.340 [2024-10-01 15:55:22.659836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.340 [2024-10-01 15:55:22.667401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198eaab8 00:37:43.340 [2024-10-01 15:55:22.668257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.340 [2024-10-01 15:55:22.668272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.340 [2024-10-01 15:55:22.675835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e99d8 00:37:43.340 [2024-10-01 15:55:22.676694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.340 [2024-10-01 15:55:22.676710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.340 [2024-10-01 15:55:22.684265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e88f8 00:37:43.340 [2024-10-01 15:55:22.685137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.340 [2024-10-01 15:55:22.685153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.340 [2024-10-01 15:55:22.692700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e7818 
00:37:43.340 [2024-10-01 15:55:22.693574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.340 [2024-10-01 15:55:22.693590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.340 [2024-10-01 15:55:22.701119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e2c28 00:37:43.340 [2024-10-01 15:55:22.701978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:10913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.340 [2024-10-01 15:55:22.701993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.340 [2024-10-01 15:55:22.709528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e3d08 00:37:43.340 [2024-10-01 15:55:22.710389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:5444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.340 [2024-10-01 15:55:22.710405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.340 [2024-10-01 15:55:22.717970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e4de8 00:37:43.340 [2024-10-01 15:55:22.718822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:18493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.340 [2024-10-01 15:55:22.718838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.340 [2024-10-01 15:55:22.726400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0xe9c2d0) with pdu=0x2000198e5ec8 00:37:43.340 [2024-10-01 15:55:22.727252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.340 [2024-10-01 15:55:22.727268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.340 [2024-10-01 15:55:22.734847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e6fa8 00:37:43.340 [2024-10-01 15:55:22.735701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:21805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.340 [2024-10-01 15:55:22.735717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.340 [2024-10-01 15:55:22.743264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198f2d80 00:37:43.340 [2024-10-01 15:55:22.744132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:15494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.340 [2024-10-01 15:55:22.744147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.340 [2024-10-01 15:55:22.751692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198f1ca0 00:37:43.340 [2024-10-01 15:55:22.752575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.340 [2024-10-01 15:55:22.752591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.340 [2024-10-01 15:55:22.760146] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198f0bc0 00:37:43.340 [2024-10-01 15:55:22.761001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.340 [2024-10-01 15:55:22.761017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.340 [2024-10-01 15:55:22.768598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198efae0 00:37:43.340 [2024-10-01 15:55:22.769468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:17744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.340 [2024-10-01 15:55:22.769485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.340 [2024-10-01 15:55:22.777039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198eea00 00:37:43.340 [2024-10-01 15:55:22.777909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.340 [2024-10-01 15:55:22.777924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.340 [2024-10-01 15:55:22.785467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198ed920 00:37:43.340 [2024-10-01 15:55:22.786346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:18761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.340 [2024-10-01 15:55:22.786363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:005d p:0 m:0 dnr:0 
00:37:43.601 [2024-10-01 15:55:22.793939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198ec840 00:37:43.601 [2024-10-01 15:55:22.794811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:23870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.601 [2024-10-01 15:55:22.794826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.601 [2024-10-01 15:55:22.802369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198eb760 00:37:43.601 [2024-10-01 15:55:22.803225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:22785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.601 [2024-10-01 15:55:22.803244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.601 [2024-10-01 15:55:22.810816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198ea680 00:37:43.601 [2024-10-01 15:55:22.811652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.601 [2024-10-01 15:55:22.811669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.601 [2024-10-01 15:55:22.819264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e95a0 00:37:43.601 [2024-10-01 15:55:22.820095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:8506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.601 [2024-10-01 15:55:22.820111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:118 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.601 [2024-10-01 15:55:22.827713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e84c0 00:37:43.602 [2024-10-01 15:55:22.828587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:23684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.602 [2024-10-01 15:55:22.828603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.602 [2024-10-01 15:55:22.836140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e73e0 00:37:43.602 [2024-10-01 15:55:22.836956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.602 [2024-10-01 15:55:22.836973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.602 [2024-10-01 15:55:22.844582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e3060 00:37:43.602 [2024-10-01 15:55:22.845416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.602 [2024-10-01 15:55:22.845431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.602 [2024-10-01 15:55:22.853039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e4140 00:37:43.602 [2024-10-01 15:55:22.853890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.602 [2024-10-01 15:55:22.853908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.602 [2024-10-01 15:55:22.861486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e5220 00:37:43.602 [2024-10-01 15:55:22.862362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:20716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.602 [2024-10-01 15:55:22.862378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.602 [2024-10-01 15:55:22.869952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e6300 00:37:43.602 [2024-10-01 15:55:22.870824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:10589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.602 [2024-10-01 15:55:22.870839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.602 [2024-10-01 15:55:22.878397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198f31b8 00:37:43.602 [2024-10-01 15:55:22.879265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:12978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.602 [2024-10-01 15:55:22.879281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.602 [2024-10-01 15:55:22.886840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198f20d8 00:37:43.602 [2024-10-01 15:55:22.887694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.602 [2024-10-01 15:55:22.887710] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.602 [2024-10-01 15:55:22.895273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198f0ff8 00:37:43.602 [2024-10-01 15:55:22.896163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:18243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.602 [2024-10-01 15:55:22.896178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.602 [2024-10-01 15:55:22.903726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198eff18 00:37:43.602 [2024-10-01 15:55:22.904594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:9831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.602 [2024-10-01 15:55:22.904610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.602 [2024-10-01 15:55:22.912180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198eee38 00:37:43.602 [2024-10-01 15:55:22.913043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.602 [2024-10-01 15:55:22.913058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.602 [2024-10-01 15:55:22.920622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198edd58 00:37:43.602 [2024-10-01 15:55:22.921484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:15806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.602 
[2024-10-01 15:55:22.921500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.602 [2024-10-01 15:55:22.929088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198ecc78 00:37:43.602 [2024-10-01 15:55:22.929954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:25221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.602 [2024-10-01 15:55:22.929970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.602 [2024-10-01 15:55:22.937513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198ebb98 00:37:43.602 [2024-10-01 15:55:22.938377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:24508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.602 [2024-10-01 15:55:22.938393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.602 [2024-10-01 15:55:22.945966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198eaab8 00:37:43.602 [2024-10-01 15:55:22.946836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:8198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.602 [2024-10-01 15:55:22.946852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.602 [2024-10-01 15:55:22.954454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e99d8 00:37:43.602 [2024-10-01 15:55:22.955294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13625 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:37:43.602 [2024-10-01 15:55:22.955310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.602 [2024-10-01 15:55:22.962901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e88f8 00:37:43.602 [2024-10-01 15:55:22.963771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.602 [2024-10-01 15:55:22.963787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.602 29991.00 IOPS, 117.15 MiB/s [2024-10-01 15:55:22.971308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e7818 00:37:43.602 [2024-10-01 15:55:22.972033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.602 [2024-10-01 15:55:22.972048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.602 [2024-10-01 15:55:22.979750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e7818 00:37:43.602 [2024-10-01 15:55:22.980598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:10330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.602 [2024-10-01 15:55:22.980614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.602 [2024-10-01 15:55:22.988173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e7818 00:37:43.602 [2024-10-01 15:55:22.989041] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.602 [2024-10-01 15:55:22.989057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.602 [2024-10-01 15:55:22.996621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e7818 00:37:43.602 [2024-10-01 15:55:22.997481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:17058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.602 [2024-10-01 15:55:22.997496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.602 [2024-10-01 15:55:23.005060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e7818 00:37:43.602 [2024-10-01 15:55:23.005930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.602 [2024-10-01 15:55:23.005947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.602 [2024-10-01 15:55:23.013497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e7818 00:37:43.602 [2024-10-01 15:55:23.014365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:3328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.602 [2024-10-01 15:55:23.014382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.602 [2024-10-01 15:55:23.021945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e7818 00:37:43.602 
[2024-10-01 15:55:23.022793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:10269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.602 [2024-10-01 15:55:23.022812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.602 [2024-10-01 15:55:23.030386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e7818 00:37:43.602 [2024-10-01 15:55:23.031235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:14949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.602 [2024-10-01 15:55:23.031251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.602 [2024-10-01 15:55:23.038818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e7818 00:37:43.602 [2024-10-01 15:55:23.039669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.602 [2024-10-01 15:55:23.039686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.602 [2024-10-01 15:55:23.047254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e7818 00:37:43.602 [2024-10-01 15:55:23.048120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:18649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.602 [2024-10-01 15:55:23.048136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.864 [2024-10-01 15:55:23.055713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xe9c2d0) with pdu=0x2000198e7818 00:37:43.864 [2024-10-01 15:55:23.056583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.864 [2024-10-01 15:55:23.056599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.864 [2024-10-01 15:55:23.064158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e7818 00:37:43.864 [2024-10-01 15:55:23.064998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.864 [2024-10-01 15:55:23.065014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.864 [2024-10-01 15:55:23.072560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e7818 00:37:43.864 [2024-10-01 15:55:23.073408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:4979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.864 [2024-10-01 15:55:23.073425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.864 [2024-10-01 15:55:23.081007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e7818 00:37:43.864 [2024-10-01 15:55:23.081852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:12281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.864 [2024-10-01 15:55:23.081868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.864 [2024-10-01 15:55:23.089442] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e7818 00:37:43.864 [2024-10-01 15:55:23.090311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:9618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.864 [2024-10-01 15:55:23.090327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.864 [2024-10-01 15:55:23.097906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e7818 00:37:43.864 [2024-10-01 15:55:23.098755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.864 [2024-10-01 15:55:23.098773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.864 [2024-10-01 15:55:23.106355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e7818 00:37:43.864 [2024-10-01 15:55:23.107192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:25111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.864 [2024-10-01 15:55:23.107208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.864 [2024-10-01 15:55:23.114829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e7818 00:37:43.864 [2024-10-01 15:55:23.115685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:14757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.864 [2024-10-01 15:55:23.115701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:005d p:0 m:0 dnr:0 
00:37:43.864 [2024-10-01 15:55:23.123263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e7818 00:37:43.864 [2024-10-01 15:55:23.124103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.864 [2024-10-01 15:55:23.124119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.864 [2024-10-01 15:55:23.131700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e7818 00:37:43.864 [2024-10-01 15:55:23.132524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:2014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.864 [2024-10-01 15:55:23.132540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.864 [2024-10-01 15:55:23.140137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e7818 00:37:43.864 [2024-10-01 15:55:23.140974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.864 [2024-10-01 15:55:23.140990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.864 [2024-10-01 15:55:23.148653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e7818 00:37:43.864 [2024-10-01 15:55:23.149526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.864 [2024-10-01 15:55:23.149542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:101 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.864 [2024-10-01 15:55:23.157092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e7818 00:37:43.864 [2024-10-01 15:55:23.157953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:19669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.864 [2024-10-01 15:55:23.157969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.864 [2024-10-01 15:55:23.165514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e7818 00:37:43.864 [2024-10-01 15:55:23.166335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:14534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.864 [2024-10-01 15:55:23.166350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.864 [2024-10-01 15:55:23.173949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e7818 00:37:43.865 [2024-10-01 15:55:23.174809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.865 [2024-10-01 15:55:23.174825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.865 [2024-10-01 15:55:23.182395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e7818 00:37:43.865 [2024-10-01 15:55:23.183260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:22942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.865 [2024-10-01 15:55:23.183276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.865 [2024-10-01 15:55:23.190828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e7818 00:37:43.865 [2024-10-01 15:55:23.191697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:14761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.865 [2024-10-01 15:55:23.191713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.865 [2024-10-01 15:55:23.199269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e7818 00:37:43.865 [2024-10-01 15:55:23.200099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.865 [2024-10-01 15:55:23.200115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.865 [2024-10-01 15:55:23.207693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e7818 00:37:43.865 [2024-10-01 15:55:23.208546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.865 [2024-10-01 15:55:23.208561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.865 [2024-10-01 15:55:23.216123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e7818 00:37:43.865 [2024-10-01 15:55:23.216956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.865 [2024-10-01 15:55:23.216972] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.865 [2024-10-01 15:55:23.224560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e7818 00:37:43.865 [2024-10-01 15:55:23.225425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.865 [2024-10-01 15:55:23.225440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.865 [2024-10-01 15:55:23.233019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e7818 00:37:43.865 [2024-10-01 15:55:23.233837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:10709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.865 [2024-10-01 15:55:23.233853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.865 [2024-10-01 15:55:23.241447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e7818 00:37:43.865 [2024-10-01 15:55:23.242299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:7869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.865 [2024-10-01 15:55:23.242315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.865 [2024-10-01 15:55:23.249876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e7818 00:37:43.865 [2024-10-01 15:55:23.250737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:8859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.865 
[2024-10-01 15:55:23.250752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.865 [2024-10-01 15:55:23.258299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e7818 00:37:43.865 [2024-10-01 15:55:23.259174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.865 [2024-10-01 15:55:23.259191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.865 [2024-10-01 15:55:23.266759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e7818 00:37:43.865 [2024-10-01 15:55:23.267613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.865 [2024-10-01 15:55:23.267629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.865 [2024-10-01 15:55:23.275231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e7818 00:37:43.865 [2024-10-01 15:55:23.276089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:12218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.865 [2024-10-01 15:55:23.276105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.865 [2024-10-01 15:55:23.283679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e7818 00:37:43.865 [2024-10-01 15:55:23.284532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19845 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:37:43.865 [2024-10-01 15:55:23.284548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.865 [2024-10-01 15:55:23.292109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e7818 00:37:43.865 [2024-10-01 15:55:23.292983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.865 [2024-10-01 15:55:23.292999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.865 [2024-10-01 15:55:23.300538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e7818 00:37:43.865 [2024-10-01 15:55:23.301385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:11079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.865 [2024-10-01 15:55:23.301401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.865 [2024-10-01 15:55:23.308967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e7818 00:37:43.865 [2024-10-01 15:55:23.309831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:18502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.865 [2024-10-01 15:55:23.309847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:43.865 [2024-10-01 15:55:23.317413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e7818 00:37:44.126 [2024-10-01 15:55:23.318268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:119 nsid:1 lba:12669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.126 [2024-10-01 15:55:23.318287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:44.126 [2024-10-01 15:55:23.325853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e7818 00:37:44.126 [2024-10-01 15:55:23.326662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.126 [2024-10-01 15:55:23.326677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:44.126 [2024-10-01 15:55:23.334300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e7818 00:37:44.126 [2024-10-01 15:55:23.335186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:13407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.126 [2024-10-01 15:55:23.335202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:44.126 [2024-10-01 15:55:23.342725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e7818 00:37:44.126 [2024-10-01 15:55:23.343576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:3417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.126 [2024-10-01 15:55:23.343592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:44.126 [2024-10-01 15:55:23.351186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e7818 00:37:44.126 [2024-10-01 15:55:23.352039] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.126 [2024-10-01 15:55:23.352055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:44.126 [2024-10-01 15:55:23.359626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e7818 00:37:44.126 [2024-10-01 15:55:23.360498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:21197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.126 [2024-10-01 15:55:23.360515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:44.126 [2024-10-01 15:55:23.368085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e7818 00:37:44.126 [2024-10-01 15:55:23.368943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.126 [2024-10-01 15:55:23.368959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:44.126 [2024-10-01 15:55:23.376558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e7818 00:37:44.126 [2024-10-01 15:55:23.377428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.126 [2024-10-01 15:55:23.377444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:44.126 [2024-10-01 15:55:23.384979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e7818 00:37:44.126 
[2024-10-01 15:55:23.385804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.126 [2024-10-01 15:55:23.385820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:44.126 [2024-10-01 15:55:23.393394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e7818 00:37:44.126 [2024-10-01 15:55:23.394263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.126 [2024-10-01 15:55:23.394279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:44.126 [2024-10-01 15:55:23.401858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e7818 00:37:44.126 [2024-10-01 15:55:23.402696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.126 [2024-10-01 15:55:23.402712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:44.126 [2024-10-01 15:55:23.410302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e7818 00:37:44.126 [2024-10-01 15:55:23.411139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.126 [2024-10-01 15:55:23.411155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:44.126 [2024-10-01 15:55:23.418747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) 
with pdu=0x2000198e7818 00:37:44.126 [2024-10-01 15:55:23.419623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:21883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.126 [2024-10-01 15:55:23.419639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:44.126 [2024-10-01 15:55:23.427197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e7818 00:37:44.126 [2024-10-01 15:55:23.428007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.126 [2024-10-01 15:55:23.428023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:44.126 [2024-10-01 15:55:23.435921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198df988 00:37:44.126 [2024-10-01 15:55:23.436876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:5829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.126 [2024-10-01 15:55:23.436891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:37:44.126 [2024-10-01 15:55:23.444504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e0a68 00:37:44.126 [2024-10-01 15:55:23.445466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.126 [2024-10-01 15:55:23.445482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:44.126 [2024-10-01 15:55:23.452960] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e1b48 00:37:44.126 [2024-10-01 15:55:23.453920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:25431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.126 [2024-10-01 15:55:23.453936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:44.126 [2024-10-01 15:55:23.461392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198f6cc8 00:37:44.126 [2024-10-01 15:55:23.462349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:25112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.126 [2024-10-01 15:55:23.462365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:44.126 [2024-10-01 15:55:23.469838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198f5be8 00:37:44.126 [2024-10-01 15:55:23.470820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:17217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.126 [2024-10-01 15:55:23.470836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:44.126 [2024-10-01 15:55:23.478267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198f4b08 00:37:44.126 [2024-10-01 15:55:23.479237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:14150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.126 [2024-10-01 15:55:23.479253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:44.126 [2024-10-01 15:55:23.486697] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198f3a28 00:37:44.126 [2024-10-01 15:55:23.487662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:1541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.127 [2024-10-01 15:55:23.487679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:44.127 [2024-10-01 15:55:23.495291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198f0788 00:37:44.127 [2024-10-01 15:55:23.496272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:15216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.127 [2024-10-01 15:55:23.496287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:44.127 [2024-10-01 15:55:23.503744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198ef6a8 00:37:44.127 [2024-10-01 15:55:23.504715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.127 [2024-10-01 15:55:23.504730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:44.127 [2024-10-01 15:55:23.512184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198ee5c8 00:37:44.127 [2024-10-01 15:55:23.513147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:10888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.127 [2024-10-01 15:55:23.513163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:006c p:0 m:0 dnr:0 
00:37:44.127 [2024-10-01 15:55:23.520610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198ed4e8 00:37:44.127 [2024-10-01 15:55:23.521580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.127 [2024-10-01 15:55:23.521596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:44.127 [2024-10-01 15:55:23.529036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198ec408 00:37:44.127 [2024-10-01 15:55:23.529983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:15146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.127 [2024-10-01 15:55:23.529999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:44.127 [2024-10-01 15:55:23.537444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198fef90 00:37:44.127 [2024-10-01 15:55:23.538423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:3452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.127 [2024-10-01 15:55:23.538442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:44.127 [2024-10-01 15:55:23.545869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198fdeb0 00:37:44.127 [2024-10-01 15:55:23.546855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.127 [2024-10-01 15:55:23.546871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:57 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:44.127 [2024-10-01 15:55:23.554309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198fcdd0 00:37:44.127 [2024-10-01 15:55:23.555291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.127 [2024-10-01 15:55:23.555306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:44.127 [2024-10-01 15:55:23.562738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198ddc00 00:37:44.127 [2024-10-01 15:55:23.563697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:17781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.127 [2024-10-01 15:55:23.563713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:44.127 [2024-10-01 15:55:23.571171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198dece0 00:37:44.127 [2024-10-01 15:55:23.572104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.127 [2024-10-01 15:55:23.572119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:44.127 [2024-10-01 15:55:23.579601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198dfdc0 00:37:44.388 [2024-10-01 15:55:23.580572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:24350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.388 [2024-10-01 15:55:23.580588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:44.388 [2024-10-01 15:55:23.588038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e0ea0 00:37:44.388 [2024-10-01 15:55:23.588994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:18956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.388 [2024-10-01 15:55:23.589010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:44.388 [2024-10-01 15:55:23.596472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e1f80 00:37:44.388 [2024-10-01 15:55:23.597436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.388 [2024-10-01 15:55:23.597452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:44.388 [2024-10-01 15:55:23.604900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198f6890 00:37:44.388 [2024-10-01 15:55:23.605862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:15032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.388 [2024-10-01 15:55:23.605878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:44.388 [2024-10-01 15:55:23.613307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198f57b0 00:37:44.388 [2024-10-01 15:55:23.614275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:6576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.388 [2024-10-01 15:55:23.614291] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:44.388 [2024-10-01 15:55:23.621718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198f46d0 00:37:44.388 [2024-10-01 15:55:23.622691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:22788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.388 [2024-10-01 15:55:23.622706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:44.388 [2024-10-01 15:55:23.630140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198f1868 00:37:44.388 [2024-10-01 15:55:23.631124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:13292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.388 [2024-10-01 15:55:23.631140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:44.388 [2024-10-01 15:55:23.638577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198f0bc0 00:37:44.388 [2024-10-01 15:55:23.639538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.388 [2024-10-01 15:55:23.639553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:44.388 [2024-10-01 15:55:23.647027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198efae0 00:37:44.388 [2024-10-01 15:55:23.647955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.388 
[2024-10-01 15:55:23.647971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:44.388 [2024-10-01 15:55:23.655469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198eea00 00:37:44.388 [2024-10-01 15:55:23.656432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.388 [2024-10-01 15:55:23.656448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:44.388 [2024-10-01 15:55:23.663931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198ed920 00:37:44.388 [2024-10-01 15:55:23.664897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:17383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.388 [2024-10-01 15:55:23.664913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:44.388 [2024-10-01 15:55:23.672340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198ff3c8 00:37:44.388 [2024-10-01 15:55:23.673324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.388 [2024-10-01 15:55:23.673339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:44.388 [2024-10-01 15:55:23.680764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198fda78 00:37:44.388 [2024-10-01 15:55:23.681747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20010 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:37:44.388 [2024-10-01 15:55:23.681763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:44.388 [2024-10-01 15:55:23.689205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198fd208 00:37:44.388 [2024-10-01 15:55:23.690193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.388 [2024-10-01 15:55:23.690209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:44.388 [2024-10-01 15:55:23.697721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198fc128 00:37:44.388 [2024-10-01 15:55:23.698690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.388 [2024-10-01 15:55:23.698706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:44.388 [2024-10-01 15:55:23.706139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198de8a8 00:37:44.388 [2024-10-01 15:55:23.707097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:9278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.388 [2024-10-01 15:55:23.707113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:44.388 [2024-10-01 15:55:23.714573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198df988 00:37:44.388 [2024-10-01 15:55:23.715545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:22 nsid:1 lba:8561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.389 [2024-10-01 15:55:23.715560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:44.389 [2024-10-01 15:55:23.722995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e0a68 00:37:44.389 [2024-10-01 15:55:23.723978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:5244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.389 [2024-10-01 15:55:23.723993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:44.389 [2024-10-01 15:55:23.731463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e1b48 00:37:44.389 [2024-10-01 15:55:23.732445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:3593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.389 [2024-10-01 15:55:23.732461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:44.389 [2024-10-01 15:55:23.739908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198f6cc8 00:37:44.389 [2024-10-01 15:55:23.740865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:23463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.389 [2024-10-01 15:55:23.740880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:44.389 [2024-10-01 15:55:23.748332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198f5be8 00:37:44.389 [2024-10-01 15:55:23.749299] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:10359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.389 [2024-10-01 15:55:23.749315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:44.389 [2024-10-01 15:55:23.756752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198f4b08 00:37:44.389 [2024-10-01 15:55:23.757725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.389 [2024-10-01 15:55:23.757742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:44.389 [2024-10-01 15:55:23.765184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198f3a28 00:37:44.389 [2024-10-01 15:55:23.766164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.389 [2024-10-01 15:55:23.766181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:44.389 [2024-10-01 15:55:23.773607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198f0788 00:37:44.389 [2024-10-01 15:55:23.774552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:21370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.389 [2024-10-01 15:55:23.774568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:44.389 [2024-10-01 15:55:23.782035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198ef6a8 00:37:44.389 
[2024-10-01 15:55:23.782958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.389 [2024-10-01 15:55:23.782973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:44.389 [2024-10-01 15:55:23.790699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198e5ec8 00:37:44.389 [2024-10-01 15:55:23.791553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.389 [2024-10-01 15:55:23.791569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:37:44.389 [2024-10-01 15:55:23.798608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198fc998 00:37:44.389 [2024-10-01 15:55:23.798957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.389 [2024-10-01 15:55:23.798973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:37:44.389 [2024-10-01 15:55:23.807272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198fc998 00:37:44.389 [2024-10-01 15:55:23.807512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:7569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.389 [2024-10-01 15:55:23.807535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:37:44.389 [2024-10-01 15:55:23.815976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) 
with pdu=0x2000198fc998 00:37:44.389 [2024-10-01 15:55:23.816196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:19732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.389 [2024-10-01 15:55:23.816212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:37:44.389 [2024-10-01 15:55:23.824728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198fc998 00:37:44.389 [2024-10-01 15:55:23.824954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:4782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.389 [2024-10-01 15:55:23.824969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:37:44.389 [2024-10-01 15:55:23.833493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198fc998 00:37:44.389 [2024-10-01 15:55:23.833735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:21846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.389 [2024-10-01 15:55:23.833753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:37:44.649 [2024-10-01 15:55:23.842202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198fc998 00:37:44.649 [2024-10-01 15:55:23.842447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:21497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.649 [2024-10-01 15:55:23.842462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:37:44.649 [2024-10-01 15:55:23.850888] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198fc998 00:37:44.649 [2024-10-01 15:55:23.851133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.649 [2024-10-01 15:55:23.851148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:37:44.649 [2024-10-01 15:55:23.859605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198fc998 00:37:44.649 [2024-10-01 15:55:23.859855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:14725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.649 [2024-10-01 15:55:23.859878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:37:44.649 [2024-10-01 15:55:23.868291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198fc998 00:37:44.649 [2024-10-01 15:55:23.868535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:6276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.649 [2024-10-01 15:55:23.868550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:37:44.649 [2024-10-01 15:55:23.876987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198fc998 00:37:44.649 [2024-10-01 15:55:23.877219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:9964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.649 [2024-10-01 15:55:23.877235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:37:44.649 [2024-10-01 15:55:23.885688] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198fc998 00:37:44.649 [2024-10-01 15:55:23.885922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:17275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.649 [2024-10-01 15:55:23.885937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:37:44.649 [2024-10-01 15:55:23.894422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198fc998 00:37:44.649 [2024-10-01 15:55:23.894663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:16825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.649 [2024-10-01 15:55:23.894678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:37:44.649 [2024-10-01 15:55:23.903111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198fc998 00:37:44.649 [2024-10-01 15:55:23.903331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:18413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.649 [2024-10-01 15:55:23.903345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:37:44.649 [2024-10-01 15:55:23.911829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198fc998 00:37:44.649 [2024-10-01 15:55:23.912033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:19159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.649 [2024-10-01 15:55:23.912048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:007a p:0 m:0 dnr:0 
00:37:44.649 [2024-10-01 15:55:23.920532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198fc998 00:37:44.649 [2024-10-01 15:55:23.920731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.649 [2024-10-01 15:55:23.920746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:37:44.649 [2024-10-01 15:55:23.929208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198fc998 00:37:44.649 [2024-10-01 15:55:23.929434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:20235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.649 [2024-10-01 15:55:23.929448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:37:44.649 [2024-10-01 15:55:23.937903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198fc998 00:37:44.649 [2024-10-01 15:55:23.938142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.649 [2024-10-01 15:55:23.938157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:37:44.649 [2024-10-01 15:55:23.946563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198fc998 00:37:44.649 [2024-10-01 15:55:23.946815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:25174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.649 [2024-10-01 15:55:23.946829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:122 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:37:44.649 [2024-10-01 15:55:23.955296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198fc998 00:37:44.649 [2024-10-01 15:55:23.955536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.649 [2024-10-01 15:55:23.955551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:37:44.649 [2024-10-01 15:55:23.964016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198fc998 00:37:44.649 [2024-10-01 15:55:23.964222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:11887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.649 [2024-10-01 15:55:23.964237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:37:44.649 [2024-10-01 15:55:23.972654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c2d0) with pdu=0x2000198fc998 00:37:44.649 [2024-10-01 15:55:23.973196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:13603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:44.649 [2024-10-01 15:55:23.973212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:37:44.649 30070.50 IOPS, 117.46 MiB/s 00:37:44.649 Latency(us) 00:37:44.649 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:44.649 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:44.649 nvme0n1 : 2.01 30067.74 117.45 0.00 0.00 4249.84 2225.49 13762.56 00:37:44.649 
=================================================================================================================== 00:37:44.649 Total : 30067.74 117.45 0.00 0.00 4249.84 2225.49 13762.56 00:37:44.649 { 00:37:44.649 "results": [ 00:37:44.649 { 00:37:44.649 "job": "nvme0n1", 00:37:44.649 "core_mask": "0x2", 00:37:44.649 "workload": "randwrite", 00:37:44.649 "status": "finished", 00:37:44.649 "queue_depth": 128, 00:37:44.649 "io_size": 4096, 00:37:44.649 "runtime": 2.005738, 00:37:44.649 "iops": 30067.735666373177, 00:37:44.649 "mibps": 117.45209244677022, 00:37:44.649 "io_failed": 0, 00:37:44.649 "io_timeout": 0, 00:37:44.649 "avg_latency_us": 4249.838618646504, 00:37:44.649 "min_latency_us": 2225.4933333333333, 00:37:44.649 "max_latency_us": 13762.56 00:37:44.649 } 00:37:44.649 ], 00:37:44.649 "core_count": 1 00:37:44.649 } 00:37:44.649 15:55:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:37:44.649 15:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:37:44.649 | .driver_specific 00:37:44.649 | .nvme_error 00:37:44.649 | .status_code 00:37:44.649 | .command_transient_transport_error' 00:37:44.649 15:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:37:44.649 15:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:37:44.910 15:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 236 > 0 )) 00:37:44.910 15:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3397285 00:37:44.910 15:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3397285 ']' 00:37:44.910 15:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@954 -- # kill -0 3397285 00:37:44.910 15:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:37:44.910 15:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:44.910 15:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3397285 00:37:44.910 15:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:44.910 15:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:44.910 15:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3397285' 00:37:44.910 killing process with pid 3397285 00:37:44.910 15:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3397285 00:37:44.910 Received shutdown signal, test time was about 2.000000 seconds 00:37:44.910 00:37:44.910 Latency(us) 00:37:44.910 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:44.910 =================================================================================================================== 00:37:44.910 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:44.910 15:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3397285 00:37:45.171 15:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:37:45.171 15:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:37:45.171 15:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:37:45.171 15:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 
00:37:45.171 15:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:37:45.171 15:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3397977 00:37:45.171 15:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3397977 /var/tmp/bperf.sock 00:37:45.171 15:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3397977 ']' 00:37:45.171 15:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:37:45.171 15:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:45.171 15:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:45.171 15:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:45.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:45.171 15:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:45.171 15:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:45.171 [2024-10-01 15:55:24.418802] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
00:37:45.171 [2024-10-01 15:55:24.418860] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3397977 ] 00:37:45.171 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:45.171 Zero copy mechanism will not be used. 00:37:45.171 [2024-10-01 15:55:24.448857] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:37:45.171 [2024-10-01 15:55:24.496649] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:45.171 [2024-10-01 15:55:24.525054] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:46.112 15:55:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:46.112 15:55:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:37:46.112 15:55:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:46.112 15:55:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:46.112 15:55:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:37:46.112 15:55:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:46.112 15:55:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:46.112 15:55:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:37:46.112 15:55:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:46.112 15:55:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:46.372 nvme0n1 00:37:46.372 15:55:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:37:46.372 15:55:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:46.372 15:55:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:46.372 15:55:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:46.372 15:55:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:37:46.372 15:55:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:46.632 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:46.632 Zero copy mechanism will not be used. 00:37:46.632 Running I/O for 2 seconds... 
00:37:46.632 [2024-10-01 15:55:25.853527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:46.632 [2024-10-01 15:55:25.853739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.632 [2024-10-01 15:55:25.853767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:46.632 [2024-10-01 15:55:25.861030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:46.632 [2024-10-01 15:55:25.861240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.632 [2024-10-01 15:55:25.861259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:46.632 [2024-10-01 15:55:25.867082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:46.633 [2024-10-01 15:55:25.867265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.633 [2024-10-01 15:55:25.867282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:46.633 [2024-10-01 15:55:25.874931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:46.633 [2024-10-01 15:55:25.875200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.633 [2024-10-01 15:55:25.875218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:46.633 [2024-10-01 15:55:25.880260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:46.633 [2024-10-01 15:55:25.880439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.633 [2024-10-01 15:55:25.880455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:46.633 [2024-10-01 15:55:25.888879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:46.633 [2024-10-01 15:55:25.889237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.633 [2024-10-01 15:55:25.889255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:46.633 [2024-10-01 15:55:25.898564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:46.633 [2024-10-01 15:55:25.898821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.633 [2024-10-01 15:55:25.898838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:46.633 [2024-10-01 15:55:25.908740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:46.633 [2024-10-01 15:55:25.908996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.633 [2024-10-01 15:55:25.909014] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:46.633 [2024-10-01 15:55:25.915526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:46.633 [2024-10-01 15:55:25.915831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.633 [2024-10-01 15:55:25.915849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:46.633 [2024-10-01 15:55:25.924487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:46.633 [2024-10-01 15:55:25.924553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.633 [2024-10-01 15:55:25.924569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:46.633 [2024-10-01 15:55:25.933711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:46.633 [2024-10-01 15:55:25.933752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.633 [2024-10-01 15:55:25.933768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:46.633 [2024-10-01 15:55:25.941477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:46.633 [2024-10-01 15:55:25.941783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:46.633 [2024-10-01 15:55:25.941798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:46.633 [2024-10-01 15:55:25.950869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:46.633 [2024-10-01 15:55:25.950945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.633 [2024-10-01 15:55:25.950961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:46.633 [2024-10-01 15:55:25.958286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:46.633 [2024-10-01 15:55:25.958336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.633 [2024-10-01 15:55:25.958351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:46.633 [2024-10-01 15:55:25.964835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:46.633 [2024-10-01 15:55:25.964898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.633 [2024-10-01 15:55:25.964914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:46.633 [2024-10-01 15:55:25.970987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:46.633 [2024-10-01 15:55:25.971065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.633 [2024-10-01 15:55:25.971080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:46.633 [2024-10-01 15:55:25.981061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:46.633 [2024-10-01 15:55:25.981131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.633 [2024-10-01 15:55:25.981150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:46.633 [2024-10-01 15:55:25.990505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:46.633 [2024-10-01 15:55:25.990776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.633 [2024-10-01 15:55:25.990792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:46.633 [2024-10-01 15:55:25.997716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:46.633 [2024-10-01 15:55:25.997784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.633 [2024-10-01 15:55:25.997798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:46.633 [2024-10-01 15:55:26.007137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:46.633 [2024-10-01 15:55:26.007401] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.633 [2024-10-01 15:55:26.007416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:46.633 [2024-10-01 15:55:26.016241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:46.633 [2024-10-01 15:55:26.016316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.633 [2024-10-01 15:55:26.016332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:46.633 [2024-10-01 15:55:26.023741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:46.633 [2024-10-01 15:55:26.023787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.633 [2024-10-01 15:55:26.023804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:46.633 [2024-10-01 15:55:26.031200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:46.633 [2024-10-01 15:55:26.031284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.633 [2024-10-01 15:55:26.031300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:46.633 [2024-10-01 15:55:26.037740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 
00:37:46.633 [2024-10-01 15:55:26.037832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.633 [2024-10-01 15:55:26.037847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:46.633 [2024-10-01 15:55:26.045801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:46.633 [2024-10-01 15:55:26.046084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.633 [2024-10-01 15:55:26.046101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:46.633 [2024-10-01 15:55:26.057073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:46.633 [2024-10-01 15:55:26.057322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.633 [2024-10-01 15:55:26.057338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:46.633 [2024-10-01 15:55:26.067809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:46.633 [2024-10-01 15:55:26.068063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.633 [2024-10-01 15:55:26.068080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:46.633 [2024-10-01 15:55:26.078925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:46.633 [2024-10-01 15:55:26.079153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.633 [2024-10-01 15:55:26.079170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:46.894 [2024-10-01 15:55:26.090545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:46.894 [2024-10-01 15:55:26.090871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.894 [2024-10-01 15:55:26.090887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:46.894 [2024-10-01 15:55:26.102391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:46.894 [2024-10-01 15:55:26.102682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.894 [2024-10-01 15:55:26.102698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:46.894 [2024-10-01 15:55:26.114117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:46.894 [2024-10-01 15:55:26.114420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.894 [2024-10-01 15:55:26.114435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:46.894 [2024-10-01 15:55:26.125560] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:46.894 [2024-10-01 15:55:26.125931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.894 [2024-10-01 15:55:26.125948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:46.895 [2024-10-01 15:55:26.136511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:46.895 [2024-10-01 15:55:26.136793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.895 [2024-10-01 15:55:26.136810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:46.895 [2024-10-01 15:55:26.147855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:46.895 [2024-10-01 15:55:26.148070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.895 [2024-10-01 15:55:26.148086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:46.895 [2024-10-01 15:55:26.158752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:46.895 [2024-10-01 15:55:26.158822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.895 [2024-10-01 15:55:26.158837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:37:46.895 [2024-10-01 15:55:26.169069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:46.895 [2024-10-01 15:55:26.169404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.895 [2024-10-01 15:55:26.169420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:46.895 [2024-10-01 15:55:26.179572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:46.895 [2024-10-01 15:55:26.179808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.895 [2024-10-01 15:55:26.179823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:46.895 [2024-10-01 15:55:26.190743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:46.895 [2024-10-01 15:55:26.191043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.895 [2024-10-01 15:55:26.191059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:46.895 [2024-10-01 15:55:26.202283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:46.895 [2024-10-01 15:55:26.202539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.895 [2024-10-01 15:55:26.202554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:46.895 [2024-10-01 15:55:26.213597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:46.895 [2024-10-01 15:55:26.213820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.895 [2024-10-01 15:55:26.213836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:46.895 [2024-10-01 15:55:26.224793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:46.895 [2024-10-01 15:55:26.225042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.895 [2024-10-01 15:55:26.225057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:46.895 [2024-10-01 15:55:26.235508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:46.895 [2024-10-01 15:55:26.235732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.895 [2024-10-01 15:55:26.235747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:46.895 [2024-10-01 15:55:26.244534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:46.895 [2024-10-01 15:55:26.244603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.895 [2024-10-01 15:55:26.244622] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:46.895 [2024-10-01 15:55:26.253779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:46.895 [2024-10-01 15:55:26.253946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.895 [2024-10-01 15:55:26.253961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:46.895 [2024-10-01 15:55:26.263644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:46.895 [2024-10-01 15:55:26.263697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.895 [2024-10-01 15:55:26.263712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:46.895 [2024-10-01 15:55:26.272349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:46.895 [2024-10-01 15:55:26.272518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.895 [2024-10-01 15:55:26.272533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:46.895 [2024-10-01 15:55:26.281247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:46.895 [2024-10-01 15:55:26.281489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:46.895 [2024-10-01 15:55:26.281504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:46.895 [2024-10-01 15:55:26.286290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:46.895 [2024-10-01 15:55:26.286358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.895 [2024-10-01 15:55:26.286374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:46.895 [2024-10-01 15:55:26.293440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:46.895 [2024-10-01 15:55:26.293482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.895 [2024-10-01 15:55:26.293498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:46.895 [2024-10-01 15:55:26.301770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:46.895 [2024-10-01 15:55:26.301841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.895 [2024-10-01 15:55:26.301856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:46.895 [2024-10-01 15:55:26.309398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:46.895 [2024-10-01 15:55:26.309541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.895 [2024-10-01 15:55:26.309556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:46.895 [2024-10-01 15:55:26.317878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:46.895 [2024-10-01 15:55:26.317938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.895 [2024-10-01 15:55:26.317953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:46.895 [2024-10-01 15:55:26.326161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:46.895 [2024-10-01 15:55:26.326219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.895 [2024-10-01 15:55:26.326234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:46.895 [2024-10-01 15:55:26.335008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:46.895 [2024-10-01 15:55:26.335088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.895 [2024-10-01 15:55:26.335103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:46.895 [2024-10-01 15:55:26.344986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:46.895 [2024-10-01 15:55:26.345317] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.895 [2024-10-01 15:55:26.345333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.157 [2024-10-01 15:55:26.353393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.157 [2024-10-01 15:55:26.353460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.157 [2024-10-01 15:55:26.353475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.157 [2024-10-01 15:55:26.363630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.157 [2024-10-01 15:55:26.363689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.157 [2024-10-01 15:55:26.363704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.157 [2024-10-01 15:55:26.371457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.157 [2024-10-01 15:55:26.371745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.157 [2024-10-01 15:55:26.371760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.157 [2024-10-01 15:55:26.380972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 
00:37:47.157 [2024-10-01 15:55:26.381033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.157 [2024-10-01 15:55:26.381050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.157 [2024-10-01 15:55:26.390374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.157 [2024-10-01 15:55:26.390524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.157 [2024-10-01 15:55:26.390539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.157 [2024-10-01 15:55:26.398110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.157 [2024-10-01 15:55:26.398158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.157 [2024-10-01 15:55:26.398173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.157 [2024-10-01 15:55:26.406000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.157 [2024-10-01 15:55:26.406230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.157 [2024-10-01 15:55:26.406247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.157 [2024-10-01 15:55:26.413367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.157 [2024-10-01 15:55:26.413602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.157 [2024-10-01 15:55:26.413617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.157 [2024-10-01 15:55:26.421922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.157 [2024-10-01 15:55:26.422241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.157 [2024-10-01 15:55:26.422258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.157 [2024-10-01 15:55:26.431406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.157 [2024-10-01 15:55:26.431465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.157 [2024-10-01 15:55:26.431481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.157 [2024-10-01 15:55:26.438867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.157 [2024-10-01 15:55:26.438924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.157 [2024-10-01 15:55:26.438939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.157 [2024-10-01 15:55:26.447045] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.157 [2024-10-01 15:55:26.447110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.157 [2024-10-01 15:55:26.447126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.157 [2024-10-01 15:55:26.452550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.157 [2024-10-01 15:55:26.452593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.157 [2024-10-01 15:55:26.452608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.157 [2024-10-01 15:55:26.457702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.157 [2024-10-01 15:55:26.457787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.157 [2024-10-01 15:55:26.457805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.157 [2024-10-01 15:55:26.466280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.157 [2024-10-01 15:55:26.466354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.157 [2024-10-01 15:55:26.466369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:37:47.157 [2024-10-01 15:55:26.471363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.157 [2024-10-01 15:55:26.471426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.157 [2024-10-01 15:55:26.471441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.157 [2024-10-01 15:55:26.481062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.157 [2024-10-01 15:55:26.481355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.157 [2024-10-01 15:55:26.481370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.157 [2024-10-01 15:55:26.489367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.157 [2024-10-01 15:55:26.489626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.157 [2024-10-01 15:55:26.489641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.157 [2024-10-01 15:55:26.498984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.157 [2024-10-01 15:55:26.499258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.157 [2024-10-01 15:55:26.499274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.157 [2024-10-01 15:55:26.510603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.157 [2024-10-01 15:55:26.510909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.157 [2024-10-01 15:55:26.510925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.157 [2024-10-01 15:55:26.521826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.157 [2024-10-01 15:55:26.522156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.157 [2024-10-01 15:55:26.522172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.157 [2024-10-01 15:55:26.533263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.157 [2024-10-01 15:55:26.533332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.157 [2024-10-01 15:55:26.533346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.157 [2024-10-01 15:55:26.543540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.157 [2024-10-01 15:55:26.543829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.157 [2024-10-01 15:55:26.543845] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.157 [2024-10-01 15:55:26.554606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.157 [2024-10-01 15:55:26.554916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.157 [2024-10-01 15:55:26.554932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.157 [2024-10-01 15:55:26.565901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.157 [2024-10-01 15:55:26.566170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.157 [2024-10-01 15:55:26.566186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.157 [2024-10-01 15:55:26.577568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.157 [2024-10-01 15:55:26.577860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.157 [2024-10-01 15:55:26.577876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.157 [2024-10-01 15:55:26.585779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.157 [2024-10-01 15:55:26.585834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:47.157 [2024-10-01 15:55:26.585849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.157 [2024-10-01 15:55:26.596215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.157 [2024-10-01 15:55:26.596277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.157 [2024-10-01 15:55:26.596292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.157 [2024-10-01 15:55:26.601823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.157 [2024-10-01 15:55:26.601897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.157 [2024-10-01 15:55:26.601913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.157 [2024-10-01 15:55:26.609557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.157 [2024-10-01 15:55:26.609602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.157 [2024-10-01 15:55:26.609618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.419 [2024-10-01 15:55:26.616883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.419 [2024-10-01 15:55:26.617210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.419 [2024-10-01 15:55:26.617226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.419 [2024-10-01 15:55:26.623162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.419 [2024-10-01 15:55:26.623218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.419 [2024-10-01 15:55:26.623233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.419 [2024-10-01 15:55:26.633073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.419 [2024-10-01 15:55:26.633302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.419 [2024-10-01 15:55:26.633318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.419 [2024-10-01 15:55:26.641940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.419 [2024-10-01 15:55:26.642000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.419 [2024-10-01 15:55:26.642015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.419 [2024-10-01 15:55:26.647702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.419 [2024-10-01 15:55:26.647758] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.419 [2024-10-01 15:55:26.647773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.419 [2024-10-01 15:55:26.655138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.419 [2024-10-01 15:55:26.655207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.419 [2024-10-01 15:55:26.655223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.419 [2024-10-01 15:55:26.665220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.419 [2024-10-01 15:55:26.665263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.419 [2024-10-01 15:55:26.665278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.419 [2024-10-01 15:55:26.673285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.419 [2024-10-01 15:55:26.673347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.419 [2024-10-01 15:55:26.673362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.419 [2024-10-01 15:55:26.680859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 
00:37:47.419 [2024-10-01 15:55:26.680906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.419 [2024-10-01 15:55:26.680922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.419 [2024-10-01 15:55:26.686904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.419 [2024-10-01 15:55:26.686963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.419 [2024-10-01 15:55:26.686981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.419 [2024-10-01 15:55:26.695539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.419 [2024-10-01 15:55:26.695731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.419 [2024-10-01 15:55:26.695746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.419 [2024-10-01 15:55:26.702822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.419 [2024-10-01 15:55:26.702864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.419 [2024-10-01 15:55:26.702880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.419 [2024-10-01 15:55:26.712175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.419 [2024-10-01 15:55:26.712407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.419 [2024-10-01 15:55:26.712423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.419 [2024-10-01 15:55:26.722748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.419 [2024-10-01 15:55:26.722817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.419 [2024-10-01 15:55:26.722832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.419 [2024-10-01 15:55:26.733941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.419 [2024-10-01 15:55:26.734015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.419 [2024-10-01 15:55:26.734030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.419 [2024-10-01 15:55:26.743715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.419 [2024-10-01 15:55:26.743784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.419 [2024-10-01 15:55:26.743799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.419 [2024-10-01 15:55:26.752091] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.419 [2024-10-01 15:55:26.752270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.419 [2024-10-01 15:55:26.752286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.419 [2024-10-01 15:55:26.761288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.419 [2024-10-01 15:55:26.761341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.419 [2024-10-01 15:55:26.761356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.419 [2024-10-01 15:55:26.770192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.419 [2024-10-01 15:55:26.770312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.419 [2024-10-01 15:55:26.770326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.419 [2024-10-01 15:55:26.777522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.419 [2024-10-01 15:55:26.777567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.419 [2024-10-01 15:55:26.777582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:37:47.419 [2024-10-01 15:55:26.785770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.419 [2024-10-01 15:55:26.785818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.419 [2024-10-01 15:55:26.785832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.419 [2024-10-01 15:55:26.793437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.419 [2024-10-01 15:55:26.793496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.419 [2024-10-01 15:55:26.793511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.419 [2024-10-01 15:55:26.802870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.419 [2024-10-01 15:55:26.802941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.419 [2024-10-01 15:55:26.802957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.419 [2024-10-01 15:55:26.810429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.419 [2024-10-01 15:55:26.810476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.419 [2024-10-01 15:55:26.810491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.420 [2024-10-01 15:55:26.819263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.420 [2024-10-01 15:55:26.819311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.420 [2024-10-01 15:55:26.819327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.420 [2024-10-01 15:55:26.827247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.420 [2024-10-01 15:55:26.827328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.420 [2024-10-01 15:55:26.827343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.420 [2024-10-01 15:55:26.835991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.420 [2024-10-01 15:55:26.836035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.420 [2024-10-01 15:55:26.836053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.420 [2024-10-01 15:55:26.840141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.420 [2024-10-01 15:55:26.840222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.420 [2024-10-01 15:55:26.840238] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.420 3504.00 IOPS, 438.00 MiB/s [2024-10-01 15:55:26.849149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.420 [2024-10-01 15:55:26.849406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.420 [2024-10-01 15:55:26.849422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.420 [2024-10-01 15:55:26.857476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.420 [2024-10-01 15:55:26.857552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.420 [2024-10-01 15:55:26.857567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.420 [2024-10-01 15:55:26.865730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.420 [2024-10-01 15:55:26.865868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.420 [2024-10-01 15:55:26.865884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.682 [2024-10-01 15:55:26.874274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.682 [2024-10-01 15:55:26.874341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:37:47.682 [2024-10-01 15:55:26.874356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.682 [2024-10-01 15:55:26.882330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.682 [2024-10-01 15:55:26.882556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.682 [2024-10-01 15:55:26.882571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.682 [2024-10-01 15:55:26.889798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.682 [2024-10-01 15:55:26.889905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.682 [2024-10-01 15:55:26.889921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.682 [2024-10-01 15:55:26.898209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.682 [2024-10-01 15:55:26.898283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.682 [2024-10-01 15:55:26.898298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.682 [2024-10-01 15:55:26.903237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.682 [2024-10-01 15:55:26.903310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.682 [2024-10-01 15:55:26.903326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.682 [2024-10-01 15:55:26.909040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.682 [2024-10-01 15:55:26.909260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.682 [2024-10-01 15:55:26.909276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.682 [2024-10-01 15:55:26.913130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.682 [2024-10-01 15:55:26.913184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.682 [2024-10-01 15:55:26.913200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.682 [2024-10-01 15:55:26.916029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.682 [2024-10-01 15:55:26.916085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.682 [2024-10-01 15:55:26.916101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.682 [2024-10-01 15:55:26.918911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.682 [2024-10-01 15:55:26.918983] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.682 [2024-10-01 15:55:26.918999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.682 [2024-10-01 15:55:26.921663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.682 [2024-10-01 15:55:26.921718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.682 [2024-10-01 15:55:26.921733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.682 [2024-10-01 15:55:26.924473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.682 [2024-10-01 15:55:26.924527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.682 [2024-10-01 15:55:26.924543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.682 [2024-10-01 15:55:26.927337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.682 [2024-10-01 15:55:26.927393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.682 [2024-10-01 15:55:26.927408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.682 [2024-10-01 15:55:26.931679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 
00:37:47.682 [2024-10-01 15:55:26.931937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.682 [2024-10-01 15:55:26.931952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.682 [2024-10-01 15:55:26.939389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.682 [2024-10-01 15:55:26.939672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.682 [2024-10-01 15:55:26.939689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.683 [2024-10-01 15:55:26.945047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.683 [2024-10-01 15:55:26.945117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.683 [2024-10-01 15:55:26.945132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.683 [2024-10-01 15:55:26.952716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.683 [2024-10-01 15:55:26.953019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.683 [2024-10-01 15:55:26.953036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.683 [2024-10-01 15:55:26.958152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.683 [2024-10-01 15:55:26.958206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.683 [2024-10-01 15:55:26.958222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.683 [2024-10-01 15:55:26.963101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.683 [2024-10-01 15:55:26.963187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.683 [2024-10-01 15:55:26.963203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.683 [2024-10-01 15:55:26.971077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.683 [2024-10-01 15:55:26.971141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.683 [2024-10-01 15:55:26.971157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.683 [2024-10-01 15:55:26.979225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.683 [2024-10-01 15:55:26.979520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.683 [2024-10-01 15:55:26.979537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.683 [2024-10-01 15:55:26.987299] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.683 [2024-10-01 15:55:26.987362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.683 [2024-10-01 15:55:26.987377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.683 [2024-10-01 15:55:26.995709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.683 [2024-10-01 15:55:26.995883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.683 [2024-10-01 15:55:26.995907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.683 [2024-10-01 15:55:27.005206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.683 [2024-10-01 15:55:27.005471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.683 [2024-10-01 15:55:27.005486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.683 [2024-10-01 15:55:27.011989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.683 [2024-10-01 15:55:27.012148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.683 [2024-10-01 15:55:27.012164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:37:47.683 [2024-10-01 15:55:27.021012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.683 [2024-10-01 15:55:27.021076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.683 [2024-10-01 15:55:27.021092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.683 [2024-10-01 15:55:27.028651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.683 [2024-10-01 15:55:27.028711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.683 [2024-10-01 15:55:27.028727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.683 [2024-10-01 15:55:27.035231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.683 [2024-10-01 15:55:27.035307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.683 [2024-10-01 15:55:27.035323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.683 [2024-10-01 15:55:27.040692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.683 [2024-10-01 15:55:27.040757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.683 [2024-10-01 15:55:27.040773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.683 [2024-10-01 15:55:27.045003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.683 [2024-10-01 15:55:27.045314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.683 [2024-10-01 15:55:27.045331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.683 [2024-10-01 15:55:27.050400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.683 [2024-10-01 15:55:27.050467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.683 [2024-10-01 15:55:27.050483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.683 [2024-10-01 15:55:27.053401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.683 [2024-10-01 15:55:27.053459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.683 [2024-10-01 15:55:27.053475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.683 [2024-10-01 15:55:27.057951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.683 [2024-10-01 15:55:27.058002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.683 [2024-10-01 15:55:27.058017] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.683 [2024-10-01 15:55:27.061178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.683 [2024-10-01 15:55:27.061229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.683 [2024-10-01 15:55:27.061245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.683 [2024-10-01 15:55:27.064199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.683 [2024-10-01 15:55:27.064241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.683 [2024-10-01 15:55:27.064257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.683 [2024-10-01 15:55:27.067826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.683 [2024-10-01 15:55:27.068117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.683 [2024-10-01 15:55:27.068133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.683 [2024-10-01 15:55:27.073332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.683 [2024-10-01 15:55:27.073410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:47.683 [2024-10-01 15:55:27.073425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.683 [2024-10-01 15:55:27.078546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.683 [2024-10-01 15:55:27.078752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.683 [2024-10-01 15:55:27.078767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.683 [2024-10-01 15:55:27.083271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.683 [2024-10-01 15:55:27.083335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.683 [2024-10-01 15:55:27.083350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.683 [2024-10-01 15:55:27.090493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.683 [2024-10-01 15:55:27.090539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.683 [2024-10-01 15:55:27.090555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.683 [2024-10-01 15:55:27.094691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.683 [2024-10-01 15:55:27.094746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.683 [2024-10-01 15:55:27.094761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.683 [2024-10-01 15:55:27.103135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.683 [2024-10-01 15:55:27.103394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.683 [2024-10-01 15:55:27.103409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.684 [2024-10-01 15:55:27.111990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.684 [2024-10-01 15:55:27.112264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.684 [2024-10-01 15:55:27.112280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.684 [2024-10-01 15:55:27.122026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.684 [2024-10-01 15:55:27.122244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.684 [2024-10-01 15:55:27.122259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.684 [2024-10-01 15:55:27.132457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.684 [2024-10-01 15:55:27.132543] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.684 [2024-10-01 15:55:27.132559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.945 [2024-10-01 15:55:27.142811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.945 [2024-10-01 15:55:27.143121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.945 [2024-10-01 15:55:27.143139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.945 [2024-10-01 15:55:27.153469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.945 [2024-10-01 15:55:27.153697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.945 [2024-10-01 15:55:27.153712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.946 [2024-10-01 15:55:27.164125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.946 [2024-10-01 15:55:27.164412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.946 [2024-10-01 15:55:27.164428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.946 [2024-10-01 15:55:27.174242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 
00:37:47.946 [2024-10-01 15:55:27.174436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.946 [2024-10-01 15:55:27.174454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.946 [2024-10-01 15:55:27.184801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.946 [2024-10-01 15:55:27.184986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.946 [2024-10-01 15:55:27.185002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.946 [2024-10-01 15:55:27.195313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.946 [2024-10-01 15:55:27.195583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.946 [2024-10-01 15:55:27.195599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.946 [2024-10-01 15:55:27.206011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.946 [2024-10-01 15:55:27.206251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.946 [2024-10-01 15:55:27.206266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.946 [2024-10-01 15:55:27.216702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.946 [2024-10-01 15:55:27.216929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.946 [2024-10-01 15:55:27.216945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.946 [2024-10-01 15:55:27.225031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.946 [2024-10-01 15:55:27.225095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.946 [2024-10-01 15:55:27.225110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.946 [2024-10-01 15:55:27.232687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.946 [2024-10-01 15:55:27.232785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.946 [2024-10-01 15:55:27.232800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.946 [2024-10-01 15:55:27.240490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.946 [2024-10-01 15:55:27.240551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.946 [2024-10-01 15:55:27.240567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.946 [2024-10-01 15:55:27.244448] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.946 [2024-10-01 15:55:27.244528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.946 [2024-10-01 15:55:27.244543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.946 [2024-10-01 15:55:27.248367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.946 [2024-10-01 15:55:27.248416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.946 [2024-10-01 15:55:27.248431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.946 [2024-10-01 15:55:27.252098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.946 [2024-10-01 15:55:27.252142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.946 [2024-10-01 15:55:27.252158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.946 [2024-10-01 15:55:27.255616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.946 [2024-10-01 15:55:27.255659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.946 [2024-10-01 15:55:27.255674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:37:47.946 [2024-10-01 15:55:27.259313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.946 [2024-10-01 15:55:27.259360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.946 [2024-10-01 15:55:27.259375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.946 [2024-10-01 15:55:27.264529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.946 [2024-10-01 15:55:27.264577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.946 [2024-10-01 15:55:27.264593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.946 [2024-10-01 15:55:27.268604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.946 [2024-10-01 15:55:27.268647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.946 [2024-10-01 15:55:27.268662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.946 [2024-10-01 15:55:27.273470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.946 [2024-10-01 15:55:27.273522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.946 [2024-10-01 15:55:27.273537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.946 [2024-10-01 15:55:27.279037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.946 [2024-10-01 15:55:27.279296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.946 [2024-10-01 15:55:27.279312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.946 [2024-10-01 15:55:27.287302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.946 [2024-10-01 15:55:27.287348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.946 [2024-10-01 15:55:27.287363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.946 [2024-10-01 15:55:27.292703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.946 [2024-10-01 15:55:27.292777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.946 [2024-10-01 15:55:27.292792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.946 [2024-10-01 15:55:27.298637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.946 [2024-10-01 15:55:27.298683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.946 [2024-10-01 15:55:27.298698] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.946 [2024-10-01 15:55:27.302112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.946 [2024-10-01 15:55:27.302155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.946 [2024-10-01 15:55:27.302171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.946 [2024-10-01 15:55:27.306117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.946 [2024-10-01 15:55:27.306161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.946 [2024-10-01 15:55:27.306177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.946 [2024-10-01 15:55:27.311703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.946 [2024-10-01 15:55:27.311747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.946 [2024-10-01 15:55:27.311762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.946 [2024-10-01 15:55:27.315447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.946 [2024-10-01 15:55:27.315521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:47.946 [2024-10-01 15:55:27.315536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.946 [2024-10-01 15:55:27.319351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.947 [2024-10-01 15:55:27.319393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.947 [2024-10-01 15:55:27.319408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.947 [2024-10-01 15:55:27.323970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.947 [2024-10-01 15:55:27.324013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.947 [2024-10-01 15:55:27.324029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.947 [2024-10-01 15:55:27.327482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.947 [2024-10-01 15:55:27.327530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.947 [2024-10-01 15:55:27.327549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.947 [2024-10-01 15:55:27.330615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.947 [2024-10-01 15:55:27.330659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.947 [2024-10-01 15:55:27.330674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.947 [2024-10-01 15:55:27.333651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.947 [2024-10-01 15:55:27.333694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.947 [2024-10-01 15:55:27.333710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.947 [2024-10-01 15:55:27.336650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.947 [2024-10-01 15:55:27.336693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.947 [2024-10-01 15:55:27.336708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.947 [2024-10-01 15:55:27.339514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.947 [2024-10-01 15:55:27.339558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.947 [2024-10-01 15:55:27.339574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.947 [2024-10-01 15:55:27.342362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.947 [2024-10-01 15:55:27.342406] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.947 [2024-10-01 15:55:27.342422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.947 [2024-10-01 15:55:27.345530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.947 [2024-10-01 15:55:27.345572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.947 [2024-10-01 15:55:27.345588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.947 [2024-10-01 15:55:27.349852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.947 [2024-10-01 15:55:27.349928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.947 [2024-10-01 15:55:27.349944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.947 [2024-10-01 15:55:27.353505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.947 [2024-10-01 15:55:27.353566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.947 [2024-10-01 15:55:27.353581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.947 [2024-10-01 15:55:27.360413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 
00:37:47.947 [2024-10-01 15:55:27.360486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.947 [2024-10-01 15:55:27.360501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.947 [2024-10-01 15:55:27.363327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.947 [2024-10-01 15:55:27.363419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.947 [2024-10-01 15:55:27.363435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.947 [2024-10-01 15:55:27.367643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.947 [2024-10-01 15:55:27.367716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.947 [2024-10-01 15:55:27.367731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.947 [2024-10-01 15:55:27.375219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.947 [2024-10-01 15:55:27.375450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.947 [2024-10-01 15:55:27.375465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.947 [2024-10-01 15:55:27.379462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.947 [2024-10-01 15:55:27.379524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.947 [2024-10-01 15:55:27.379540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.947 [2024-10-01 15:55:27.383083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.947 [2024-10-01 15:55:27.383159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.947 [2024-10-01 15:55:27.383174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.947 [2024-10-01 15:55:27.390433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.947 [2024-10-01 15:55:27.390493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.947 [2024-10-01 15:55:27.390510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.947 [2024-10-01 15:55:27.397094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:47.947 [2024-10-01 15:55:27.397342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.947 [2024-10-01 15:55:27.397358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:48.209 [2024-10-01 15:55:27.406908] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.209 [2024-10-01 15:55:27.407197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.209 [2024-10-01 15:55:27.407213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:48.209 [2024-10-01 15:55:27.416768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.209 [2024-10-01 15:55:27.417043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.209 [2024-10-01 15:55:27.417059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:48.209 [2024-10-01 15:55:27.426150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.209 [2024-10-01 15:55:27.426454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.209 [2024-10-01 15:55:27.426470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:48.209 [2024-10-01 15:55:27.433917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.209 [2024-10-01 15:55:27.433976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.209 [2024-10-01 15:55:27.433992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:37:48.209 [2024-10-01 15:55:27.441044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.209 [2024-10-01 15:55:27.441326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.209 [2024-10-01 15:55:27.441347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:48.209 [2024-10-01 15:55:27.447186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.209 [2024-10-01 15:55:27.447491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.209 [2024-10-01 15:55:27.447508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:48.209 [2024-10-01 15:55:27.453703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.209 [2024-10-01 15:55:27.453746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.209 [2024-10-01 15:55:27.453761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:48.209 [2024-10-01 15:55:27.459851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.209 [2024-10-01 15:55:27.459902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.209 [2024-10-01 15:55:27.459918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:48.209 [2024-10-01 15:55:27.465431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.209 [2024-10-01 15:55:27.465476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.209 [2024-10-01 15:55:27.465491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:48.209 [2024-10-01 15:55:27.469684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.209 [2024-10-01 15:55:27.469726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.209 [2024-10-01 15:55:27.469747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:48.209 [2024-10-01 15:55:27.475669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.209 [2024-10-01 15:55:27.475943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.209 [2024-10-01 15:55:27.475959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:48.209 [2024-10-01 15:55:27.484663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.209 [2024-10-01 15:55:27.484729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.209 [2024-10-01 15:55:27.484744] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:48.209 [2024-10-01 15:55:27.491719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.209 [2024-10-01 15:55:27.491789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.209 [2024-10-01 15:55:27.491805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:48.209 [2024-10-01 15:55:27.496719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.209 [2024-10-01 15:55:27.496784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.209 [2024-10-01 15:55:27.496800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:48.209 [2024-10-01 15:55:27.505932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.209 [2024-10-01 15:55:27.506164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.209 [2024-10-01 15:55:27.506179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:48.209 [2024-10-01 15:55:27.515631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.209 [2024-10-01 15:55:27.515919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.209 [2024-10-01 15:55:27.515934] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:48.209 [2024-10-01 15:55:27.525289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.209 [2024-10-01 15:55:27.525562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.209 [2024-10-01 15:55:27.525578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:48.209 [2024-10-01 15:55:27.531292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.209 [2024-10-01 15:55:27.531352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.209 [2024-10-01 15:55:27.531368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:48.209 [2024-10-01 15:55:27.535808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.209 [2024-10-01 15:55:27.535918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.209 [2024-10-01 15:55:27.535933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:48.209 [2024-10-01 15:55:27.542411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.209 [2024-10-01 15:55:27.542640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:37:48.209 [2024-10-01 15:55:27.542655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:48.209 [2024-10-01 15:55:27.552621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.209 [2024-10-01 15:55:27.552675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.209 [2024-10-01 15:55:27.552691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:48.209 [2024-10-01 15:55:27.562856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.209 [2024-10-01 15:55:27.563130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.209 [2024-10-01 15:55:27.563146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:48.209 [2024-10-01 15:55:27.572254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.209 [2024-10-01 15:55:27.572517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.209 [2024-10-01 15:55:27.572532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:48.209 [2024-10-01 15:55:27.581981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.210 [2024-10-01 15:55:27.582268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.210 [2024-10-01 15:55:27.582283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:48.210 [2024-10-01 15:55:27.591758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.210 [2024-10-01 15:55:27.592009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.210 [2024-10-01 15:55:27.592024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:48.210 [2024-10-01 15:55:27.601840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.210 [2024-10-01 15:55:27.602127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.210 [2024-10-01 15:55:27.602143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:48.210 [2024-10-01 15:55:27.612820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.210 [2024-10-01 15:55:27.613047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.210 [2024-10-01 15:55:27.613063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:48.210 [2024-10-01 15:55:27.618838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.210 [2024-10-01 15:55:27.618908] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.210 [2024-10-01 15:55:27.618924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:48.210 [2024-10-01 15:55:27.623679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.210 [2024-10-01 15:55:27.623745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.210 [2024-10-01 15:55:27.623761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:48.210 [2024-10-01 15:55:27.627518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.210 [2024-10-01 15:55:27.627565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.210 [2024-10-01 15:55:27.627580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:48.210 [2024-10-01 15:55:27.631296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.210 [2024-10-01 15:55:27.631361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.210 [2024-10-01 15:55:27.631376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:48.210 [2024-10-01 15:55:27.635183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.210 [2024-10-01 15:55:27.635228] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.210 [2024-10-01 15:55:27.635243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:48.210 [2024-10-01 15:55:27.641320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.210 [2024-10-01 15:55:27.641366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.210 [2024-10-01 15:55:27.641384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:48.210 [2024-10-01 15:55:27.645232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.210 [2024-10-01 15:55:27.645279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.210 [2024-10-01 15:55:27.645295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:48.210 [2024-10-01 15:55:27.649149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.210 [2024-10-01 15:55:27.649201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.210 [2024-10-01 15:55:27.649217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:48.210 [2024-10-01 15:55:27.652851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 
00:37:48.210 [2024-10-01 15:55:27.652921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.210 [2024-10-01 15:55:27.652940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:48.210 [2024-10-01 15:55:27.657031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.210 [2024-10-01 15:55:27.657078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.210 [2024-10-01 15:55:27.657093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:48.210 [2024-10-01 15:55:27.661988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.210 [2024-10-01 15:55:27.662078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.210 [2024-10-01 15:55:27.662093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:48.472 [2024-10-01 15:55:27.667097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.472 [2024-10-01 15:55:27.667140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.472 [2024-10-01 15:55:27.667155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:48.472 [2024-10-01 15:55:27.670599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.472 [2024-10-01 15:55:27.670660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.472 [2024-10-01 15:55:27.670675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:48.472 [2024-10-01 15:55:27.674297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.472 [2024-10-01 15:55:27.674342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.472 [2024-10-01 15:55:27.674357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:48.472 [2024-10-01 15:55:27.677768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.472 [2024-10-01 15:55:27.677819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.472 [2024-10-01 15:55:27.677834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:48.472 [2024-10-01 15:55:27.681944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.472 [2024-10-01 15:55:27.682003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.472 [2024-10-01 15:55:27.682018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:48.472 [2024-10-01 15:55:27.686493] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.472 [2024-10-01 15:55:27.686539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.472 [2024-10-01 15:55:27.686553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:48.472 [2024-10-01 15:55:27.689781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.472 [2024-10-01 15:55:27.689835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.472 [2024-10-01 15:55:27.689850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:48.472 [2024-10-01 15:55:27.692716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.472 [2024-10-01 15:55:27.692773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.472 [2024-10-01 15:55:27.692789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:48.472 [2024-10-01 15:55:27.695706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.472 [2024-10-01 15:55:27.695752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.472 [2024-10-01 15:55:27.695768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:37:48.472 [2024-10-01 15:55:27.698973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.472 [2024-10-01 15:55:27.699016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.472 [2024-10-01 15:55:27.699032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:48.472 [2024-10-01 15:55:27.702321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.472 [2024-10-01 15:55:27.702366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.472 [2024-10-01 15:55:27.702381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:48.472 [2024-10-01 15:55:27.705670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.472 [2024-10-01 15:55:27.705727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.472 [2024-10-01 15:55:27.705742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:48.472 [2024-10-01 15:55:27.708818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.472 [2024-10-01 15:55:27.708863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.472 [2024-10-01 15:55:27.708879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:48.472 [2024-10-01 15:55:27.712445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.472 [2024-10-01 15:55:27.712489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.472 [2024-10-01 15:55:27.712505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:48.472 [2024-10-01 15:55:27.717227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.472 [2024-10-01 15:55:27.717270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.472 [2024-10-01 15:55:27.717285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:48.472 [2024-10-01 15:55:27.722252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.472 [2024-10-01 15:55:27.722311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.472 [2024-10-01 15:55:27.722327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:48.472 [2024-10-01 15:55:27.725766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.472 [2024-10-01 15:55:27.725816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.472 [2024-10-01 15:55:27.725832] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:48.472 [2024-10-01 15:55:27.729410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.472 [2024-10-01 15:55:27.729455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.472 [2024-10-01 15:55:27.729470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:48.472 [2024-10-01 15:55:27.733312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.472 [2024-10-01 15:55:27.733356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.472 [2024-10-01 15:55:27.733372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:48.473 [2024-10-01 15:55:27.737195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.473 [2024-10-01 15:55:27.737241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.473 [2024-10-01 15:55:27.737257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:48.473 [2024-10-01 15:55:27.740887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.473 [2024-10-01 15:55:27.740937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.473 [2024-10-01 15:55:27.740952] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:48.473 [2024-10-01 15:55:27.744062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.473 [2024-10-01 15:55:27.744105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.473 [2024-10-01 15:55:27.744121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:48.473 [2024-10-01 15:55:27.747568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.473 [2024-10-01 15:55:27.747613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.473 [2024-10-01 15:55:27.747629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:48.473 [2024-10-01 15:55:27.750718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.473 [2024-10-01 15:55:27.750762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.473 [2024-10-01 15:55:27.750780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:48.473 [2024-10-01 15:55:27.753685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.473 [2024-10-01 15:55:27.753729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:48.473 [2024-10-01 15:55:27.753744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:48.473 [2024-10-01 15:55:27.757031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.473 [2024-10-01 15:55:27.757077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.473 [2024-10-01 15:55:27.757092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:48.473 [2024-10-01 15:55:27.761560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.473 [2024-10-01 15:55:27.761605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.473 [2024-10-01 15:55:27.761620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:48.473 [2024-10-01 15:55:27.767137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.473 [2024-10-01 15:55:27.767204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.473 [2024-10-01 15:55:27.767219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:48.473 [2024-10-01 15:55:27.771110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.473 [2024-10-01 15:55:27.771175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.473 [2024-10-01 15:55:27.771190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:48.473 [2024-10-01 15:55:27.779965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.473 [2024-10-01 15:55:27.780270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.473 [2024-10-01 15:55:27.780285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:48.473 [2024-10-01 15:55:27.789095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.473 [2024-10-01 15:55:27.789266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.473 [2024-10-01 15:55:27.789282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:48.473 [2024-10-01 15:55:27.797979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.473 [2024-10-01 15:55:27.798238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.473 [2024-10-01 15:55:27.798254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:48.473 [2024-10-01 15:55:27.807167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.473 [2024-10-01 15:55:27.807443] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.473 [2024-10-01 15:55:27.807459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:48.473 [2024-10-01 15:55:27.817352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.473 [2024-10-01 15:55:27.817659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.473 [2024-10-01 15:55:27.817675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:48.473 [2024-10-01 15:55:27.822181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.473 [2024-10-01 15:55:27.822224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.473 [2024-10-01 15:55:27.822240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:48.473 [2024-10-01 15:55:27.825940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.473 [2024-10-01 15:55:27.825998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.473 [2024-10-01 15:55:27.826013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:48.473 [2024-10-01 15:55:27.829828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.473 [2024-10-01 15:55:27.829881] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.473 [2024-10-01 15:55:27.829902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:48.473 [2024-10-01 15:55:27.835463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.473 [2024-10-01 15:55:27.835517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.473 [2024-10-01 15:55:27.835532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:48.473 [2024-10-01 15:55:27.841790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe9c610) with pdu=0x2000198fef90 00:37:48.473 [2024-10-01 15:55:27.841834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.473 [2024-10-01 15:55:27.841850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:48.473 4391.50 IOPS, 548.94 MiB/s 00:37:48.473 Latency(us) 00:37:48.473 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:48.473 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:37:48.473 nvme0n1 : 2.00 4394.54 549.32 0.00 0.00 3636.83 1276.59 13216.43 00:37:48.473 =================================================================================================================== 00:37:48.473 Total : 4394.54 549.32 0.00 0.00 3636.83 1276.59 13216.43 00:37:48.473 { 00:37:48.473 "results": [ 00:37:48.473 { 00:37:48.473 "job": "nvme0n1", 00:37:48.473 "core_mask": "0x2", 00:37:48.473 
"workload": "randwrite", 00:37:48.473 "status": "finished", 00:37:48.473 "queue_depth": 16, 00:37:48.473 "io_size": 131072, 00:37:48.473 "runtime": 2.002939, 00:37:48.473 "iops": 4394.542220207405, 00:37:48.473 "mibps": 549.3177775259256, 00:37:48.473 "io_failed": 0, 00:37:48.473 "io_timeout": 0, 00:37:48.473 "avg_latency_us": 3636.834925395743, 00:37:48.473 "min_latency_us": 1276.5866666666666, 00:37:48.473 "max_latency_us": 13216.426666666666 00:37:48.473 } 00:37:48.473 ], 00:37:48.473 "core_count": 1 00:37:48.473 } 00:37:48.473 15:55:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:37:48.473 15:55:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:37:48.473 15:55:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:37:48.473 15:55:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:37:48.473 | .driver_specific 00:37:48.473 | .nvme_error 00:37:48.473 | .status_code 00:37:48.473 | .command_transient_transport_error' 00:37:48.734 15:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 283 > 0 )) 00:37:48.734 15:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3397977 00:37:48.734 15:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3397977 ']' 00:37:48.734 15:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3397977 00:37:48.734 15:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:37:48.734 15:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:48.734 15:55:28 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3397977 00:37:48.734 15:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:48.734 15:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:48.734 15:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3397977' 00:37:48.734 killing process with pid 3397977 00:37:48.734 15:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3397977 00:37:48.734 Received shutdown signal, test time was about 2.000000 seconds 00:37:48.734 00:37:48.734 Latency(us) 00:37:48.734 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:48.734 =================================================================================================================== 00:37:48.734 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:48.734 15:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3397977 00:37:48.994 15:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3395603 00:37:48.994 15:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3395603 ']' 00:37:48.994 15:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3395603 00:37:48.994 15:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:37:48.994 15:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:48.994 15:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3395603 00:37:48.994 15:55:28 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:48.994 15:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:48.994 15:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3395603' 00:37:48.994 killing process with pid 3395603 00:37:48.994 15:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3395603 00:37:48.994 15:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3395603 00:37:48.994 00:37:48.994 real 0m16.337s 00:37:48.994 user 0m32.464s 00:37:48.994 sys 0m3.503s 00:37:48.994 15:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:48.994 15:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:48.994 ************************************ 00:37:48.994 END TEST nvmf_digest_error 00:37:48.994 ************************************ 00:37:49.255 15:55:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:37:49.255 15:55:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:37:49.255 15:55:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@512 -- # nvmfcleanup 00:37:49.255 15:55:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:37:49.255 15:55:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:49.255 15:55:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:37:49.255 15:55:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:49.255 15:55:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:49.255 rmmod nvme_tcp 00:37:49.255 rmmod nvme_fabrics 00:37:49.255 
rmmod nvme_keyring 00:37:49.255 15:55:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:49.255 15:55:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:37:49.255 15:55:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:37:49.255 15:55:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@513 -- # '[' -n 3395603 ']' 00:37:49.255 15:55:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # killprocess 3395603 00:37:49.255 15:55:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 3395603 ']' 00:37:49.255 15:55:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 3395603 00:37:49.255 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3395603) - No such process 00:37:49.255 15:55:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 3395603 is not found' 00:37:49.255 Process with pid 3395603 is not found 00:37:49.255 15:55:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:37:49.255 15:55:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:37:49.255 15:55:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:37:49.255 15:55:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:37:49.255 15:55:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # iptables-save 00:37:49.255 15:55:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:37:49.255 15:55:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # iptables-restore 00:37:49.256 15:55:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:49.256 15:55:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:49.256 15:55:28 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:49.256 15:55:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:49.256 15:55:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:51.167 15:55:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:51.428 00:37:51.428 real 0m42.360s 00:37:51.428 user 1m6.181s 00:37:51.428 sys 0m12.876s 00:37:51.428 15:55:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:51.428 15:55:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:51.428 ************************************ 00:37:51.428 END TEST nvmf_digest 00:37:51.428 ************************************ 00:37:51.428 15:55:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:37:51.428 15:55:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:37:51.428 15:55:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:37:51.428 15:55:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:37:51.428 15:55:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:37:51.428 15:55:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:51.428 15:55:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:51.428 ************************************ 00:37:51.428 START TEST nvmf_bdevperf 00:37:51.428 ************************************ 00:37:51.428 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:37:51.428 * Looking for test storage... 
00:37:51.428 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:51.428 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:51.428 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:37:51.428 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:51.689 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:51.689 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:51.689 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:51.689 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:51.689 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:37:51.689 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:37:51.689 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:37:51.689 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:37:51.689 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:37:51.689 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:37:51.689 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:37:51.689 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:51.689 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:37:51.689 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:37:51.689 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:51.689 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:51.689 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:37:51.689 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:37:51.689 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:51.689 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:37:51.689 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:37:51.689 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:37:51.689 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:37:51.689 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:51.689 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:37:51.689 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:51.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.690 --rc genhtml_branch_coverage=1 00:37:51.690 --rc genhtml_function_coverage=1 00:37:51.690 --rc genhtml_legend=1 00:37:51.690 --rc geninfo_all_blocks=1 00:37:51.690 --rc geninfo_unexecuted_blocks=1 00:37:51.690 00:37:51.690 ' 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- 
# LCOV_OPTS=' 00:37:51.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.690 --rc genhtml_branch_coverage=1 00:37:51.690 --rc genhtml_function_coverage=1 00:37:51.690 --rc genhtml_legend=1 00:37:51.690 --rc geninfo_all_blocks=1 00:37:51.690 --rc geninfo_unexecuted_blocks=1 00:37:51.690 00:37:51.690 ' 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:51.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.690 --rc genhtml_branch_coverage=1 00:37:51.690 --rc genhtml_function_coverage=1 00:37:51.690 --rc genhtml_legend=1 00:37:51.690 --rc geninfo_all_blocks=1 00:37:51.690 --rc geninfo_unexecuted_blocks=1 00:37:51.690 00:37:51.690 ' 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:51.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.690 --rc genhtml_branch_coverage=1 00:37:51.690 --rc genhtml_function_coverage=1 00:37:51.690 --rc genhtml_legend=1 00:37:51.690 --rc geninfo_all_blocks=1 00:37:51.690 --rc geninfo_unexecuted_blocks=1 00:37:51.690 00:37:51.690 ' 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:51.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@472 -- # prepare_net_devs 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:37:51.690 15:55:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:59.965 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:59.965 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:37:59.965 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:59.965 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:59.965 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:59.965 15:55:38 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:59.965 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:59.965 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:37:59.965 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:59.965 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:37:59.965 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:37:59.965 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:37:59.965 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:37:59.965 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:37:59.965 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:37:59.965 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:59.965 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:59.965 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:59.965 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:59.965 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:59.965 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:59.965 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:59.965 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:59.965 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:59.965 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:59.965 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:59.965 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:37:59.965 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:37:59.965 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:37:59.965 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:37:59.965 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:37:59.965 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:37:59.965 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:59.965 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:37:59.965 Found 0000:31:00.0 (0x8086 - 0x159b) 00:37:59.965 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:59.965 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:59.965 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:59.965 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:59.965 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:59.965 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:59.965 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:37:59.965 Found 
0000:31:00.1 (0x8086 - 0x159b) 00:37:59.965 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:59.965 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:59.965 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:59.965 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:59.965 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:59.965 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:37:59.965 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:37:59.965 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:37:59.965 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:59.965 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:59.965 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:37:59.966 Found net devices under 0000:31:00.0: cvl_0_0 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:37:59.966 Found net devices under 0000:31:00.1: cvl_0_1 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # is_hw=yes 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:59.966 15:55:38 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:59.966 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:59.966 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.561 ms 00:37:59.966 00:37:59.966 --- 10.0.0.2 ping statistics --- 00:37:59.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:59.966 rtt min/avg/max/mdev = 0.561/0.561/0.561/0.000 ms 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:59.966 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:59.966 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:37:59.966 00:37:59.966 --- 10.0.0.1 ping statistics --- 00:37:59.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:59.966 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # return 0 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:37:59.966 15:55:38 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # nvmfpid=3403060 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # waitforlisten 3403060 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 3403060 ']' 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:59.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:59.966 15:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:59.966 [2024-10-01 15:55:38.469861] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
00:37:59.966 [2024-10-01 15:55:38.469924] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:59.966 [2024-10-01 15:55:38.508518] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:37:59.966 [2024-10-01 15:55:38.558166] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:59.966 [2024-10-01 15:55:38.602859] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:59.966 [2024-10-01 15:55:38.602923] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:59.966 [2024-10-01 15:55:38.602932] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:59.966 [2024-10-01 15:55:38.602939] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:59.966 [2024-10-01 15:55:38.602946] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:59.966 [2024-10-01 15:55:38.603057] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:37:59.966 [2024-10-01 15:55:38.603212] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:59.966 [2024-10-01 15:55:38.603214] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:37:59.966 15:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:59.966 15:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:37:59.966 15:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:37:59.966 15:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:59.967 15:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:59.967 15:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:59.967 15:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:59.967 15:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:59.967 15:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:59.967 [2024-10-01 15:55:39.338596] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:59.967 15:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:59.967 15:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:59.967 15:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:59.967 15:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:59.967 Malloc0 00:37:59.967 15:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:37:59.967 15:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:59.967 15:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:59.967 15:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:59.967 15:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:59.967 15:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:59.967 15:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:59.967 15:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:59.967 15:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:59.967 15:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:59.967 15:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:59.967 15:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:59.967 [2024-10-01 15:55:39.412853] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:00.249 15:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:00.249 15:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:38:00.249 15:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:38:00.249 15:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # config=() 00:38:00.249 
15:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # local subsystem config 00:38:00.249 15:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:38:00.249 15:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:38:00.249 { 00:38:00.249 "params": { 00:38:00.249 "name": "Nvme$subsystem", 00:38:00.249 "trtype": "$TEST_TRANSPORT", 00:38:00.249 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:00.249 "adrfam": "ipv4", 00:38:00.249 "trsvcid": "$NVMF_PORT", 00:38:00.249 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:00.249 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:00.249 "hdgst": ${hdgst:-false}, 00:38:00.249 "ddgst": ${ddgst:-false} 00:38:00.249 }, 00:38:00.249 "method": "bdev_nvme_attach_controller" 00:38:00.250 } 00:38:00.250 EOF 00:38:00.250 )") 00:38:00.250 15:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@578 -- # cat 00:38:00.250 15:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # jq . 00:38:00.250 15:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@581 -- # IFS=, 00:38:00.250 15:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:38:00.250 "params": { 00:38:00.250 "name": "Nvme1", 00:38:00.250 "trtype": "tcp", 00:38:00.250 "traddr": "10.0.0.2", 00:38:00.250 "adrfam": "ipv4", 00:38:00.250 "trsvcid": "4420", 00:38:00.250 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:00.250 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:00.250 "hdgst": false, 00:38:00.250 "ddgst": false 00:38:00.250 }, 00:38:00.250 "method": "bdev_nvme_attach_controller" 00:38:00.250 }' 00:38:00.250 [2024-10-01 15:55:39.471722] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
00:38:00.250 [2024-10-01 15:55:39.471788] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3403160 ] 00:38:00.250 [2024-10-01 15:55:39.505944] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:38:00.250 [2024-10-01 15:55:39.553103] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:00.250 [2024-10-01 15:55:39.587298] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:38:00.510 Running I/O for 1 seconds... 00:38:01.453 8882.00 IOPS, 34.70 MiB/s 00:38:01.453 Latency(us) 00:38:01.453 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:01.453 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:38:01.453 Verification LBA range: start 0x0 length 0x4000 00:38:01.453 Nvme1n1 : 1.02 8966.33 35.02 0.00 0.00 14217.98 3194.88 15073.28 00:38:01.453 =================================================================================================================== 00:38:01.453 Total : 8966.33 35.02 0.00 0.00 14217.98 3194.88 15073.28 00:38:01.714 15:55:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3403438 00:38:01.714 15:55:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:38:01.714 15:55:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:38:01.714 15:55:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:38:01.714 15:55:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # config=() 00:38:01.714 15:55:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # local 
subsystem config 00:38:01.714 15:55:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:38:01.714 15:55:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:38:01.714 { 00:38:01.714 "params": { 00:38:01.714 "name": "Nvme$subsystem", 00:38:01.714 "trtype": "$TEST_TRANSPORT", 00:38:01.714 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:01.714 "adrfam": "ipv4", 00:38:01.714 "trsvcid": "$NVMF_PORT", 00:38:01.714 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:01.714 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:01.714 "hdgst": ${hdgst:-false}, 00:38:01.714 "ddgst": ${ddgst:-false} 00:38:01.714 }, 00:38:01.714 "method": "bdev_nvme_attach_controller" 00:38:01.714 } 00:38:01.714 EOF 00:38:01.714 )") 00:38:01.714 15:55:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@578 -- # cat 00:38:01.714 15:55:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # jq . 00:38:01.714 15:55:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@581 -- # IFS=, 00:38:01.714 15:55:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:38:01.714 "params": { 00:38:01.714 "name": "Nvme1", 00:38:01.714 "trtype": "tcp", 00:38:01.714 "traddr": "10.0.0.2", 00:38:01.714 "adrfam": "ipv4", 00:38:01.714 "trsvcid": "4420", 00:38:01.714 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:01.714 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:01.714 "hdgst": false, 00:38:01.714 "ddgst": false 00:38:01.714 }, 00:38:01.714 "method": "bdev_nvme_attach_controller" 00:38:01.714 }' 00:38:01.714 [2024-10-01 15:55:40.959248] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
00:38:01.714 [2024-10-01 15:55:40.959303] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3403438 ] 00:38:01.714 [2024-10-01 15:55:40.989677] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:38:01.714 [2024-10-01 15:55:41.036683] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:01.714 [2024-10-01 15:55:41.066767] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:38:01.974 Running I/O for 15 seconds... 00:38:04.804 10276.00 IOPS, 40.14 MiB/s 10710.00 IOPS, 41.84 MiB/s 15:55:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3403060 00:38:04.804 15:55:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:38:04.804 [2024-10-01 15:55:43.924032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:100552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:04.804 [2024-10-01 15:55:43.924075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:04.804 [2024-10-01 15:55:43.924095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:100560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:04.804 [2024-10-01 15:55:43.924106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:04.804 [2024-10-01 15:55:43.924117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:100568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:04.804 [2024-10-01 15:55:43.924125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:04.804 [2024-10-01 15:55:43.924135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:100576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:04.804 [2024-10-01 15:55:43.924143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:04.804 [2024-10-01 15:55:43.924158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:100584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:04.804 [2024-10-01 15:55:43.924167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:04.804 [2024-10-01 15:55:43.924178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:100592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:04.804 [2024-10-01 15:55:43.924188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:04.804 [2024-10-01 15:55:43.924201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:100600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:04.804 [2024-10-01 15:55:43.924209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:04.804 [2024-10-01 15:55:43.924220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:100608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:04.804 [2024-10-01 15:55:43.924229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:04.804 [2024-10-01 15:55:43.924240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:100616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:04.804 
[2024-10-01 15:55:43.924250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:04.804 [2024-10-01 15:55:43.924262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:100624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:04.804 [2024-10-01 15:55:43.924272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:04.804
[... 112 similar nvme_io_qpair_print_command / spdk_nvme_print_completion pairs omitted: sequential WRITE commands (sqid:1, nsid:1, len:8, lba 100632 through 101520) each completed with ABORTED - SQ DELETION (00/08) ...]
[2024-10-01 15:55:43.926230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:101528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:04.807 [2024-10-01 15:55:43.926238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:04.807 [2024-10-01 15:55:43.926248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:101536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:04.807 [2024-10-01 15:55:43.926255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:04.807 [2024-10-01 15:55:43.926265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:101544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:04.807 [2024-10-01 15:55:43.926273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:04.807 [2024-10-01 15:55:43.926283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:101552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:04.807 [2024-10-01 15:55:43.926290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:04.807 [2024-10-01 15:55:43.926300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:101560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:04.807 [2024-10-01 15:55:43.926307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:04.807 [2024-10-01 15:55:43.926316] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f557c0 is same with the state(6) to be set 00:38:04.807 [2024-10-01 15:55:43.926325] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:04.807 [2024-10-01 15:55:43.926331] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:04.807 [2024-10-01 15:55:43.926338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101568 len:8 PRP1 0x0 PRP2 0x0 00:38:04.807 [2024-10-01 15:55:43.926347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:04.807 [2024-10-01 15:55:43.926385] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f557c0 was disconnected and freed. reset controller. 00:38:04.807 [2024-10-01 15:55:43.929865] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.808 [2024-10-01 15:55:43.929921] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:04.808 [2024-10-01 15:55:43.930592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.808 [2024-10-01 15:55:43.930609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:04.808 [2024-10-01 15:55:43.930618] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:04.808 [2024-10-01 15:55:43.930835] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:04.808 [2024-10-01 15:55:43.931056] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.808 [2024-10-01 15:55:43.931065] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.808 [2024-10-01 15:55:43.931074] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.808 [2024-10-01 15:55:43.934561] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:04.808 [2024-10-01 15:55:43.944016] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.808 [2024-10-01 15:55:43.944576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.808 [2024-10-01 15:55:43.944592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:04.808 [2024-10-01 15:55:43.944601] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:04.808 [2024-10-01 15:55:43.944816] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:04.808 [2024-10-01 15:55:43.945039] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.808 [2024-10-01 15:55:43.945048] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.808 [2024-10-01 15:55:43.945056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.808 [2024-10-01 15:55:43.948541] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.808 [2024-10-01 15:55:43.957809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.808 [2024-10-01 15:55:43.958459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.808 [2024-10-01 15:55:43.958499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:04.808 [2024-10-01 15:55:43.958510] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:04.808 [2024-10-01 15:55:43.958748] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:04.808 [2024-10-01 15:55:43.958977] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.808 [2024-10-01 15:55:43.958987] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.808 [2024-10-01 15:55:43.958995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.808 [2024-10-01 15:55:43.962485] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.808 [2024-10-01 15:55:43.971535] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.808 [2024-10-01 15:55:43.972189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.808 [2024-10-01 15:55:43.972227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:04.808 [2024-10-01 15:55:43.972239] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:04.808 [2024-10-01 15:55:43.972479] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:04.808 [2024-10-01 15:55:43.972699] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.808 [2024-10-01 15:55:43.972707] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.808 [2024-10-01 15:55:43.972715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.808 [2024-10-01 15:55:43.976224] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.808 [2024-10-01 15:55:43.985277] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.808 [2024-10-01 15:55:43.985922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.808 [2024-10-01 15:55:43.985962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:04.808 [2024-10-01 15:55:43.985973] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:04.808 [2024-10-01 15:55:43.986216] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:04.808 [2024-10-01 15:55:43.986436] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.808 [2024-10-01 15:55:43.986445] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.808 [2024-10-01 15:55:43.986453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.808 [2024-10-01 15:55:43.989956] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.808 [2024-10-01 15:55:43.999204] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.808 [2024-10-01 15:55:43.999795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.808 [2024-10-01 15:55:43.999814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:04.808 [2024-10-01 15:55:43.999823] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:04.808 [2024-10-01 15:55:44.000047] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:04.808 [2024-10-01 15:55:44.000263] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.808 [2024-10-01 15:55:44.000271] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.808 [2024-10-01 15:55:44.000279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.808 [2024-10-01 15:55:44.003764] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.808 [2024-10-01 15:55:44.013006] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.808 [2024-10-01 15:55:44.013567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.808 [2024-10-01 15:55:44.013584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:04.808 [2024-10-01 15:55:44.013592] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:04.808 [2024-10-01 15:55:44.013807] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:04.808 [2024-10-01 15:55:44.014030] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.808 [2024-10-01 15:55:44.014039] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.808 [2024-10-01 15:55:44.014050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.808 [2024-10-01 15:55:44.017536] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.808 [2024-10-01 15:55:44.026775] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.808 [2024-10-01 15:55:44.027315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.808 [2024-10-01 15:55:44.027356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:04.808 [2024-10-01 15:55:44.027367] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:04.808 [2024-10-01 15:55:44.027604] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:04.808 [2024-10-01 15:55:44.027824] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.808 [2024-10-01 15:55:44.027833] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.808 [2024-10-01 15:55:44.027840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.808 [2024-10-01 15:55:44.031342] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.808 [2024-10-01 15:55:44.040591] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.808 [2024-10-01 15:55:44.041193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.808 [2024-10-01 15:55:44.041235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:04.808 [2024-10-01 15:55:44.041247] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:04.808 [2024-10-01 15:55:44.041484] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:04.808 [2024-10-01 15:55:44.041704] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.808 [2024-10-01 15:55:44.041713] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.808 [2024-10-01 15:55:44.041721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.808 [2024-10-01 15:55:44.045224] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.808 [2024-10-01 15:55:44.054483] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.808 [2024-10-01 15:55:44.055120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.808 [2024-10-01 15:55:44.055163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:04.808 [2024-10-01 15:55:44.055175] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:04.808 [2024-10-01 15:55:44.055413] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:04.808 [2024-10-01 15:55:44.055632] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.808 [2024-10-01 15:55:44.055641] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.808 [2024-10-01 15:55:44.055649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.808 [2024-10-01 15:55:44.059156] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.808 [2024-10-01 15:55:44.068420] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.808 [2024-10-01 15:55:44.069115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.808 [2024-10-01 15:55:44.069158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:04.809 [2024-10-01 15:55:44.069168] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:04.809 [2024-10-01 15:55:44.069407] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:04.809 [2024-10-01 15:55:44.069627] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.809 [2024-10-01 15:55:44.069636] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.809 [2024-10-01 15:55:44.069644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.809 [2024-10-01 15:55:44.073150] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.809 [2024-10-01 15:55:44.082198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.809 [2024-10-01 15:55:44.082834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.809 [2024-10-01 15:55:44.082878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:04.809 [2024-10-01 15:55:44.082891] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:04.809 [2024-10-01 15:55:44.083140] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:04.809 [2024-10-01 15:55:44.083360] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.809 [2024-10-01 15:55:44.083369] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.809 [2024-10-01 15:55:44.083377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.809 [2024-10-01 15:55:44.086870] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.809 [2024-10-01 15:55:44.096123] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.809 [2024-10-01 15:55:44.096785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.809 [2024-10-01 15:55:44.096831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:04.809 [2024-10-01 15:55:44.096843] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:04.809 [2024-10-01 15:55:44.097095] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:04.809 [2024-10-01 15:55:44.097316] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.809 [2024-10-01 15:55:44.097325] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.809 [2024-10-01 15:55:44.097332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.809 [2024-10-01 15:55:44.100826] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.809 [2024-10-01 15:55:44.109878] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.809 [2024-10-01 15:55:44.110557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.809 [2024-10-01 15:55:44.110605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:04.809 [2024-10-01 15:55:44.110616] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:04.809 [2024-10-01 15:55:44.110862] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:04.809 [2024-10-01 15:55:44.111096] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.809 [2024-10-01 15:55:44.111105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.809 [2024-10-01 15:55:44.111113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.809 [2024-10-01 15:55:44.114611] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.809 [2024-10-01 15:55:44.123661] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.809 [2024-10-01 15:55:44.124306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.809 [2024-10-01 15:55:44.124355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:04.809 [2024-10-01 15:55:44.124366] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:04.809 [2024-10-01 15:55:44.124609] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:04.809 [2024-10-01 15:55:44.124829] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.809 [2024-10-01 15:55:44.124839] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.809 [2024-10-01 15:55:44.124846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.809 [2024-10-01 15:55:44.128357] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.809 [2024-10-01 15:55:44.137408] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.809 [2024-10-01 15:55:44.138104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.809 [2024-10-01 15:55:44.138155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:04.809 [2024-10-01 15:55:44.138167] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:04.809 [2024-10-01 15:55:44.138410] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:04.809 [2024-10-01 15:55:44.138632] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.809 [2024-10-01 15:55:44.138641] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.809 [2024-10-01 15:55:44.138650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.809 [2024-10-01 15:55:44.142166] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.809 [2024-10-01 15:55:44.151226] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.809 [2024-10-01 15:55:44.151939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.809 [2024-10-01 15:55:44.151994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:04.809 [2024-10-01 15:55:44.152007] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:04.809 [2024-10-01 15:55:44.152255] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:04.809 [2024-10-01 15:55:44.152477] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.809 [2024-10-01 15:55:44.152487] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.809 [2024-10-01 15:55:44.152506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.809 [2024-10-01 15:55:44.156048] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.809 [2024-10-01 15:55:44.165123] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.809 [2024-10-01 15:55:44.165827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.809 [2024-10-01 15:55:44.165892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:04.809 [2024-10-01 15:55:44.165919] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:04.809 [2024-10-01 15:55:44.166171] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:04.809 [2024-10-01 15:55:44.166394] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.809 [2024-10-01 15:55:44.166403] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.809 [2024-10-01 15:55:44.166412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.809 [2024-10-01 15:55:44.169921] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.809 [2024-10-01 15:55:44.178980] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.810 [2024-10-01 15:55:44.179659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.810 [2024-10-01 15:55:44.179722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:04.810 [2024-10-01 15:55:44.179736] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:04.810 [2024-10-01 15:55:44.180005] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:04.810 [2024-10-01 15:55:44.180229] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.810 [2024-10-01 15:55:44.180240] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.810 [2024-10-01 15:55:44.180249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.810 [2024-10-01 15:55:44.183769] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.810 [2024-10-01 15:55:44.192862] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.810 [2024-10-01 15:55:44.193503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.810 [2024-10-01 15:55:44.193532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:04.810 [2024-10-01 15:55:44.193542] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:04.810 [2024-10-01 15:55:44.193763] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:04.810 [2024-10-01 15:55:44.193992] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.810 [2024-10-01 15:55:44.194003] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.810 [2024-10-01 15:55:44.194011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.810 [2024-10-01 15:55:44.197526] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.810 [2024-10-01 15:55:44.206609] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.810 [2024-10-01 15:55:44.207164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.810 [2024-10-01 15:55:44.207194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:04.810 [2024-10-01 15:55:44.207202] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:04.810 [2024-10-01 15:55:44.207420] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:04.810 [2024-10-01 15:55:44.207637] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.810 [2024-10-01 15:55:44.207647] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.810 [2024-10-01 15:55:44.207654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.810 [2024-10-01 15:55:44.211167] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.810 [2024-10-01 15:55:44.220452] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.810 [2024-10-01 15:55:44.221105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.810 [2024-10-01 15:55:44.221157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:04.810 [2024-10-01 15:55:44.221169] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:04.810 [2024-10-01 15:55:44.221413] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:04.810 [2024-10-01 15:55:44.221634] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.810 [2024-10-01 15:55:44.221643] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.810 [2024-10-01 15:55:44.221651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.810 [2024-10-01 15:55:44.225172] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.810 [2024-10-01 15:55:44.234254] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.810 [2024-10-01 15:55:44.234948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.810 [2024-10-01 15:55:44.235003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:04.810 [2024-10-01 15:55:44.235015] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:04.810 [2024-10-01 15:55:44.235262] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:04.810 [2024-10-01 15:55:44.235483] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.810 [2024-10-01 15:55:44.235494] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.810 [2024-10-01 15:55:44.235502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.810 [2024-10-01 15:55:44.239020] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.810 [2024-10-01 15:55:44.248089] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.810 [2024-10-01 15:55:44.248647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.810 [2024-10-01 15:55:44.248686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:04.810 [2024-10-01 15:55:44.248698] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:04.810 [2024-10-01 15:55:44.248944] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:04.810 [2024-10-01 15:55:44.249168] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.810 [2024-10-01 15:55:44.249178] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.810 [2024-10-01 15:55:44.249185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.810 [2024-10-01 15:55:44.252681] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.072 9651.00 IOPS, 37.70 MiB/s [2024-10-01 15:55:44.263627] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.072 [2024-10-01 15:55:44.264258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.072 [2024-10-01 15:55:44.264298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.072 [2024-10-01 15:55:44.264308] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.072 [2024-10-01 15:55:44.264545] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.072 [2024-10-01 15:55:44.264764] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.072 [2024-10-01 15:55:44.264772] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.072 [2024-10-01 15:55:44.264781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.072 [2024-10-01 15:55:44.268293] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.072 [2024-10-01 15:55:44.277556] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.072 [2024-10-01 15:55:44.278236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.072 [2024-10-01 15:55:44.278277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.072 [2024-10-01 15:55:44.278288] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.072 [2024-10-01 15:55:44.278525] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.072 [2024-10-01 15:55:44.278744] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.072 [2024-10-01 15:55:44.278753] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.072 [2024-10-01 15:55:44.278760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.072 [2024-10-01 15:55:44.282261] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.072 [2024-10-01 15:55:44.291337] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.072 [2024-10-01 15:55:44.294909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.072 [2024-10-01 15:55:44.294946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.072 [2024-10-01 15:55:44.294958] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.072 [2024-10-01 15:55:44.295196] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.072 [2024-10-01 15:55:44.295415] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.072 [2024-10-01 15:55:44.295424] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.072 [2024-10-01 15:55:44.295431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.072 [2024-10-01 15:55:44.298943] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.072 [2024-10-01 15:55:44.305122] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.072 [2024-10-01 15:55:44.305664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.072 [2024-10-01 15:55:44.305704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.073 [2024-10-01 15:55:44.305717] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.073 [2024-10-01 15:55:44.305966] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.073 [2024-10-01 15:55:44.306187] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.073 [2024-10-01 15:55:44.306195] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.073 [2024-10-01 15:55:44.306203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.073 [2024-10-01 15:55:44.309698] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.073 [2024-10-01 15:55:44.318974] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.073 [2024-10-01 15:55:44.319648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.073 [2024-10-01 15:55:44.319688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.073 [2024-10-01 15:55:44.319699] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.073 [2024-10-01 15:55:44.319945] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.073 [2024-10-01 15:55:44.320165] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.073 [2024-10-01 15:55:44.320174] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.073 [2024-10-01 15:55:44.320182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.073 [2024-10-01 15:55:44.323679] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.073 [2024-10-01 15:55:44.332749] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.073 [2024-10-01 15:55:44.333348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.073 [2024-10-01 15:55:44.333369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.073 [2024-10-01 15:55:44.333377] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.073 [2024-10-01 15:55:44.333593] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.073 [2024-10-01 15:55:44.333809] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.073 [2024-10-01 15:55:44.333818] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.073 [2024-10-01 15:55:44.333825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.073 [2024-10-01 15:55:44.337322] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.073 [2024-10-01 15:55:44.346587] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.073 [2024-10-01 15:55:44.347141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.073 [2024-10-01 15:55:44.347158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.073 [2024-10-01 15:55:44.347171] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.073 [2024-10-01 15:55:44.347387] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.073 [2024-10-01 15:55:44.347603] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.073 [2024-10-01 15:55:44.347611] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.073 [2024-10-01 15:55:44.347619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.073 [2024-10-01 15:55:44.351117] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.073 [2024-10-01 15:55:44.360396] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.073 [2024-10-01 15:55:44.361002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.073 [2024-10-01 15:55:44.361044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.073 [2024-10-01 15:55:44.361056] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.073 [2024-10-01 15:55:44.361296] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.073 [2024-10-01 15:55:44.361515] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.073 [2024-10-01 15:55:44.361524] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.073 [2024-10-01 15:55:44.361532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.073 [2024-10-01 15:55:44.365044] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.073 [2024-10-01 15:55:44.374301] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.073 [2024-10-01 15:55:44.374885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.073 [2024-10-01 15:55:44.374913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.073 [2024-10-01 15:55:44.374922] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.073 [2024-10-01 15:55:44.375138] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.073 [2024-10-01 15:55:44.375354] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.073 [2024-10-01 15:55:44.375363] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.073 [2024-10-01 15:55:44.375370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.073 [2024-10-01 15:55:44.378862] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.073 [2024-10-01 15:55:44.388114] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.073 [2024-10-01 15:55:44.388741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.073 [2024-10-01 15:55:44.388784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.073 [2024-10-01 15:55:44.388796] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.073 [2024-10-01 15:55:44.389044] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.073 [2024-10-01 15:55:44.389265] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.073 [2024-10-01 15:55:44.389288] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.073 [2024-10-01 15:55:44.389296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.073 [2024-10-01 15:55:44.392792] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.073 [2024-10-01 15:55:44.401844] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.073 [2024-10-01 15:55:44.402395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.073 [2024-10-01 15:55:44.402438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.073 [2024-10-01 15:55:44.402449] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.073 [2024-10-01 15:55:44.402689] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.073 [2024-10-01 15:55:44.402921] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.073 [2024-10-01 15:55:44.402931] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.073 [2024-10-01 15:55:44.402939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.073 [2024-10-01 15:55:44.406439] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.073 [2024-10-01 15:55:44.415699] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.073 [2024-10-01 15:55:44.416357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.073 [2024-10-01 15:55:44.416401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.073 [2024-10-01 15:55:44.416413] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.073 [2024-10-01 15:55:44.416651] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.073 [2024-10-01 15:55:44.416871] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.073 [2024-10-01 15:55:44.416880] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.073 [2024-10-01 15:55:44.416888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.073 [2024-10-01 15:55:44.420395] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.073 [2024-10-01 15:55:44.429446] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.073 [2024-10-01 15:55:44.430013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.073 [2024-10-01 15:55:44.430056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.073 [2024-10-01 15:55:44.430069] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.073 [2024-10-01 15:55:44.430308] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.073 [2024-10-01 15:55:44.430528] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.073 [2024-10-01 15:55:44.430538] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.073 [2024-10-01 15:55:44.430546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.073 [2024-10-01 15:55:44.434052] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.073 [2024-10-01 15:55:44.443316] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.073 [2024-10-01 15:55:44.443878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.073 [2024-10-01 15:55:44.443927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.073 [2024-10-01 15:55:44.443938] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.073 [2024-10-01 15:55:44.444175] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.074 [2024-10-01 15:55:44.444394] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.074 [2024-10-01 15:55:44.444403] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.074 [2024-10-01 15:55:44.444410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.074 [2024-10-01 15:55:44.447906] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.074 [2024-10-01 15:55:44.457171] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.074 [2024-10-01 15:55:44.457733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.074 [2024-10-01 15:55:44.457753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.074 [2024-10-01 15:55:44.457761] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.074 [2024-10-01 15:55:44.457989] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.074 [2024-10-01 15:55:44.458207] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.074 [2024-10-01 15:55:44.458215] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.074 [2024-10-01 15:55:44.458223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.074 [2024-10-01 15:55:44.461710] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.074 [2024-10-01 15:55:44.470965] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.074 [2024-10-01 15:55:44.471534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.074 [2024-10-01 15:55:44.471551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.074 [2024-10-01 15:55:44.471558] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.074 [2024-10-01 15:55:44.471774] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.074 [2024-10-01 15:55:44.471996] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.074 [2024-10-01 15:55:44.472006] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.074 [2024-10-01 15:55:44.472013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.074 [2024-10-01 15:55:44.475500] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.074 [2024-10-01 15:55:44.484743] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.074 [2024-10-01 15:55:44.485401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.074 [2024-10-01 15:55:44.485440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.074 [2024-10-01 15:55:44.485455] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.074 [2024-10-01 15:55:44.485690] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.074 [2024-10-01 15:55:44.485919] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.074 [2024-10-01 15:55:44.485929] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.074 [2024-10-01 15:55:44.485936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.074 [2024-10-01 15:55:44.489632] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.074 [2024-10-01 15:55:44.498484] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.074 [2024-10-01 15:55:44.499158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.074 [2024-10-01 15:55:44.499196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.074 [2024-10-01 15:55:44.499207] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.074 [2024-10-01 15:55:44.499442] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.074 [2024-10-01 15:55:44.499661] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.074 [2024-10-01 15:55:44.499670] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.074 [2024-10-01 15:55:44.499677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.074 [2024-10-01 15:55:44.503174] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.074 [2024-10-01 15:55:44.512222] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.074 [2024-10-01 15:55:44.512774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.074 [2024-10-01 15:55:44.512813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.074 [2024-10-01 15:55:44.512825] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.074 [2024-10-01 15:55:44.513069] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.074 [2024-10-01 15:55:44.513290] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.074 [2024-10-01 15:55:44.513300] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.074 [2024-10-01 15:55:44.513308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.074 [2024-10-01 15:55:44.516796] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.336 [2024-10-01 15:55:44.526050] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.336 [2024-10-01 15:55:44.526721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.336 [2024-10-01 15:55:44.526759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.336 [2024-10-01 15:55:44.526770] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.336 [2024-10-01 15:55:44.527013] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.336 [2024-10-01 15:55:44.527233] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.336 [2024-10-01 15:55:44.527242] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.336 [2024-10-01 15:55:44.527255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.336 [2024-10-01 15:55:44.530746] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.336 [2024-10-01 15:55:44.539794] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.336 [2024-10-01 15:55:44.540426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.336 [2024-10-01 15:55:44.540465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.336 [2024-10-01 15:55:44.540476] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.336 [2024-10-01 15:55:44.540712] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.336 [2024-10-01 15:55:44.540939] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.336 [2024-10-01 15:55:44.540948] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.336 [2024-10-01 15:55:44.540956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.336 [2024-10-01 15:55:44.544450] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.336 [2024-10-01 15:55:44.553703] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.336 [2024-10-01 15:55:44.554335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.336 [2024-10-01 15:55:44.554374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.336 [2024-10-01 15:55:44.554385] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.336 [2024-10-01 15:55:44.554620] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.336 [2024-10-01 15:55:44.554839] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.336 [2024-10-01 15:55:44.554848] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.336 [2024-10-01 15:55:44.554855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.336 [2024-10-01 15:55:44.558362] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.336 [2024-10-01 15:55:44.567628] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.336 [2024-10-01 15:55:44.568266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.336 [2024-10-01 15:55:44.568305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.336 [2024-10-01 15:55:44.568316] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.336 [2024-10-01 15:55:44.568551] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.336 [2024-10-01 15:55:44.568770] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.336 [2024-10-01 15:55:44.568779] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.336 [2024-10-01 15:55:44.568786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.336 [2024-10-01 15:55:44.572285] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.336 [2024-10-01 15:55:44.581535] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.336 [2024-10-01 15:55:44.582194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.336 [2024-10-01 15:55:44.582233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.336 [2024-10-01 15:55:44.582244] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.336 [2024-10-01 15:55:44.582479] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.336 [2024-10-01 15:55:44.582697] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.336 [2024-10-01 15:55:44.582706] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.336 [2024-10-01 15:55:44.582713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.336 [2024-10-01 15:55:44.586209] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.336 [2024-10-01 15:55:44.595458] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.336 [2024-10-01 15:55:44.596186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.336 [2024-10-01 15:55:44.596224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.336 [2024-10-01 15:55:44.596235] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.337 [2024-10-01 15:55:44.596470] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.337 [2024-10-01 15:55:44.596689] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.337 [2024-10-01 15:55:44.596697] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.337 [2024-10-01 15:55:44.596705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.337 [2024-10-01 15:55:44.600203] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.337 [2024-10-01 15:55:44.609246] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.337 [2024-10-01 15:55:44.609808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.337 [2024-10-01 15:55:44.609847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.337 [2024-10-01 15:55:44.609859] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.337 [2024-10-01 15:55:44.610106] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.337 [2024-10-01 15:55:44.610326] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.337 [2024-10-01 15:55:44.610334] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.337 [2024-10-01 15:55:44.610342] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.337 [2024-10-01 15:55:44.613830] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.337 [2024-10-01 15:55:44.623081] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.337 [2024-10-01 15:55:44.623661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.337 [2024-10-01 15:55:44.623680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.337 [2024-10-01 15:55:44.623688] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.337 [2024-10-01 15:55:44.623915] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.337 [2024-10-01 15:55:44.624132] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.337 [2024-10-01 15:55:44.624142] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.337 [2024-10-01 15:55:44.624153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.337 [2024-10-01 15:55:44.627677] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.337 [2024-10-01 15:55:44.636958] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.337 [2024-10-01 15:55:44.637622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.337 [2024-10-01 15:55:44.637662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.337 [2024-10-01 15:55:44.637677] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.337 [2024-10-01 15:55:44.637950] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.337 [2024-10-01 15:55:44.638184] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.337 [2024-10-01 15:55:44.638196] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.337 [2024-10-01 15:55:44.638208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.337 [2024-10-01 15:55:44.641730] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.337 [2024-10-01 15:55:44.650805] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.337 [2024-10-01 15:55:44.651494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.337 [2024-10-01 15:55:44.651534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.337 [2024-10-01 15:55:44.651549] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.337 [2024-10-01 15:55:44.651813] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.337 [2024-10-01 15:55:44.652055] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.337 [2024-10-01 15:55:44.652068] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.337 [2024-10-01 15:55:44.652079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.337 [2024-10-01 15:55:44.655598] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.337 [2024-10-01 15:55:44.664682] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.337 [2024-10-01 15:55:44.665283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.337 [2024-10-01 15:55:44.665305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.337 [2024-10-01 15:55:44.665317] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.337 [2024-10-01 15:55:44.665561] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.337 [2024-10-01 15:55:44.665791] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.337 [2024-10-01 15:55:44.665802] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.337 [2024-10-01 15:55:44.665819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.337 [2024-10-01 15:55:44.669342] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.337 [2024-10-01 15:55:44.678619] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.337 [2024-10-01 15:55:44.679267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.337 [2024-10-01 15:55:44.679307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.337 [2024-10-01 15:55:44.679322] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.337 [2024-10-01 15:55:44.679587] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.337 [2024-10-01 15:55:44.679820] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.337 [2024-10-01 15:55:44.679832] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.337 [2024-10-01 15:55:44.679843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.337 [2024-10-01 15:55:44.683379] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.337 [2024-10-01 15:55:44.692466] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.337 [2024-10-01 15:55:44.693469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.337 [2024-10-01 15:55:44.693494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.337 [2024-10-01 15:55:44.693506] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.337 [2024-10-01 15:55:44.693760] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.337 [2024-10-01 15:55:44.693996] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.337 [2024-10-01 15:55:44.694008] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.337 [2024-10-01 15:55:44.694020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.337 [2024-10-01 15:55:44.697543] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.337 [2024-10-01 15:55:44.706205] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.337 [2024-10-01 15:55:44.706905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.337 [2024-10-01 15:55:44.706944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.337 [2024-10-01 15:55:44.706959] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.337 [2024-10-01 15:55:44.707223] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.337 [2024-10-01 15:55:44.707457] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.337 [2024-10-01 15:55:44.707469] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.337 [2024-10-01 15:55:44.707480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.337 [2024-10-01 15:55:44.711003] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.337 [2024-10-01 15:55:44.720076] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.337 [2024-10-01 15:55:44.720628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.337 [2024-10-01 15:55:44.720654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.337 [2024-10-01 15:55:44.720666] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.337 [2024-10-01 15:55:44.720917] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.337 [2024-10-01 15:55:44.721147] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.337 [2024-10-01 15:55:44.721158] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.337 [2024-10-01 15:55:44.721169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.337 [2024-10-01 15:55:44.724684] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.337 [2024-10-01 15:55:44.733958] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.337 [2024-10-01 15:55:44.734632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.337 [2024-10-01 15:55:44.734671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.337 [2024-10-01 15:55:44.734686] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.337 [2024-10-01 15:55:44.734958] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.337 [2024-10-01 15:55:44.735192] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.337 [2024-10-01 15:55:44.735203] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.338 [2024-10-01 15:55:44.735215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.338 [2024-10-01 15:55:44.738733] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.338 [2024-10-01 15:55:44.746659] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.338 [2024-10-01 15:55:44.747137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.338 [2024-10-01 15:55:44.747153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.338 [2024-10-01 15:55:44.747161] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.338 [2024-10-01 15:55:44.747329] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.338 [2024-10-01 15:55:44.747488] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.338 [2024-10-01 15:55:44.747495] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.338 [2024-10-01 15:55:44.747503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.338 [2024-10-01 15:55:44.749917] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.338 [2024-10-01 15:55:44.759250] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.338 [2024-10-01 15:55:44.759761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.338 [2024-10-01 15:55:44.759776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.338 [2024-10-01 15:55:44.759784] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.338 [2024-10-01 15:55:44.759956] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.338 [2024-10-01 15:55:44.760118] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.338 [2024-10-01 15:55:44.760126] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.338 [2024-10-01 15:55:44.760133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.338 [2024-10-01 15:55:44.762547] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.338 [2024-10-01 15:55:44.771874] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.338 [2024-10-01 15:55:44.772365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.338 [2024-10-01 15:55:44.772380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.338 [2024-10-01 15:55:44.772388] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.338 [2024-10-01 15:55:44.772555] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.338 [2024-10-01 15:55:44.772713] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.338 [2024-10-01 15:55:44.772720] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.338 [2024-10-01 15:55:44.772728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.338 [2024-10-01 15:55:44.775146] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.338 [2024-10-01 15:55:44.784465] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.338 [2024-10-01 15:55:44.784882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.338 [2024-10-01 15:55:44.784919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.338 [2024-10-01 15:55:44.784930] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.338 [2024-10-01 15:55:44.785120] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.338 [2024-10-01 15:55:44.785282] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.338 [2024-10-01 15:55:44.785290] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.338 [2024-10-01 15:55:44.785298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.338 [2024-10-01 15:55:44.787715] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.600 [2024-10-01 15:55:44.797044] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.600 [2024-10-01 15:55:44.797485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.600 [2024-10-01 15:55:44.797515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.600 [2024-10-01 15:55:44.797527] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.600 [2024-10-01 15:55:44.797711] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.600 [2024-10-01 15:55:44.797872] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.600 [2024-10-01 15:55:44.797880] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.600 [2024-10-01 15:55:44.797888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.600 [2024-10-01 15:55:44.800316] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.600 [2024-10-01 15:55:44.809642] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.600 [2024-10-01 15:55:44.810222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.600 [2024-10-01 15:55:44.810254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.600 [2024-10-01 15:55:44.810265] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.600 [2024-10-01 15:55:44.810449] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.600 [2024-10-01 15:55:44.810609] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.600 [2024-10-01 15:55:44.810617] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.600 [2024-10-01 15:55:44.810625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.600 [2024-10-01 15:55:44.813048] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.600 [2024-10-01 15:55:44.822233] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.600 [2024-10-01 15:55:44.822675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.600 [2024-10-01 15:55:44.822707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.600 [2024-10-01 15:55:44.822718] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.600 [2024-10-01 15:55:44.822912] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.600 [2024-10-01 15:55:44.823075] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.600 [2024-10-01 15:55:44.823083] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.600 [2024-10-01 15:55:44.823091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.600 [2024-10-01 15:55:44.825507] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.600 [2024-10-01 15:55:44.834835] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.600 [2024-10-01 15:55:44.835445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.600 [2024-10-01 15:55:44.835477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.600 [2024-10-01 15:55:44.835488] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.600 [2024-10-01 15:55:44.835676] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.600 [2024-10-01 15:55:44.835836] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.600 [2024-10-01 15:55:44.835844] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.600 [2024-10-01 15:55:44.835852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.600 [2024-10-01 15:55:44.838276] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.600 [2024-10-01 15:55:44.847465] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.600 [2024-10-01 15:55:44.847970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.600 [2024-10-01 15:55:44.848001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.600 [2024-10-01 15:55:44.848017] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.600 [2024-10-01 15:55:44.848205] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.600 [2024-10-01 15:55:44.848366] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.600 [2024-10-01 15:55:44.848373] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.600 [2024-10-01 15:55:44.848381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.600 [2024-10-01 15:55:44.850803] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.600 [2024-10-01 15:55:44.860141] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.600 [2024-10-01 15:55:44.860766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.600 [2024-10-01 15:55:44.860796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.600 [2024-10-01 15:55:44.860807] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.600 [2024-10-01 15:55:44.860998] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.601 [2024-10-01 15:55:44.861161] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.601 [2024-10-01 15:55:44.861169] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.601 [2024-10-01 15:55:44.861177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.601 [2024-10-01 15:55:44.863594] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.601 [2024-10-01 15:55:44.872784] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.601 [2024-10-01 15:55:44.873281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.601 [2024-10-01 15:55:44.873312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.601 [2024-10-01 15:55:44.873323] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.601 [2024-10-01 15:55:44.873508] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.601 [2024-10-01 15:55:44.873669] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.601 [2024-10-01 15:55:44.873677] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.601 [2024-10-01 15:55:44.873686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.601 [2024-10-01 15:55:44.876110] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.601 [2024-10-01 15:55:44.885439] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.601 [2024-10-01 15:55:44.885795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.601 [2024-10-01 15:55:44.885813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.601 [2024-10-01 15:55:44.885821] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.601 [2024-10-01 15:55:44.885994] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.601 [2024-10-01 15:55:44.886152] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.601 [2024-10-01 15:55:44.886163] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.601 [2024-10-01 15:55:44.886171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.601 [2024-10-01 15:55:44.888593] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.601 [2024-10-01 15:55:44.898057] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.601 [2024-10-01 15:55:44.898587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.601 [2024-10-01 15:55:44.898619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.601 [2024-10-01 15:55:44.898630] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.601 [2024-10-01 15:55:44.898814] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.601 [2024-10-01 15:55:44.898980] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.601 [2024-10-01 15:55:44.898989] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.601 [2024-10-01 15:55:44.898997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.601 [2024-10-01 15:55:44.901415] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.601 [2024-10-01 15:55:44.910739] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.601 [2024-10-01 15:55:44.911218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.601 [2024-10-01 15:55:44.911234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.601 [2024-10-01 15:55:44.911243] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.601 [2024-10-01 15:55:44.911410] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.601 [2024-10-01 15:55:44.911568] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.601 [2024-10-01 15:55:44.911575] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.601 [2024-10-01 15:55:44.911583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.601 [2024-10-01 15:55:44.914001] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.601 [2024-10-01 15:55:44.923320] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.601 [2024-10-01 15:55:44.923817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.601 [2024-10-01 15:55:44.923832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.601 [2024-10-01 15:55:44.923840] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.601 [2024-10-01 15:55:44.924014] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.601 [2024-10-01 15:55:44.924172] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.601 [2024-10-01 15:55:44.924179] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.601 [2024-10-01 15:55:44.924187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.601 [2024-10-01 15:55:44.926638] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.601 [2024-10-01 15:55:44.935983] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.601 [2024-10-01 15:55:44.936463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.601 [2024-10-01 15:55:44.936495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.601 [2024-10-01 15:55:44.936506] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.601 [2024-10-01 15:55:44.936694] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.601 [2024-10-01 15:55:44.936857] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.601 [2024-10-01 15:55:44.936866] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.601 [2024-10-01 15:55:44.936874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.601 [2024-10-01 15:55:44.939297] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.601 [2024-10-01 15:55:44.948619] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.601 [2024-10-01 15:55:44.949256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.601 [2024-10-01 15:55:44.949288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.601 [2024-10-01 15:55:44.949299] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.601 [2024-10-01 15:55:44.949484] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.601 [2024-10-01 15:55:44.949645] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.601 [2024-10-01 15:55:44.949653] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.601 [2024-10-01 15:55:44.949663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.601 [2024-10-01 15:55:44.952087] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.601 [2024-10-01 15:55:44.961240] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.601 [2024-10-01 15:55:44.961754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.601 [2024-10-01 15:55:44.961772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.601 [2024-10-01 15:55:44.961781] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.601 [2024-10-01 15:55:44.961956] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.601 [2024-10-01 15:55:44.962115] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.601 [2024-10-01 15:55:44.962123] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.601 [2024-10-01 15:55:44.962131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.601 [2024-10-01 15:55:44.964547] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.601 [2024-10-01 15:55:44.973873] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.601 [2024-10-01 15:55:44.974382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.601 [2024-10-01 15:55:44.974398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.601 [2024-10-01 15:55:44.974406] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.601 [2024-10-01 15:55:44.974576] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.601 [2024-10-01 15:55:44.974736] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.601 [2024-10-01 15:55:44.974743] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.601 [2024-10-01 15:55:44.974751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.601 [2024-10-01 15:55:44.977170] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.601 [2024-10-01 15:55:44.986487] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.601 [2024-10-01 15:55:44.987181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.601 [2024-10-01 15:55:44.987212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.601 [2024-10-01 15:55:44.987224] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.601 [2024-10-01 15:55:44.987408] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.601 [2024-10-01 15:55:44.987569] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.601 [2024-10-01 15:55:44.987577] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.601 [2024-10-01 15:55:44.987585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.602 [2024-10-01 15:55:44.990008] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.602 [2024-10-01 15:55:44.999191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.602 [2024-10-01 15:55:44.999804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.602 [2024-10-01 15:55:44.999835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.602 [2024-10-01 15:55:44.999846] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.602 [2024-10-01 15:55:45.000041] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.602 [2024-10-01 15:55:45.000202] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.602 [2024-10-01 15:55:45.000210] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.602 [2024-10-01 15:55:45.000219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.602 [2024-10-01 15:55:45.002635] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.602 [2024-10-01 15:55:45.011822] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.602 [2024-10-01 15:55:45.012324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.602 [2024-10-01 15:55:45.012340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.602 [2024-10-01 15:55:45.012348] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.602 [2024-10-01 15:55:45.012518] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.602 [2024-10-01 15:55:45.012677] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.602 [2024-10-01 15:55:45.012684] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.602 [2024-10-01 15:55:45.012696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.602 [2024-10-01 15:55:45.015113] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.602 [2024-10-01 15:55:45.024438] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.602 [2024-10-01 15:55:45.024996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.602 [2024-10-01 15:55:45.025027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.602 [2024-10-01 15:55:45.025039] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.602 [2024-10-01 15:55:45.025231] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.602 [2024-10-01 15:55:45.025392] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.602 [2024-10-01 15:55:45.025400] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.602 [2024-10-01 15:55:45.025408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.602 [2024-10-01 15:55:45.027831] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.602 [2024-10-01 15:55:45.037024] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.602 [2024-10-01 15:55:45.037580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.602 [2024-10-01 15:55:45.037611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.602 [2024-10-01 15:55:45.037622] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.602 [2024-10-01 15:55:45.037806] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.602 [2024-10-01 15:55:45.037974] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.602 [2024-10-01 15:55:45.037982] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.602 [2024-10-01 15:55:45.037991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.602 [2024-10-01 15:55:45.040408] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.602 [2024-10-01 15:55:45.049597] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.602 [2024-10-01 15:55:45.050060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.602 [2024-10-01 15:55:45.050076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.602 [2024-10-01 15:55:45.050084] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.602 [2024-10-01 15:55:45.050253] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.602 [2024-10-01 15:55:45.050411] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.602 [2024-10-01 15:55:45.050419] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.602 [2024-10-01 15:55:45.050427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.602 [2024-10-01 15:55:45.052839] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.866 [2024-10-01 15:55:45.062174] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.866 [2024-10-01 15:55:45.062632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.866 [2024-10-01 15:55:45.062646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.866 [2024-10-01 15:55:45.062654] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.866 [2024-10-01 15:55:45.062824] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.866 [2024-10-01 15:55:45.062986] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.866 [2024-10-01 15:55:45.062994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.866 [2024-10-01 15:55:45.063002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.866 [2024-10-01 15:55:45.065424] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.866 [2024-10-01 15:55:45.074752] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.866 [2024-10-01 15:55:45.075341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.866 [2024-10-01 15:55:45.075372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.866 [2024-10-01 15:55:45.075383] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.866 [2024-10-01 15:55:45.075570] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.866 [2024-10-01 15:55:45.075731] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.866 [2024-10-01 15:55:45.075739] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.866 [2024-10-01 15:55:45.075747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.866 [2024-10-01 15:55:45.078171] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.866 [2024-10-01 15:55:45.087357] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.866 [2024-10-01 15:55:45.087960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.866 [2024-10-01 15:55:45.087991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.866 [2024-10-01 15:55:45.088003] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.866 [2024-10-01 15:55:45.088190] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.866 [2024-10-01 15:55:45.088351] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.866 [2024-10-01 15:55:45.088358] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.866 [2024-10-01 15:55:45.088367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.866 [2024-10-01 15:55:45.090790] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.866 [2024-10-01 15:55:45.099977] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.866 [2024-10-01 15:55:45.100571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.866 [2024-10-01 15:55:45.100602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.866 [2024-10-01 15:55:45.100613] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.866 [2024-10-01 15:55:45.100802] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.866 [2024-10-01 15:55:45.100972] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.866 [2024-10-01 15:55:45.100981] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.866 [2024-10-01 15:55:45.100989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.866 [2024-10-01 15:55:45.103407] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.866 [2024-10-01 15:55:45.112594] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.866 [2024-10-01 15:55:45.113202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.866 [2024-10-01 15:55:45.113233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.866 [2024-10-01 15:55:45.113244] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.866 [2024-10-01 15:55:45.113428] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.866 [2024-10-01 15:55:45.113588] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.866 [2024-10-01 15:55:45.113596] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.866 [2024-10-01 15:55:45.113604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.866 [2024-10-01 15:55:45.116029] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.867 [2024-10-01 15:55:45.125218] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.867 [2024-10-01 15:55:45.125730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.867 [2024-10-01 15:55:45.125746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.867 [2024-10-01 15:55:45.125754] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.867 [2024-10-01 15:55:45.125929] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.867 [2024-10-01 15:55:45.126088] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.867 [2024-10-01 15:55:45.126095] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.867 [2024-10-01 15:55:45.126103] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.867 [2024-10-01 15:55:45.128516] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.867 [2024-10-01 15:55:45.137843] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.867 [2024-10-01 15:55:45.138347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.867 [2024-10-01 15:55:45.138363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.867 [2024-10-01 15:55:45.138370] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.867 [2024-10-01 15:55:45.138538] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.867 [2024-10-01 15:55:45.138697] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.867 [2024-10-01 15:55:45.138704] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.867 [2024-10-01 15:55:45.138716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.867 [2024-10-01 15:55:45.141133] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.867 [2024-10-01 15:55:45.150455] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.867 [2024-10-01 15:55:45.150939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.867 [2024-10-01 15:55:45.150961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.867 [2024-10-01 15:55:45.150970] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.867 [2024-10-01 15:55:45.151142] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.867 [2024-10-01 15:55:45.151301] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.867 [2024-10-01 15:55:45.151308] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.867 [2024-10-01 15:55:45.151317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.867 [2024-10-01 15:55:45.153733] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.867 [2024-10-01 15:55:45.163068] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.867 [2024-10-01 15:55:45.163678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.867 [2024-10-01 15:55:45.163709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.867 [2024-10-01 15:55:45.163720] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.867 [2024-10-01 15:55:45.163911] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.867 [2024-10-01 15:55:45.164072] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.867 [2024-10-01 15:55:45.164080] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.867 [2024-10-01 15:55:45.164088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.867 [2024-10-01 15:55:45.166516] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.867 [2024-10-01 15:55:45.175706] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.867 [2024-10-01 15:55:45.176279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.867 [2024-10-01 15:55:45.176310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.867 [2024-10-01 15:55:45.176321] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.867 [2024-10-01 15:55:45.176509] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.867 [2024-10-01 15:55:45.176670] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.867 [2024-10-01 15:55:45.176678] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.867 [2024-10-01 15:55:45.176687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.867 [2024-10-01 15:55:45.179110] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.867 [2024-10-01 15:55:45.188303] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.867 [2024-10-01 15:55:45.188773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.867 [2024-10-01 15:55:45.188793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.867 [2024-10-01 15:55:45.188801] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.867 [2024-10-01 15:55:45.188976] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.867 [2024-10-01 15:55:45.189136] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.867 [2024-10-01 15:55:45.189144] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.867 [2024-10-01 15:55:45.189153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.867 [2024-10-01 15:55:45.191569] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.867 [2024-10-01 15:55:45.200891] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.867 [2024-10-01 15:55:45.201403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.867 [2024-10-01 15:55:45.201418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.867 [2024-10-01 15:55:45.201426] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.867 [2024-10-01 15:55:45.201599] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.867 [2024-10-01 15:55:45.201759] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.867 [2024-10-01 15:55:45.201767] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.867 [2024-10-01 15:55:45.201775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.867 [2024-10-01 15:55:45.204196] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.867 [2024-10-01 15:55:45.213518] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.867 [2024-10-01 15:55:45.214131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.867 [2024-10-01 15:55:45.214162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.867 [2024-10-01 15:55:45.214173] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.867 [2024-10-01 15:55:45.214364] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.867 [2024-10-01 15:55:45.214529] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.867 [2024-10-01 15:55:45.214537] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.867 [2024-10-01 15:55:45.214546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.867 [2024-10-01 15:55:45.216968] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.867 [2024-10-01 15:55:45.226169] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.867 [2024-10-01 15:55:45.226671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.867 [2024-10-01 15:55:45.226687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.867 [2024-10-01 15:55:45.226695] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.867 [2024-10-01 15:55:45.226862] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.867 [2024-10-01 15:55:45.227031] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.867 [2024-10-01 15:55:45.227038] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.867 [2024-10-01 15:55:45.227046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.867 [2024-10-01 15:55:45.229461] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.867 [2024-10-01 15:55:45.238786] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.867 [2024-10-01 15:55:45.239339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.867 [2024-10-01 15:55:45.239371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.867 [2024-10-01 15:55:45.239382] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.867 [2024-10-01 15:55:45.239568] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.867 [2024-10-01 15:55:45.239729] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.867 [2024-10-01 15:55:45.239737] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.867 [2024-10-01 15:55:45.239745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.867 [2024-10-01 15:55:45.242171] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.867 [2024-10-01 15:55:45.251362] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.867 [2024-10-01 15:55:45.251971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.867 [2024-10-01 15:55:45.252002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.868 [2024-10-01 15:55:45.252014] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.868 [2024-10-01 15:55:45.252201] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.868 [2024-10-01 15:55:45.252362] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.868 [2024-10-01 15:55:45.252370] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.868 [2024-10-01 15:55:45.252378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.868 [2024-10-01 15:55:45.254800] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.868 7238.25 IOPS, 28.27 MiB/s [2024-10-01 15:55:45.265159] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.868 [2024-10-01 15:55:45.265768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.868 [2024-10-01 15:55:45.265799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.868 [2024-10-01 15:55:45.265809] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.868 [2024-10-01 15:55:45.266002] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.868 [2024-10-01 15:55:45.266164] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.868 [2024-10-01 15:55:45.266172] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.868 [2024-10-01 15:55:45.266180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.868 [2024-10-01 15:55:45.268598] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.868 [2024-10-01 15:55:45.277793] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.868 [2024-10-01 15:55:45.278290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.868 [2024-10-01 15:55:45.278306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.868 [2024-10-01 15:55:45.278314] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.868 [2024-10-01 15:55:45.278483] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.868 [2024-10-01 15:55:45.278642] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.868 [2024-10-01 15:55:45.278650] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.868 [2024-10-01 15:55:45.278658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.868 [2024-10-01 15:55:45.281074] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.868 [2024-10-01 15:55:45.290405] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.868 [2024-10-01 15:55:45.290984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.868 [2024-10-01 15:55:45.291016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.868 [2024-10-01 15:55:45.291027] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.868 [2024-10-01 15:55:45.291217] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.868 [2024-10-01 15:55:45.291378] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.868 [2024-10-01 15:55:45.291386] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.868 [2024-10-01 15:55:45.291394] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.868 [2024-10-01 15:55:45.293815] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.868 [2024-10-01 15:55:45.303019] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.868 [2024-10-01 15:55:45.303622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.868 [2024-10-01 15:55:45.303653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.868 [2024-10-01 15:55:45.303664] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.868 [2024-10-01 15:55:45.303849] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.868 [2024-10-01 15:55:45.304016] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.868 [2024-10-01 15:55:45.304024] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.868 [2024-10-01 15:55:45.304032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.868 [2024-10-01 15:55:45.306452] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.868 [2024-10-01 15:55:45.315639] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.868 [2024-10-01 15:55:45.316271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.868 [2024-10-01 15:55:45.316303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:05.868 [2024-10-01 15:55:45.316321] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:05.868 [2024-10-01 15:55:45.316506] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:05.868 [2024-10-01 15:55:45.316667] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.868 [2024-10-01 15:55:45.316674] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.868 [2024-10-01 15:55:45.316683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.131 [2024-10-01 15:55:45.319106] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.131 [2024-10-01 15:55:45.328290] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.131 [2024-10-01 15:55:45.328879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.131 [2024-10-01 15:55:45.328915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:06.131 [2024-10-01 15:55:45.328926] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:06.131 [2024-10-01 15:55:45.329114] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:06.131 [2024-10-01 15:55:45.329274] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.131 [2024-10-01 15:55:45.329282] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.131 [2024-10-01 15:55:45.329291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.131 [2024-10-01 15:55:45.331711] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.131 [2024-10-01 15:55:45.340901] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.131 [2024-10-01 15:55:45.341400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.131 [2024-10-01 15:55:45.341416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:06.131 [2024-10-01 15:55:45.341424] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:06.131 [2024-10-01 15:55:45.341592] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:06.131 [2024-10-01 15:55:45.341750] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.131 [2024-10-01 15:55:45.341757] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.131 [2024-10-01 15:55:45.341765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.131 [2024-10-01 15:55:45.344182] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.131 [2024-10-01 15:55:45.353503] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.131 [2024-10-01 15:55:45.354006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.131 [2024-10-01 15:55:45.354022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:06.131 [2024-10-01 15:55:45.354030] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:06.131 [2024-10-01 15:55:45.354198] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:06.131 [2024-10-01 15:55:45.354356] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.131 [2024-10-01 15:55:45.354367] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.131 [2024-10-01 15:55:45.354376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.131 [2024-10-01 15:55:45.356792] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.132 [2024-10-01 15:55:45.366132] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.132 [2024-10-01 15:55:45.366740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.132 [2024-10-01 15:55:45.366771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:06.132 [2024-10-01 15:55:45.366782] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:06.132 [2024-10-01 15:55:45.366972] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:06.132 [2024-10-01 15:55:45.367133] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.132 [2024-10-01 15:55:45.367141] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.132 [2024-10-01 15:55:45.367149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.132 [2024-10-01 15:55:45.369569] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.132 [2024-10-01 15:55:45.378755] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.132 [2024-10-01 15:55:45.379116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.132 [2024-10-01 15:55:45.379132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:06.132 [2024-10-01 15:55:45.379140] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:06.132 [2024-10-01 15:55:45.379309] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:06.132 [2024-10-01 15:55:45.379467] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.132 [2024-10-01 15:55:45.379474] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.132 [2024-10-01 15:55:45.379483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.132 [2024-10-01 15:55:45.381904] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.132 [2024-10-01 15:55:45.391369] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.132 [2024-10-01 15:55:45.391944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.132 [2024-10-01 15:55:45.391975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:06.132 [2024-10-01 15:55:45.391986] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:06.132 [2024-10-01 15:55:45.392178] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:06.132 [2024-10-01 15:55:45.392339] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.132 [2024-10-01 15:55:45.392346] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.132 [2024-10-01 15:55:45.392354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.132 [2024-10-01 15:55:45.394776] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.132 [2024-10-01 15:55:45.403971] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.132 [2024-10-01 15:55:45.404573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.132 [2024-10-01 15:55:45.404604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:06.132 [2024-10-01 15:55:45.404615] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:06.132 [2024-10-01 15:55:45.404802] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:06.132 [2024-10-01 15:55:45.404970] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.132 [2024-10-01 15:55:45.404978] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.132 [2024-10-01 15:55:45.404986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.132 [2024-10-01 15:55:45.407403] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.132 [2024-10-01 15:55:45.416585] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.132 [2024-10-01 15:55:45.417189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.132 [2024-10-01 15:55:45.417220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:06.132 [2024-10-01 15:55:45.417231] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:06.132 [2024-10-01 15:55:45.417418] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:06.132 [2024-10-01 15:55:45.417579] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.132 [2024-10-01 15:55:45.417587] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.132 [2024-10-01 15:55:45.417595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.132 [2024-10-01 15:55:45.420016] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.132 [2024-10-01 15:55:45.429203] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.132 [2024-10-01 15:55:45.429784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.132 [2024-10-01 15:55:45.429814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:06.132 [2024-10-01 15:55:45.429825] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:06.132 [2024-10-01 15:55:45.430019] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:06.132 [2024-10-01 15:55:45.430180] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.132 [2024-10-01 15:55:45.430188] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.132 [2024-10-01 15:55:45.430196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.132 [2024-10-01 15:55:45.432615] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.132 [2024-10-01 15:55:45.441794] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.132 [2024-10-01 15:55:45.442454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.132 [2024-10-01 15:55:45.442486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:06.132 [2024-10-01 15:55:45.442498] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:06.132 [2024-10-01 15:55:45.442690] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:06.132 [2024-10-01 15:55:45.442852] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.132 [2024-10-01 15:55:45.442861] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.132 [2024-10-01 15:55:45.442869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.132 [2024-10-01 15:55:45.445294] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.132 [2024-10-01 15:55:45.454480] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.132 [2024-10-01 15:55:45.455018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.132 [2024-10-01 15:55:45.455049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:06.132 [2024-10-01 15:55:45.455061] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:06.132 [2024-10-01 15:55:45.455254] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:06.132 [2024-10-01 15:55:45.455418] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.132 [2024-10-01 15:55:45.455426] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.132 [2024-10-01 15:55:45.455434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.132 [2024-10-01 15:55:45.457860] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.132 [2024-10-01 15:55:45.467068] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.132 [2024-10-01 15:55:45.467534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.132 [2024-10-01 15:55:45.467550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:06.132 [2024-10-01 15:55:45.467558] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:06.132 [2024-10-01 15:55:45.467726] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:06.132 [2024-10-01 15:55:45.467884] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.132 [2024-10-01 15:55:45.467891] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.132 [2024-10-01 15:55:45.467908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.132 [2024-10-01 15:55:45.470321] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.133 [2024-10-01 15:55:45.479634] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.133 [2024-10-01 15:55:45.480267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.133 [2024-10-01 15:55:45.480298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:06.133 [2024-10-01 15:55:45.480309] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:06.133 [2024-10-01 15:55:45.480495] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:06.133 [2024-10-01 15:55:45.480656] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.133 [2024-10-01 15:55:45.480664] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.133 [2024-10-01 15:55:45.480676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.133 [2024-10-01 15:55:45.483099] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.133 [2024-10-01 15:55:45.492309] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.133 [2024-10-01 15:55:45.492922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.133 [2024-10-01 15:55:45.492952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:06.133 [2024-10-01 15:55:45.492963] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:06.133 [2024-10-01 15:55:45.493152] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:06.133 [2024-10-01 15:55:45.493313] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.133 [2024-10-01 15:55:45.493321] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.133 [2024-10-01 15:55:45.493330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.133 [2024-10-01 15:55:45.495751] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.133 [2024-10-01 15:55:45.504931] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.133 [2024-10-01 15:55:45.505535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.133 [2024-10-01 15:55:45.505566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:06.133 [2024-10-01 15:55:45.505578] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:06.133 [2024-10-01 15:55:45.505762] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:06.133 [2024-10-01 15:55:45.505929] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.133 [2024-10-01 15:55:45.505937] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.133 [2024-10-01 15:55:45.505945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.133 [2024-10-01 15:55:45.508363] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.133 [2024-10-01 15:55:45.517537] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.133 [2024-10-01 15:55:45.518193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.133 [2024-10-01 15:55:45.518224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:06.133 [2024-10-01 15:55:45.518235] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:06.133 [2024-10-01 15:55:45.518420] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:06.133 [2024-10-01 15:55:45.518581] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.133 [2024-10-01 15:55:45.518588] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.133 [2024-10-01 15:55:45.518596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.133 [2024-10-01 15:55:45.521022] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.133 [2024-10-01 15:55:45.530203] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.133 [2024-10-01 15:55:45.530725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.133 [2024-10-01 15:55:45.530741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:06.133 [2024-10-01 15:55:45.530749] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:06.133 [2024-10-01 15:55:45.530922] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:06.133 [2024-10-01 15:55:45.531081] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.133 [2024-10-01 15:55:45.531089] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.133 [2024-10-01 15:55:45.531097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.133 [2024-10-01 15:55:45.533508] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.133 [2024-10-01 15:55:45.542818] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.133 [2024-10-01 15:55:45.543393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.133 [2024-10-01 15:55:45.543424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:06.133 [2024-10-01 15:55:45.543435] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:06.133 [2024-10-01 15:55:45.543620] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:06.133 [2024-10-01 15:55:45.543780] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.133 [2024-10-01 15:55:45.543788] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.133 [2024-10-01 15:55:45.543797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.133 [2024-10-01 15:55:45.546221] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.133 [2024-10-01 15:55:45.555402] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.133 [2024-10-01 15:55:45.555909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.133 [2024-10-01 15:55:45.555925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:06.133 [2024-10-01 15:55:45.555934] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:06.133 [2024-10-01 15:55:45.556101] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:06.133 [2024-10-01 15:55:45.556259] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.133 [2024-10-01 15:55:45.556267] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.133 [2024-10-01 15:55:45.556274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.133 [2024-10-01 15:55:45.558699] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.133 [2024-10-01 15:55:45.568026] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.133 [2024-10-01 15:55:45.568566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.133 [2024-10-01 15:55:45.568597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:06.133 [2024-10-01 15:55:45.568608] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:06.133 [2024-10-01 15:55:45.568795] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:06.133 [2024-10-01 15:55:45.568963] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.133 [2024-10-01 15:55:45.568971] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.133 [2024-10-01 15:55:45.568980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.133 [2024-10-01 15:55:45.571393] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.133 [2024-10-01 15:55:45.580717] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.133 [2024-10-01 15:55:45.581285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.133 [2024-10-01 15:55:45.581317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:06.133 [2024-10-01 15:55:45.581328] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:06.133 [2024-10-01 15:55:45.581514] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:06.133 [2024-10-01 15:55:45.581675] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.133 [2024-10-01 15:55:45.581683] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.133 [2024-10-01 15:55:45.581691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.133 [2024-10-01 15:55:45.584114] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.396 [2024-10-01 15:55:45.593297] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.396 [2024-10-01 15:55:45.593908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.396 [2024-10-01 15:55:45.593939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:06.396 [2024-10-01 15:55:45.593950] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:06.396 [2024-10-01 15:55:45.594135] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:06.396 [2024-10-01 15:55:45.594296] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.396 [2024-10-01 15:55:45.594304] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.396 [2024-10-01 15:55:45.594312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.396 [2024-10-01 15:55:45.596731] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.396 [2024-10-01 15:55:45.605919] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.396 [2024-10-01 15:55:45.606483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.396 [2024-10-01 15:55:45.606514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:06.396 [2024-10-01 15:55:45.606525] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:06.396 [2024-10-01 15:55:45.606711] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:06.396 [2024-10-01 15:55:45.606873] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.396 [2024-10-01 15:55:45.606881] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.396 [2024-10-01 15:55:45.606902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.396 [2024-10-01 15:55:45.609327] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.396 [2024-10-01 15:55:45.618511] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.396 [2024-10-01 15:55:45.619022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.396 [2024-10-01 15:55:45.619053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:06.396 [2024-10-01 15:55:45.619065] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:06.396 [2024-10-01 15:55:45.619257] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:06.396 [2024-10-01 15:55:45.619418] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.396 [2024-10-01 15:55:45.619426] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.396 [2024-10-01 15:55:45.619435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.396 [2024-10-01 15:55:45.621859] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.396 [2024-10-01 15:55:45.631190] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.396 [2024-10-01 15:55:45.631697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.396 [2024-10-01 15:55:45.631713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:06.396 [2024-10-01 15:55:45.631721] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:06.396 [2024-10-01 15:55:45.631888] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:06.396 [2024-10-01 15:55:45.632052] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.396 [2024-10-01 15:55:45.632060] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.396 [2024-10-01 15:55:45.632068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.396 [2024-10-01 15:55:45.634483] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.396 [2024-10-01 15:55:45.643797] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.396 [2024-10-01 15:55:45.644171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.396 [2024-10-01 15:55:45.644185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:06.396 [2024-10-01 15:55:45.644193] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:06.396 [2024-10-01 15:55:45.644360] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:06.396 [2024-10-01 15:55:45.644518] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.396 [2024-10-01 15:55:45.644525] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.396 [2024-10-01 15:55:45.644533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.396 [2024-10-01 15:55:45.646947] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.396 [2024-10-01 15:55:45.656406] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.397 [2024-10-01 15:55:45.656904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.397 [2024-10-01 15:55:45.656923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:06.397 [2024-10-01 15:55:45.656931] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:06.397 [2024-10-01 15:55:45.657101] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:06.397 [2024-10-01 15:55:45.657258] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.397 [2024-10-01 15:55:45.657266] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.397 [2024-10-01 15:55:45.657274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.397 [2024-10-01 15:55:45.659697] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.397 [2024-10-01 15:55:45.669027] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.397 [2024-10-01 15:55:45.669528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.397 [2024-10-01 15:55:45.669542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:06.397 [2024-10-01 15:55:45.669550] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:06.397 [2024-10-01 15:55:45.669717] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:06.397 [2024-10-01 15:55:45.669875] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.397 [2024-10-01 15:55:45.669882] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.397 [2024-10-01 15:55:45.669890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.397 [2024-10-01 15:55:45.672307] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.397 [2024-10-01 15:55:45.681625] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.397 [2024-10-01 15:55:45.682219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.397 [2024-10-01 15:55:45.682249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:06.397 [2024-10-01 15:55:45.682260] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:06.397 [2024-10-01 15:55:45.682445] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:06.397 [2024-10-01 15:55:45.682605] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.397 [2024-10-01 15:55:45.682613] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.397 [2024-10-01 15:55:45.682621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.397 [2024-10-01 15:55:45.685045] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.397 [2024-10-01 15:55:45.694224] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.397 [2024-10-01 15:55:45.694837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.397 [2024-10-01 15:55:45.694868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:06.397 [2024-10-01 15:55:45.694879] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:06.397 [2024-10-01 15:55:45.695077] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:06.397 [2024-10-01 15:55:45.695246] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.397 [2024-10-01 15:55:45.695255] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.397 [2024-10-01 15:55:45.695264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.397 [2024-10-01 15:55:45.697688] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.397 [2024-10-01 15:55:45.706872] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.397 [2024-10-01 15:55:45.707508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.397 [2024-10-01 15:55:45.707539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:06.397 [2024-10-01 15:55:45.707551] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:06.397 [2024-10-01 15:55:45.707736] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:06.397 [2024-10-01 15:55:45.707904] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.397 [2024-10-01 15:55:45.707912] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.397 [2024-10-01 15:55:45.707920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.397 [2024-10-01 15:55:45.710340] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.397 [2024-10-01 15:55:45.719519] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.397 [2024-10-01 15:55:45.720129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.397 [2024-10-01 15:55:45.720159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:06.397 [2024-10-01 15:55:45.720170] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:06.397 [2024-10-01 15:55:45.720355] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:06.397 [2024-10-01 15:55:45.720517] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.397 [2024-10-01 15:55:45.720525] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.397 [2024-10-01 15:55:45.720534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.397 [2024-10-01 15:55:45.722959] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.397 [2024-10-01 15:55:45.732138] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.397 [2024-10-01 15:55:45.732741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.397 [2024-10-01 15:55:45.732772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:06.397 [2024-10-01 15:55:45.732783] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:06.397 [2024-10-01 15:55:45.732976] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:06.397 [2024-10-01 15:55:45.733137] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.397 [2024-10-01 15:55:45.733145] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.397 [2024-10-01 15:55:45.733153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.397 [2024-10-01 15:55:45.735574] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.397 [2024-10-01 15:55:45.744759] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.397 [2024-10-01 15:55:45.745375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.397 [2024-10-01 15:55:45.745406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:06.397 [2024-10-01 15:55:45.745418] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:06.397 [2024-10-01 15:55:45.745602] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:06.397 [2024-10-01 15:55:45.745762] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.397 [2024-10-01 15:55:45.745770] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.397 [2024-10-01 15:55:45.745778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.397 [2024-10-01 15:55:45.748206] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.397 [2024-10-01 15:55:45.757395] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:06.397 [2024-10-01 15:55:45.758000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.397 [2024-10-01 15:55:45.758032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:06.397 [2024-10-01 15:55:45.758042] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:06.397 [2024-10-01 15:55:45.758227] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:06.397 [2024-10-01 15:55:45.758387] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:06.397 [2024-10-01 15:55:45.758395] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:06.397 [2024-10-01 15:55:45.758403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:06.397 [2024-10-01 15:55:45.760835] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:06.397 [2024-10-01 15:55:45.770025] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:06.397 [2024-10-01 15:55:45.770625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.398 [2024-10-01 15:55:45.770656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:06.398 [2024-10-01 15:55:45.770667] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:06.398 [2024-10-01 15:55:45.770852] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:06.398 [2024-10-01 15:55:45.771022] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:06.398 [2024-10-01 15:55:45.771030] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:06.398 [2024-10-01 15:55:45.771039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:06.398 [2024-10-01 15:55:45.773453] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:06.398 [2024-10-01 15:55:45.782625] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:06.398 [2024-10-01 15:55:45.783231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.398 [2024-10-01 15:55:45.783261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:06.398 [2024-10-01 15:55:45.783276] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:06.398 [2024-10-01 15:55:45.783460] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:06.398 [2024-10-01 15:55:45.783620] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:06.398 [2024-10-01 15:55:45.783628] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:06.398 [2024-10-01 15:55:45.783636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:06.398 [2024-10-01 15:55:45.786063] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:06.398 [2024-10-01 15:55:45.795243] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:06.398 [2024-10-01 15:55:45.795805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.398 [2024-10-01 15:55:45.795836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:06.398 [2024-10-01 15:55:45.795847] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:06.398 [2024-10-01 15:55:45.796041] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:06.398 [2024-10-01 15:55:45.796203] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:06.398 [2024-10-01 15:55:45.796211] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:06.398 [2024-10-01 15:55:45.796219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:06.398 [2024-10-01 15:55:45.798637] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:06.398 [2024-10-01 15:55:45.807813] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:06.398 [2024-10-01 15:55:45.808387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.398 [2024-10-01 15:55:45.808418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:06.398 [2024-10-01 15:55:45.808429] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:06.398 [2024-10-01 15:55:45.808615] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:06.398 [2024-10-01 15:55:45.808776] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:06.398 [2024-10-01 15:55:45.808784] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:06.398 [2024-10-01 15:55:45.808792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:06.398 [2024-10-01 15:55:45.811214] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:06.398 [2024-10-01 15:55:45.820392] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:06.398 [2024-10-01 15:55:45.820903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.398 [2024-10-01 15:55:45.820920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:06.398 [2024-10-01 15:55:45.820928] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:06.398 [2024-10-01 15:55:45.821098] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:06.398 [2024-10-01 15:55:45.821258] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:06.398 [2024-10-01 15:55:45.821269] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:06.398 [2024-10-01 15:55:45.821277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:06.398 [2024-10-01 15:55:45.823695] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:06.398 [2024-10-01 15:55:45.833008] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:06.398 [2024-10-01 15:55:45.833472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.398 [2024-10-01 15:55:45.833487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:06.398 [2024-10-01 15:55:45.833495] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:06.398 [2024-10-01 15:55:45.833662] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:06.398 [2024-10-01 15:55:45.833819] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:06.398 [2024-10-01 15:55:45.833827] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:06.398 [2024-10-01 15:55:45.833835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:06.398 [2024-10-01 15:55:45.836323] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:06.398 [2024-10-01 15:55:45.845647] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:06.398 [2024-10-01 15:55:45.846177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.398 [2024-10-01 15:55:45.846192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:06.398 [2024-10-01 15:55:45.846200] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:06.398 [2024-10-01 15:55:45.846368] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:06.398 [2024-10-01 15:55:45.846526] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:06.398 [2024-10-01 15:55:45.846534] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:06.398 [2024-10-01 15:55:45.846542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:06.398 [2024-10-01 15:55:45.848961] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:06.660 [2024-10-01 15:55:45.858300] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:06.660 [2024-10-01 15:55:45.858643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.660 [2024-10-01 15:55:45.858657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:06.660 [2024-10-01 15:55:45.858666] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:06.660 [2024-10-01 15:55:45.858832] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:06.660 [2024-10-01 15:55:45.858995] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:06.660 [2024-10-01 15:55:45.859003] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:06.660 [2024-10-01 15:55:45.859011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:06.660 [2024-10-01 15:55:45.861439] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:06.660 [2024-10-01 15:55:45.870932] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:06.661 [2024-10-01 15:55:45.871439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.661 [2024-10-01 15:55:45.871454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:06.661 [2024-10-01 15:55:45.871462] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:06.661 [2024-10-01 15:55:45.871630] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:06.661 [2024-10-01 15:55:45.871789] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:06.661 [2024-10-01 15:55:45.871796] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:06.661 [2024-10-01 15:55:45.871803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:06.661 [2024-10-01 15:55:45.874221] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:06.661 [2024-10-01 15:55:45.883543] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:06.661 [2024-10-01 15:55:45.884038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.661 [2024-10-01 15:55:45.884053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:06.661 [2024-10-01 15:55:45.884061] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:06.661 [2024-10-01 15:55:45.884230] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:06.661 [2024-10-01 15:55:45.884388] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:06.661 [2024-10-01 15:55:45.884395] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:06.661 [2024-10-01 15:55:45.884403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:06.661 [2024-10-01 15:55:45.886815] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:06.661 [2024-10-01 15:55:45.896141] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:06.661 [2024-10-01 15:55:45.896740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.661 [2024-10-01 15:55:45.896770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:06.661 [2024-10-01 15:55:45.896782] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:06.661 [2024-10-01 15:55:45.896974] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:06.661 [2024-10-01 15:55:45.897137] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:06.661 [2024-10-01 15:55:45.897145] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:06.661 [2024-10-01 15:55:45.897154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:06.661 [2024-10-01 15:55:45.899573] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:06.661 [2024-10-01 15:55:45.908762] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:06.661 [2024-10-01 15:55:45.909372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.661 [2024-10-01 15:55:45.909403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:06.661 [2024-10-01 15:55:45.909414] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:06.661 [2024-10-01 15:55:45.909602] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:06.661 [2024-10-01 15:55:45.909763] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:06.661 [2024-10-01 15:55:45.909770] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:06.661 [2024-10-01 15:55:45.909779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:06.661 [2024-10-01 15:55:45.912203] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:06.661 [2024-10-01 15:55:45.921381] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:06.661 [2024-10-01 15:55:45.921947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.661 [2024-10-01 15:55:45.921979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:06.661 [2024-10-01 15:55:45.921989] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:06.661 [2024-10-01 15:55:45.922175] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:06.661 [2024-10-01 15:55:45.922338] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:06.661 [2024-10-01 15:55:45.922346] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:06.661 [2024-10-01 15:55:45.922355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:06.661 [2024-10-01 15:55:45.924778] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:06.661 [2024-10-01 15:55:45.934010] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:06.661 [2024-10-01 15:55:45.934556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.661 [2024-10-01 15:55:45.934587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:06.661 [2024-10-01 15:55:45.934598] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:06.661 [2024-10-01 15:55:45.934783] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:06.661 [2024-10-01 15:55:45.934950] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:06.661 [2024-10-01 15:55:45.934959] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:06.661 [2024-10-01 15:55:45.934967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:06.661 [2024-10-01 15:55:45.937386] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:06.661 [2024-10-01 15:55:45.946702] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:06.661 [2024-10-01 15:55:45.947174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.661 [2024-10-01 15:55:45.947191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:06.661 [2024-10-01 15:55:45.947199] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:06.661 [2024-10-01 15:55:45.947368] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:06.661 [2024-10-01 15:55:45.947527] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:06.661 [2024-10-01 15:55:45.947535] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:06.661 [2024-10-01 15:55:45.947551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:06.661 [2024-10-01 15:55:45.949972] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:06.661 [2024-10-01 15:55:45.959301] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:06.661 [2024-10-01 15:55:45.959880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.661 [2024-10-01 15:55:45.959917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:06.661 [2024-10-01 15:55:45.959927] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:06.661 [2024-10-01 15:55:45.960115] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:06.661 [2024-10-01 15:55:45.960285] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:06.661 [2024-10-01 15:55:45.960294] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:06.661 [2024-10-01 15:55:45.960302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:06.661 [2024-10-01 15:55:45.962723] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:06.661 [2024-10-01 15:55:45.971920] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:06.661 [2024-10-01 15:55:45.972432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.661 [2024-10-01 15:55:45.972447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:06.661 [2024-10-01 15:55:45.972456] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:06.661 [2024-10-01 15:55:45.972625] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:06.661 [2024-10-01 15:55:45.972783] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:06.661 [2024-10-01 15:55:45.972791] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:06.662 [2024-10-01 15:55:45.972799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:06.662 [2024-10-01 15:55:45.975216] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:06.662 [2024-10-01 15:55:45.984531] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:06.662 [2024-10-01 15:55:45.985001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.662 [2024-10-01 15:55:45.985017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:06.662 [2024-10-01 15:55:45.985025] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:06.662 [2024-10-01 15:55:45.985192] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:06.662 [2024-10-01 15:55:45.985351] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:06.662 [2024-10-01 15:55:45.985358] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:06.662 [2024-10-01 15:55:45.985366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:06.662 [2024-10-01 15:55:45.987784] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:06.662 [2024-10-01 15:55:45.997181] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:06.662 [2024-10-01 15:55:45.997784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.662 [2024-10-01 15:55:45.997815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:06.662 [2024-10-01 15:55:45.997826] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:06.662 [2024-10-01 15:55:45.998020] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:06.662 [2024-10-01 15:55:45.998181] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:06.662 [2024-10-01 15:55:45.998189] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:06.662 [2024-10-01 15:55:45.998197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:06.662 [2024-10-01 15:55:46.000617] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:06.662 [2024-10-01 15:55:46.009794] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:06.662 [2024-10-01 15:55:46.010410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.662 [2024-10-01 15:55:46.010441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:06.662 [2024-10-01 15:55:46.010452] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:06.662 [2024-10-01 15:55:46.010638] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:06.662 [2024-10-01 15:55:46.010799] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:06.662 [2024-10-01 15:55:46.010807] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:06.662 [2024-10-01 15:55:46.010815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:06.662 [2024-10-01 15:55:46.013236] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:06.662 [2024-10-01 15:55:46.022412] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:06.662 [2024-10-01 15:55:46.023040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.662 [2024-10-01 15:55:46.023070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:06.662 [2024-10-01 15:55:46.023081] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:06.662 [2024-10-01 15:55:46.023267] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:06.662 [2024-10-01 15:55:46.023427] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:06.662 [2024-10-01 15:55:46.023435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:06.662 [2024-10-01 15:55:46.023444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:06.662 [2024-10-01 15:55:46.025864] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:06.662 [2024-10-01 15:55:46.035048] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:06.662 [2024-10-01 15:55:46.035630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.662 [2024-10-01 15:55:46.035661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:06.662 [2024-10-01 15:55:46.035673] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:06.662 [2024-10-01 15:55:46.035857] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:06.662 [2024-10-01 15:55:46.036028] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:06.662 [2024-10-01 15:55:46.036036] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:06.662 [2024-10-01 15:55:46.036044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:06.662 [2024-10-01 15:55:46.038462] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:06.662 [2024-10-01 15:55:46.047638] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:06.662 [2024-10-01 15:55:46.048242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.662 [2024-10-01 15:55:46.048273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:06.662 [2024-10-01 15:55:46.048283] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:06.662 [2024-10-01 15:55:46.048468] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:06.662 [2024-10-01 15:55:46.048629] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:06.662 [2024-10-01 15:55:46.048636] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:06.662 [2024-10-01 15:55:46.048645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:06.662 [2024-10-01 15:55:46.051069] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:06.662 [2024-10-01 15:55:46.060253] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:06.662 [2024-10-01 15:55:46.060860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.662 [2024-10-01 15:55:46.060891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:06.662 [2024-10-01 15:55:46.060909] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:06.662 [2024-10-01 15:55:46.061106] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:06.662 [2024-10-01 15:55:46.061268] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:06.662 [2024-10-01 15:55:46.061276] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:06.662 [2024-10-01 15:55:46.061284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:06.662 [2024-10-01 15:55:46.063700] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:06.662 [2024-10-01 15:55:46.072883] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:06.662 [2024-10-01 15:55:46.073490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.662 [2024-10-01 15:55:46.073521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:06.662 [2024-10-01 15:55:46.073531] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:06.663 [2024-10-01 15:55:46.073715] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:06.663 [2024-10-01 15:55:46.073877] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:06.663 [2024-10-01 15:55:46.073886] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:06.663 [2024-10-01 15:55:46.073900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:06.663 [2024-10-01 15:55:46.076322] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:06.663 [2024-10-01 15:55:46.085501] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:06.663 [2024-10-01 15:55:46.086017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.663 [2024-10-01 15:55:46.086048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:06.663 [2024-10-01 15:55:46.086059] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:06.663 [2024-10-01 15:55:46.086246] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:06.663 [2024-10-01 15:55:46.086406] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:06.663 [2024-10-01 15:55:46.086414] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:06.663 [2024-10-01 15:55:46.086422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:06.663 [2024-10-01 15:55:46.088844] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:06.663 [2024-10-01 15:55:46.098166] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:06.663 [2024-10-01 15:55:46.098669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.663 [2024-10-01 15:55:46.098685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:06.663 [2024-10-01 15:55:46.098694] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:06.663 [2024-10-01 15:55:46.098861] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:06.663 [2024-10-01 15:55:46.099027] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:06.663 [2024-10-01 15:55:46.099035] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:06.663 [2024-10-01 15:55:46.099043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:06.663 [2024-10-01 15:55:46.101457] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:06.663 [2024-10-01 15:55:46.110781] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.663 [2024-10-01 15:55:46.111304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.663 [2024-10-01 15:55:46.111319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:06.663 [2024-10-01 15:55:46.111327] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:06.663 [2024-10-01 15:55:46.111495] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:06.663 [2024-10-01 15:55:46.111653] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.663 [2024-10-01 15:55:46.111661] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.663 [2024-10-01 15:55:46.111668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.924 [2024-10-01 15:55:46.114089] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.924 [2024-10-01 15:55:46.123413] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.924 [2024-10-01 15:55:46.123913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.924 [2024-10-01 15:55:46.123933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:06.925 [2024-10-01 15:55:46.123942] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:06.925 [2024-10-01 15:55:46.124110] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:06.925 [2024-10-01 15:55:46.124269] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.925 [2024-10-01 15:55:46.124276] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.925 [2024-10-01 15:55:46.124284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.925 [2024-10-01 15:55:46.126702] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.925 [2024-10-01 15:55:46.136020] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.925 [2024-10-01 15:55:46.136619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.925 [2024-10-01 15:55:46.136650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:06.925 [2024-10-01 15:55:46.136661] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:06.925 [2024-10-01 15:55:46.136846] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:06.925 [2024-10-01 15:55:46.137014] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.925 [2024-10-01 15:55:46.137023] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.925 [2024-10-01 15:55:46.137031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.925 [2024-10-01 15:55:46.139443] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.925 [2024-10-01 15:55:46.148622] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.925 [2024-10-01 15:55:46.149182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.925 [2024-10-01 15:55:46.149213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:06.925 [2024-10-01 15:55:46.149224] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:06.925 [2024-10-01 15:55:46.149410] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:06.925 [2024-10-01 15:55:46.149571] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.925 [2024-10-01 15:55:46.149579] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.925 [2024-10-01 15:55:46.149587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.925 [2024-10-01 15:55:46.152009] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.925 [2024-10-01 15:55:46.161217] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.925 [2024-10-01 15:55:46.161828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.925 [2024-10-01 15:55:46.161859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:06.925 [2024-10-01 15:55:46.161870] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:06.925 [2024-10-01 15:55:46.162063] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:06.925 [2024-10-01 15:55:46.162230] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.925 [2024-10-01 15:55:46.162238] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.925 [2024-10-01 15:55:46.162246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.925 [2024-10-01 15:55:46.164666] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.925 [2024-10-01 15:55:46.173860] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.925 [2024-10-01 15:55:46.174374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.925 [2024-10-01 15:55:46.174404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:06.925 [2024-10-01 15:55:46.174415] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:06.925 [2024-10-01 15:55:46.174601] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:06.925 [2024-10-01 15:55:46.174762] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.925 [2024-10-01 15:55:46.174770] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.925 [2024-10-01 15:55:46.174778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.925 [2024-10-01 15:55:46.177202] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.925 [2024-10-01 15:55:46.186520] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.925 [2024-10-01 15:55:46.187118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.925 [2024-10-01 15:55:46.187149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:06.925 [2024-10-01 15:55:46.187160] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:06.925 [2024-10-01 15:55:46.187344] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:06.925 [2024-10-01 15:55:46.187505] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.925 [2024-10-01 15:55:46.187512] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.925 [2024-10-01 15:55:46.187520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.925 [2024-10-01 15:55:46.189944] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.925 [2024-10-01 15:55:46.199120] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.925 [2024-10-01 15:55:46.199590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.925 [2024-10-01 15:55:46.199622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:06.925 [2024-10-01 15:55:46.199633] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:06.925 [2024-10-01 15:55:46.199819] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:06.925 [2024-10-01 15:55:46.199989] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.925 [2024-10-01 15:55:46.199998] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.925 [2024-10-01 15:55:46.200007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.925 [2024-10-01 15:55:46.202428] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.925 [2024-10-01 15:55:46.211765] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.925 [2024-10-01 15:55:46.212337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.925 [2024-10-01 15:55:46.212368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:06.925 [2024-10-01 15:55:46.212380] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:06.925 [2024-10-01 15:55:46.212565] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:06.925 [2024-10-01 15:55:46.212726] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.925 [2024-10-01 15:55:46.212735] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.925 [2024-10-01 15:55:46.212744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.925 [2024-10-01 15:55:46.215172] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.925 [2024-10-01 15:55:46.224356] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.925 [2024-10-01 15:55:46.224980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.925 [2024-10-01 15:55:46.225011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:06.925 [2024-10-01 15:55:46.225022] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:06.925 [2024-10-01 15:55:46.225211] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:06.925 [2024-10-01 15:55:46.225372] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.925 [2024-10-01 15:55:46.225380] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.925 [2024-10-01 15:55:46.225388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.925 [2024-10-01 15:55:46.227810] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.925 [2024-10-01 15:55:46.236991] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.925 [2024-10-01 15:55:46.237556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.925 [2024-10-01 15:55:46.237587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:06.925 [2024-10-01 15:55:46.237598] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:06.925 [2024-10-01 15:55:46.237783] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:06.925 [2024-10-01 15:55:46.237951] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.925 [2024-10-01 15:55:46.237959] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.925 [2024-10-01 15:55:46.237967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.925 [2024-10-01 15:55:46.240383] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.925 [2024-10-01 15:55:46.249561] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.925 [2024-10-01 15:55:46.250136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.925 [2024-10-01 15:55:46.250167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:06.925 [2024-10-01 15:55:46.250182] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:06.926 [2024-10-01 15:55:46.250366] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:06.926 [2024-10-01 15:55:46.250527] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.926 [2024-10-01 15:55:46.250534] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.926 [2024-10-01 15:55:46.250542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.926 [2024-10-01 15:55:46.252964] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.926 [2024-10-01 15:55:46.262148] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.926 [2024-10-01 15:55:46.262763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.926 [2024-10-01 15:55:46.262794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:06.926 [2024-10-01 15:55:46.262804] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:06.926 [2024-10-01 15:55:46.262996] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:06.926 [2024-10-01 15:55:46.263162] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.926 [2024-10-01 15:55:46.263171] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.926 [2024-10-01 15:55:46.263179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.926 5790.60 IOPS, 22.62 MiB/s [2024-10-01 15:55:46.266739] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.926 [2024-10-01 15:55:46.274803] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.926 [2024-10-01 15:55:46.275367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.926 [2024-10-01 15:55:46.275398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:06.926 [2024-10-01 15:55:46.275408] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:06.926 [2024-10-01 15:55:46.275593] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:06.926 [2024-10-01 15:55:46.275754] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.926 [2024-10-01 15:55:46.275762] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.926 [2024-10-01 15:55:46.275770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.926 [2024-10-01 15:55:46.278192] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.926 [2024-10-01 15:55:46.287377] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.926 [2024-10-01 15:55:46.287986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.926 [2024-10-01 15:55:46.288017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:06.926 [2024-10-01 15:55:46.288029] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:06.926 [2024-10-01 15:55:46.288213] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:06.926 [2024-10-01 15:55:46.288375] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.926 [2024-10-01 15:55:46.288386] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.926 [2024-10-01 15:55:46.288395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.926 [2024-10-01 15:55:46.290816] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.926 [2024-10-01 15:55:46.299996] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.926 [2024-10-01 15:55:46.300580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.926 [2024-10-01 15:55:46.300611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:06.926 [2024-10-01 15:55:46.300623] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:06.926 [2024-10-01 15:55:46.300807] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:06.926 [2024-10-01 15:55:46.300975] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.926 [2024-10-01 15:55:46.300983] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.926 [2024-10-01 15:55:46.300991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.926 [2024-10-01 15:55:46.303408] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.926 [2024-10-01 15:55:46.312585] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.926 [2024-10-01 15:55:46.313191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.926 [2024-10-01 15:55:46.313222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:06.926 [2024-10-01 15:55:46.313233] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:06.926 [2024-10-01 15:55:46.313417] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:06.926 [2024-10-01 15:55:46.313578] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.926 [2024-10-01 15:55:46.313586] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.926 [2024-10-01 15:55:46.313594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.926 [2024-10-01 15:55:46.316019] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.926 [2024-10-01 15:55:46.325197] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.926 [2024-10-01 15:55:46.325814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.926 [2024-10-01 15:55:46.325845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:06.926 [2024-10-01 15:55:46.325856] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:06.926 [2024-10-01 15:55:46.326052] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:06.926 [2024-10-01 15:55:46.326215] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.926 [2024-10-01 15:55:46.326223] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.926 [2024-10-01 15:55:46.326232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.926 [2024-10-01 15:55:46.328648] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.926 [2024-10-01 15:55:46.337828] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.926 [2024-10-01 15:55:46.338438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.926 [2024-10-01 15:55:46.338469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:06.926 [2024-10-01 15:55:46.338479] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:06.926 [2024-10-01 15:55:46.338664] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:06.926 [2024-10-01 15:55:46.338825] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.926 [2024-10-01 15:55:46.338832] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.926 [2024-10-01 15:55:46.338841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.926 [2024-10-01 15:55:46.341263] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.926 [2024-10-01 15:55:46.350440] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.926 [2024-10-01 15:55:46.351069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.926 [2024-10-01 15:55:46.351101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:06.926 [2024-10-01 15:55:46.351112] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:06.926 [2024-10-01 15:55:46.351295] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:06.926 [2024-10-01 15:55:46.351456] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.926 [2024-10-01 15:55:46.351463] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.926 [2024-10-01 15:55:46.351471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.926 [2024-10-01 15:55:46.353895] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.926 [2024-10-01 15:55:46.363079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.926 [2024-10-01 15:55:46.363553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.926 [2024-10-01 15:55:46.363569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:06.926 [2024-10-01 15:55:46.363578] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:06.926 [2024-10-01 15:55:46.363745] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:06.926 [2024-10-01 15:55:46.363909] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.927 [2024-10-01 15:55:46.363917] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.927 [2024-10-01 15:55:46.363925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.927 [2024-10-01 15:55:46.366339] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.927 [2024-10-01 15:55:46.375660] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.927 [2024-10-01 15:55:46.376134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.927 [2024-10-01 15:55:46.376149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:06.927 [2024-10-01 15:55:46.376157] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:06.927 [2024-10-01 15:55:46.376328] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:06.927 [2024-10-01 15:55:46.376487] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.927 [2024-10-01 15:55:46.376494] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.927 [2024-10-01 15:55:46.376502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:07.188 [2024-10-01 15:55:46.378921] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:07.188 [2024-10-01 15:55:46.388238] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:07.188 [2024-10-01 15:55:46.388736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.188 [2024-10-01 15:55:46.388751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:07.188 [2024-10-01 15:55:46.388759] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:07.189 [2024-10-01 15:55:46.388933] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:07.189 [2024-10-01 15:55:46.389091] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:07.189 [2024-10-01 15:55:46.389099] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:07.189 [2024-10-01 15:55:46.389107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:07.189 [2024-10-01 15:55:46.391524] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:07.189 [2024-10-01 15:55:46.400842] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:07.189 [2024-10-01 15:55:46.401307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.189 [2024-10-01 15:55:46.401321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:07.189 [2024-10-01 15:55:46.401329] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:07.189 [2024-10-01 15:55:46.401496] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:07.189 [2024-10-01 15:55:46.401654] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:07.189 [2024-10-01 15:55:46.401661] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:07.189 [2024-10-01 15:55:46.401669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:07.189 [2024-10-01 15:55:46.404088] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:07.189 [2024-10-01 15:55:46.413543] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:07.189 [2024-10-01 15:55:46.414014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.189 [2024-10-01 15:55:46.414028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:07.189 [2024-10-01 15:55:46.414036] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:07.189 [2024-10-01 15:55:46.414204] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:07.189 [2024-10-01 15:55:46.414361] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:07.189 [2024-10-01 15:55:46.414369] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:07.189 [2024-10-01 15:55:46.414380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:07.189 [2024-10-01 15:55:46.416795] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:07.189 [2024-10-01 15:55:46.426111] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:07.189 [2024-10-01 15:55:46.426710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.189 [2024-10-01 15:55:46.426741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:07.189 [2024-10-01 15:55:46.426751] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:07.189 [2024-10-01 15:55:46.426945] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:07.189 [2024-10-01 15:55:46.427107] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:07.189 [2024-10-01 15:55:46.427114] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:07.189 [2024-10-01 15:55:46.427123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:07.189 [2024-10-01 15:55:46.429537] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:07.189 [2024-10-01 15:55:46.438728] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:07.189 [2024-10-01 15:55:46.439335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.189 [2024-10-01 15:55:46.439366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:07.189 [2024-10-01 15:55:46.439377] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:07.189 [2024-10-01 15:55:46.439561] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:07.189 [2024-10-01 15:55:46.439723] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:07.189 [2024-10-01 15:55:46.439732] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:07.189 [2024-10-01 15:55:46.439740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:07.189 [2024-10-01 15:55:46.442165] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:07.189 [2024-10-01 15:55:46.451362] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:07.189 [2024-10-01 15:55:46.451878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.189 [2024-10-01 15:55:46.451901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:07.189 [2024-10-01 15:55:46.451910] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:07.189 [2024-10-01 15:55:46.452083] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:07.189 [2024-10-01 15:55:46.452246] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:07.189 [2024-10-01 15:55:46.452254] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:07.189 [2024-10-01 15:55:46.452262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:07.189 [2024-10-01 15:55:46.454684] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:07.189 [2024-10-01 15:55:46.464028] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:07.189 [2024-10-01 15:55:46.464611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.189 [2024-10-01 15:55:46.464642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:07.189 [2024-10-01 15:55:46.464654] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:07.189 [2024-10-01 15:55:46.464842] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:07.189 [2024-10-01 15:55:46.465011] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:07.189 [2024-10-01 15:55:46.465020] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:07.189 [2024-10-01 15:55:46.465030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:07.189 [2024-10-01 15:55:46.467451] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:07.189 [2024-10-01 15:55:46.476656] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:07.189 [2024-10-01 15:55:46.477248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.189 [2024-10-01 15:55:46.477279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:07.189 [2024-10-01 15:55:46.477291] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:07.189 [2024-10-01 15:55:46.477476] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:07.189 [2024-10-01 15:55:46.477639] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:07.189 [2024-10-01 15:55:46.477648] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:07.189 [2024-10-01 15:55:46.477656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:07.189 [2024-10-01 15:55:46.480080] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:07.189 [2024-10-01 15:55:46.489406] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:07.189 [2024-10-01 15:55:46.490035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.189 [2024-10-01 15:55:46.490066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:07.189 [2024-10-01 15:55:46.490077] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:07.189 [2024-10-01 15:55:46.490264] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:07.189 [2024-10-01 15:55:46.490425] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:07.189 [2024-10-01 15:55:46.490433] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:07.189 [2024-10-01 15:55:46.490441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:07.189 [2024-10-01 15:55:46.492863] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:07.189 [2024-10-01 15:55:46.502052] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:07.189 [2024-10-01 15:55:46.502659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.189 [2024-10-01 15:55:46.502691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:07.189 [2024-10-01 15:55:46.502702] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:07.189 [2024-10-01 15:55:46.502889] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:07.189 [2024-10-01 15:55:46.503060] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:07.189 [2024-10-01 15:55:46.503068] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:07.189 [2024-10-01 15:55:46.503077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:07.189 [2024-10-01 15:55:46.505493] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:07.189 [2024-10-01 15:55:46.514673] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:07.189 [2024-10-01 15:55:46.515287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.189 [2024-10-01 15:55:46.515317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:07.189 [2024-10-01 15:55:46.515328] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:07.189 [2024-10-01 15:55:46.515513] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:07.189 [2024-10-01 15:55:46.515674] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:07.190 [2024-10-01 15:55:46.515683] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:07.190 [2024-10-01 15:55:46.515691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:07.190 [2024-10-01 15:55:46.518115] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:07.190 [2024-10-01 15:55:46.527296] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:07.190 [2024-10-01 15:55:46.527923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.190 [2024-10-01 15:55:46.527955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:07.190 [2024-10-01 15:55:46.527967] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:07.190 [2024-10-01 15:55:46.528152] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:07.190 [2024-10-01 15:55:46.528313] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:07.190 [2024-10-01 15:55:46.528321] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:07.190 [2024-10-01 15:55:46.528329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:07.190 [2024-10-01 15:55:46.530757] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:07.190 [2024-10-01 15:55:46.539949] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:07.190 [2024-10-01 15:55:46.540510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.190 [2024-10-01 15:55:46.540541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:07.190 [2024-10-01 15:55:46.540551] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:07.190 [2024-10-01 15:55:46.540735] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:07.190 [2024-10-01 15:55:46.540903] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:07.190 [2024-10-01 15:55:46.540912] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:07.190 [2024-10-01 15:55:46.540920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:07.190 [2024-10-01 15:55:46.543340] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:07.190 [2024-10-01 15:55:46.552525] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:07.190 [2024-10-01 15:55:46.553002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.190 [2024-10-01 15:55:46.553019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:07.190 [2024-10-01 15:55:46.553027] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:07.190 [2024-10-01 15:55:46.553196] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:07.190 [2024-10-01 15:55:46.553354] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:07.190 [2024-10-01 15:55:46.553362] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:07.190 [2024-10-01 15:55:46.553370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:07.190 [2024-10-01 15:55:46.555785] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:07.190 [2024-10-01 15:55:46.565142] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:07.190 [2024-10-01 15:55:46.565651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.190 [2024-10-01 15:55:46.565666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:07.190 [2024-10-01 15:55:46.565674] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:07.190 [2024-10-01 15:55:46.565842] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:07.190 [2024-10-01 15:55:46.566005] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:07.190 [2024-10-01 15:55:46.566013] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:07.190 [2024-10-01 15:55:46.566021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:07.190 [2024-10-01 15:55:46.568431] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:07.190 [2024-10-01 15:55:46.577761] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:07.190 [2024-10-01 15:55:46.578317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.190 [2024-10-01 15:55:46.578348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:07.190 [2024-10-01 15:55:46.578359] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:07.190 [2024-10-01 15:55:46.578543] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:07.190 [2024-10-01 15:55:46.578705] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:07.190 [2024-10-01 15:55:46.578712] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:07.190 [2024-10-01 15:55:46.578721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:07.190 [2024-10-01 15:55:46.581149] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:07.190 [2024-10-01 15:55:46.590341] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:07.190 [2024-10-01 15:55:46.590802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.190 [2024-10-01 15:55:46.590824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:07.190 [2024-10-01 15:55:46.590834] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:07.190 [2024-10-01 15:55:46.591008] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:07.190 [2024-10-01 15:55:46.591169] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:07.190 [2024-10-01 15:55:46.591176] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:07.190 [2024-10-01 15:55:46.591185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:07.190 [2024-10-01 15:55:46.593600] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:07.190 [2024-10-01 15:55:46.602934] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:07.190 [2024-10-01 15:55:46.603432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.190 [2024-10-01 15:55:46.603446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:07.190 [2024-10-01 15:55:46.603454] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:07.190 [2024-10-01 15:55:46.603623] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:07.190 [2024-10-01 15:55:46.603781] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:07.190 [2024-10-01 15:55:46.603788] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:07.190 [2024-10-01 15:55:46.603796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:07.190 [2024-10-01 15:55:46.606217] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:07.190 [2024-10-01 15:55:46.615552] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:07.190 [2024-10-01 15:55:46.616024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.190 [2024-10-01 15:55:46.616039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:07.190 [2024-10-01 15:55:46.616047] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:07.190 [2024-10-01 15:55:46.616214] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:07.190 [2024-10-01 15:55:46.616373] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:07.190 [2024-10-01 15:55:46.616380] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:07.190 [2024-10-01 15:55:46.616388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:07.190 [2024-10-01 15:55:46.618806] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:07.190 [2024-10-01 15:55:46.628140] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:07.190 [2024-10-01 15:55:46.628694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.190 [2024-10-01 15:55:46.628725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:07.190 [2024-10-01 15:55:46.628736] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:07.190 [2024-10-01 15:55:46.628929] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:07.190 [2024-10-01 15:55:46.629094] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:07.190 [2024-10-01 15:55:46.629102] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:07.190 [2024-10-01 15:55:46.629111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:07.190 [2024-10-01 15:55:46.631531] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:07.190 [2024-10-01 15:55:46.640725] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:07.190 [2024-10-01 15:55:46.641221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.190 [2024-10-01 15:55:46.641238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:07.190 [2024-10-01 15:55:46.641246] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:07.190 [2024-10-01 15:55:46.641414] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:07.190 [2024-10-01 15:55:46.641573] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:07.190 [2024-10-01 15:55:46.641581] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:07.190 [2024-10-01 15:55:46.641589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:07.453 [2024-10-01 15:55:46.644010] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:07.453 [2024-10-01 15:55:46.653341] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:07.453 [2024-10-01 15:55:46.653810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.453 [2024-10-01 15:55:46.653824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:07.453 [2024-10-01 15:55:46.653833] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:07.453 [2024-10-01 15:55:46.654006] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:07.453 [2024-10-01 15:55:46.654164] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:07.453 [2024-10-01 15:55:46.654171] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:07.453 [2024-10-01 15:55:46.654179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:07.453 [2024-10-01 15:55:46.656595] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:07.453 [2024-10-01 15:55:46.665935] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:07.453 [2024-10-01 15:55:46.666458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.453 [2024-10-01 15:55:46.666473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:07.453 [2024-10-01 15:55:46.666481] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:07.453 [2024-10-01 15:55:46.666653] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:07.453 [2024-10-01 15:55:46.666814] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:07.453 [2024-10-01 15:55:46.666821] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:07.453 [2024-10-01 15:55:46.666829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:07.453 [2024-10-01 15:55:46.669269] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:07.453 [2024-10-01 15:55:46.678614] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:07.453 [2024-10-01 15:55:46.679182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.453 [2024-10-01 15:55:46.679214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:07.453 [2024-10-01 15:55:46.679225] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:07.453 [2024-10-01 15:55:46.679415] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:07.453 [2024-10-01 15:55:46.679580] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:07.453 [2024-10-01 15:55:46.679589] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:07.453 [2024-10-01 15:55:46.679597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:07.453 [2024-10-01 15:55:46.682029] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:07.453 [2024-10-01 15:55:46.691236] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:07.453 [2024-10-01 15:55:46.691700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.453 [2024-10-01 15:55:46.691716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:07.453 [2024-10-01 15:55:46.691725] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:07.453 [2024-10-01 15:55:46.691898] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:07.453 [2024-10-01 15:55:46.692062] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:07.454 [2024-10-01 15:55:46.692070] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:07.454 [2024-10-01 15:55:46.692078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:07.454 [2024-10-01 15:55:46.694498] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:07.454 [2024-10-01 15:55:46.703833] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:07.454 [2024-10-01 15:55:46.704394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.454 [2024-10-01 15:55:46.704426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:07.454 [2024-10-01 15:55:46.704438] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:07.454 [2024-10-01 15:55:46.704624] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:07.454 [2024-10-01 15:55:46.704785] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:07.454 [2024-10-01 15:55:46.704794] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:07.454 [2024-10-01 15:55:46.704803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:07.454 [2024-10-01 15:55:46.707232] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:07.454 [2024-10-01 15:55:46.716435] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:07.454 [2024-10-01 15:55:46.716939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.454 [2024-10-01 15:55:46.716956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:07.454 [2024-10-01 15:55:46.716968] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:07.454 [2024-10-01 15:55:46.717140] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:07.454 [2024-10-01 15:55:46.717301] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:07.454 [2024-10-01 15:55:46.717308] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:07.454 [2024-10-01 15:55:46.717316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:07.454 [2024-10-01 15:55:46.719735] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:07.454 [2024-10-01 15:55:46.729073] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:07.454 [2024-10-01 15:55:46.729567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.454 [2024-10-01 15:55:46.729582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:07.454 [2024-10-01 15:55:46.729590] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:07.454 [2024-10-01 15:55:46.729758] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:07.454 [2024-10-01 15:55:46.729921] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:07.454 [2024-10-01 15:55:46.729929] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:07.454 [2024-10-01 15:55:46.729937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:07.454 [2024-10-01 15:55:46.732356] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:07.454 [2024-10-01 15:55:46.741692] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:07.454 [2024-10-01 15:55:46.742255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.454 [2024-10-01 15:55:46.742286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:07.454 [2024-10-01 15:55:46.742297] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:07.454 [2024-10-01 15:55:46.742482] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:07.454 [2024-10-01 15:55:46.742642] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:07.454 [2024-10-01 15:55:46.742650] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:07.454 [2024-10-01 15:55:46.742658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:07.454 [2024-10-01 15:55:46.745084] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:07.454 [2024-10-01 15:55:46.754273] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:07.454 [2024-10-01 15:55:46.754878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.454 [2024-10-01 15:55:46.754914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:07.454 [2024-10-01 15:55:46.754925] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:07.454 [2024-10-01 15:55:46.755110] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:07.454 [2024-10-01 15:55:46.755271] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:07.454 [2024-10-01 15:55:46.755283] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:07.454 [2024-10-01 15:55:46.755292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:07.454 [2024-10-01 15:55:46.757711] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:07.454 [2024-10-01 15:55:46.766906] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:07.454 [2024-10-01 15:55:46.767416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.454 [2024-10-01 15:55:46.767433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:07.454 [2024-10-01 15:55:46.767441] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:07.454 [2024-10-01 15:55:46.767608] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:07.454 [2024-10-01 15:55:46.767766] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:07.454 [2024-10-01 15:55:46.767774] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:07.454 [2024-10-01 15:55:46.767783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:07.454 [2024-10-01 15:55:46.770211] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:07.454 [2024-10-01 15:55:46.779535] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:07.454 [2024-10-01 15:55:46.780023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.454 [2024-10-01 15:55:46.780054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:07.454 [2024-10-01 15:55:46.780065] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:07.454 [2024-10-01 15:55:46.780255] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:07.454 [2024-10-01 15:55:46.780416] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:07.454 [2024-10-01 15:55:46.780424] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:07.454 [2024-10-01 15:55:46.780432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:07.454 [2024-10-01 15:55:46.782856] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:07.454 [2024-10-01 15:55:46.792186] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:07.454 [2024-10-01 15:55:46.792695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.454 [2024-10-01 15:55:46.792712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:07.454 [2024-10-01 15:55:46.792720] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:07.454 [2024-10-01 15:55:46.792889] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:07.454 [2024-10-01 15:55:46.793055] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:07.454 [2024-10-01 15:55:46.793062] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:07.454 [2024-10-01 15:55:46.793070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:07.454 [2024-10-01 15:55:46.795505] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:07.454 [2024-10-01 15:55:46.804829] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:07.454 [2024-10-01 15:55:46.805330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.454 [2024-10-01 15:55:46.805345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:07.454 [2024-10-01 15:55:46.805353] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:07.454 [2024-10-01 15:55:46.805521] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:07.454 [2024-10-01 15:55:46.805679] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:07.454 [2024-10-01 15:55:46.805687] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:07.454 [2024-10-01 15:55:46.805695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:07.454 [2024-10-01 15:55:46.808115] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:07.454 [2024-10-01 15:55:46.817435] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:07.454 [2024-10-01 15:55:46.818004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.454 [2024-10-01 15:55:46.818035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:07.454 [2024-10-01 15:55:46.818046] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:07.454 [2024-10-01 15:55:46.818234] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:07.454 [2024-10-01 15:55:46.818395] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:07.454 [2024-10-01 15:55:46.818403] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:07.454 [2024-10-01 15:55:46.818412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:07.454 [2024-10-01 15:55:46.820835] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:07.455 [2024-10-01 15:55:46.830025] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:07.455 [2024-10-01 15:55:46.830483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.455 [2024-10-01 15:55:46.830499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:07.455 [2024-10-01 15:55:46.830507] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:07.455 [2024-10-01 15:55:46.830677] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:07.455 [2024-10-01 15:55:46.830835] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:07.455 [2024-10-01 15:55:46.830842] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:07.455 [2024-10-01 15:55:46.830850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:07.455 [2024-10-01 15:55:46.833273] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:07.455 [2024-10-01 15:55:46.842599] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:07.455 [2024-10-01 15:55:46.843198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.455 [2024-10-01 15:55:46.843229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:07.455 [2024-10-01 15:55:46.843240] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:07.455 [2024-10-01 15:55:46.843429] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:07.455 [2024-10-01 15:55:46.843590] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:07.455 [2024-10-01 15:55:46.843598] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:07.455 [2024-10-01 15:55:46.843606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:07.455 [2024-10-01 15:55:46.846032] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:07.455 [2024-10-01 15:55:46.855219] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:07.455 [2024-10-01 15:55:46.855735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.455 [2024-10-01 15:55:46.855751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:07.455 [2024-10-01 15:55:46.855759] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:07.455 [2024-10-01 15:55:46.855932] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:07.455 [2024-10-01 15:55:46.856091] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:07.455 [2024-10-01 15:55:46.856098] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:07.455 [2024-10-01 15:55:46.856106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:07.455 [2024-10-01 15:55:46.858517] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:07.455 [2024-10-01 15:55:46.867845] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:07.455 [2024-10-01 15:55:46.868448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.455 [2024-10-01 15:55:46.868479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:07.455 [2024-10-01 15:55:46.868490] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:07.455 [2024-10-01 15:55:46.868676] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:07.455 [2024-10-01 15:55:46.868837] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:07.455 [2024-10-01 15:55:46.868845] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:07.455 [2024-10-01 15:55:46.868853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:07.455 [2024-10-01 15:55:46.871286] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:07.455 [2024-10-01 15:55:46.880478] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:07.455 [2024-10-01 15:55:46.880949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.455 [2024-10-01 15:55:46.880966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:07.455 [2024-10-01 15:55:46.880974] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:07.455 [2024-10-01 15:55:46.881142] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:07.455 [2024-10-01 15:55:46.881300] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:07.455 [2024-10-01 15:55:46.881307] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:07.455 [2024-10-01 15:55:46.881321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:07.455 [2024-10-01 15:55:46.883737] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:07.455 [2024-10-01 15:55:46.893063] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:07.455 [2024-10-01 15:55:46.893519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.455 [2024-10-01 15:55:46.893535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:07.455 [2024-10-01 15:55:46.893543] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:07.455 [2024-10-01 15:55:46.893710] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:07.455 [2024-10-01 15:55:46.893867] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:07.455 [2024-10-01 15:55:46.893874] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:07.455 [2024-10-01 15:55:46.893882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:07.455 [2024-10-01 15:55:46.896303] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:07.455 [2024-10-01 15:55:46.905761] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:07.455 [2024-10-01 15:55:46.906337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.455 [2024-10-01 15:55:46.906368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:07.455 [2024-10-01 15:55:46.906379] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:07.455 [2024-10-01 15:55:46.906566] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:07.455 [2024-10-01 15:55:46.906727] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:07.455 [2024-10-01 15:55:46.906735] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:07.455 [2024-10-01 15:55:46.906743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:07.718 [2024-10-01 15:55:46.909167] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:07.718 [2024-10-01 15:55:46.918352] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:07.718 [2024-10-01 15:55:46.918861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.718 [2024-10-01 15:55:46.918877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:07.718 [2024-10-01 15:55:46.918886] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:07.718 [2024-10-01 15:55:46.919063] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:07.718 [2024-10-01 15:55:46.919222] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:07.718 [2024-10-01 15:55:46.919230] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:07.718 [2024-10-01 15:55:46.919238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:07.718 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3403060 Killed "${NVMF_APP[@]}" "$@"
00:38:07.718 15:55:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:38:07.718 15:55:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:38:07.718 15:55:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:38:07.718 15:55:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable [2024-10-01 15:55:46.921656] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:07.718 15:55:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:38:07.718 15:55:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # nvmfpid=3404634
00:38:07.718 15:55:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # waitforlisten 3404634
00:38:07.718 15:55:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:38:07.718 15:55:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 3404634 ']'
00:38:07.718 15:55:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:38:07.718 15:55:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 [2024-10-01 15:55:46.930998] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:07.718 15:55:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:38:07.718 15:55:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable
00:38:07.718 15:55:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x [2024-10-01 15:55:46.931529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.718 [2024-10-01 15:55:46.931546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:07.718 [2024-10-01 15:55:46.931556] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:07.718 [2024-10-01 15:55:46.931723] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:07.718 [2024-10-01 15:55:46.931881] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:07.718 [2024-10-01 15:55:46.931889] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:07.718 [2024-10-01 15:55:46.931903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:07.718 [2024-10-01 15:55:46.934326] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:07.718 [2024-10-01 15:55:46.943679] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:07.718 [2024-10-01 15:55:46.944155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.718 [2024-10-01 15:55:46.944169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:07.718 [2024-10-01 15:55:46.944178] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:07.718 [2024-10-01 15:55:46.944346] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:07.718 [2024-10-01 15:55:46.944505] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:07.718 [2024-10-01 15:55:46.944512] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:07.718 [2024-10-01 15:55:46.944520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:07.718 [2024-10-01 15:55:46.946940] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:07.718 [2024-10-01 15:55:46.956276] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:07.718 [2024-10-01 15:55:46.956635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.718 [2024-10-01 15:55:46.956651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:07.718 [2024-10-01 15:55:46.956660] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:07.718 [2024-10-01 15:55:46.956829] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:07.718 [2024-10-01 15:55:46.956993] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:07.718 [2024-10-01 15:55:46.957001] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:07.718 [2024-10-01 15:55:46.957008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:07.718 [2024-10-01 15:55:46.959430] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:07.718 [2024-10-01 15:55:46.968925] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:07.718 [2024-10-01 15:55:46.969394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.718 [2024-10-01 15:55:46.969409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:07.718 [2024-10-01 15:55:46.969417] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:07.718 [2024-10-01 15:55:46.969587] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:07.718 [2024-10-01 15:55:46.969746] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:07.718 [2024-10-01 15:55:46.969754] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:07.718 [2024-10-01 15:55:46.969763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:07.718 [2024-10-01 15:55:46.972187] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:07.718 [2024-10-01 15:55:46.980792] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization...
00:38:07.718 [2024-10-01 15:55:46.980837] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:38:07.718 [2024-10-01 15:55:46.981581] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:07.718 [2024-10-01 15:55:46.982194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.718 [2024-10-01 15:55:46.982225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:07.718 [2024-10-01 15:55:46.982236] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:07.718 [2024-10-01 15:55:46.982420] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:07.718 [2024-10-01 15:55:46.982580] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:07.718 [2024-10-01 15:55:46.982588] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:07.718 [2024-10-01 15:55:46.982597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:07.718 [2024-10-01 15:55:46.985028] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:07.718 [2024-10-01 15:55:46.994229] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:07.718 [2024-10-01 15:55:46.994785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.718 [2024-10-01 15:55:46.994820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:07.718 [2024-10-01 15:55:46.994831] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:07.718 [2024-10-01 15:55:46.995026] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:07.718 [2024-10-01 15:55:46.995189] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:07.718 [2024-10-01 15:55:46.995197] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:07.718 [2024-10-01 15:55:46.995205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:07.718 [2024-10-01 15:55:46.997622] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:07.718 [2024-10-01 15:55:47.006812] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:07.718 [2024-10-01 15:55:47.007389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.718 [2024-10-01 15:55:47.007420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:07.718 [2024-10-01 15:55:47.007432] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:07.718 [2024-10-01 15:55:47.007618] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:07.718 [2024-10-01 15:55:47.007780] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:07.719 [2024-10-01 15:55:47.007788] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:07.719 [2024-10-01 15:55:47.007796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:07.719 [2024-10-01 15:55:47.010217] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:07.719 [2024-10-01 15:55:47.018270] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:38:07.719 [2024-10-01 15:55:47.019433] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:07.719 [2024-10-01 15:55:47.020002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.719 [2024-10-01 15:55:47.020033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:07.719 [2024-10-01 15:55:47.020045] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:07.719 [2024-10-01 15:55:47.020237] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:07.719 [2024-10-01 15:55:47.020399] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:07.719 [2024-10-01 15:55:47.020407] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:07.719 [2024-10-01 15:55:47.020415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:07.719 [2024-10-01 15:55:47.022837] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:07.719 [2024-10-01 15:55:47.032113] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:07.719 [2024-10-01 15:55:47.032629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.719 [2024-10-01 15:55:47.032646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:07.719 [2024-10-01 15:55:47.032654] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:07.719 [2024-10-01 15:55:47.032825] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:07.719 [2024-10-01 15:55:47.032989] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:07.719 [2024-10-01 15:55:47.032997] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:07.719 [2024-10-01 15:55:47.033005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:07.719 [2024-10-01 15:55:47.035422] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:07.719 [2024-10-01 15:55:47.044746] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:07.719 [2024-10-01 15:55:47.045291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.719 [2024-10-01 15:55:47.045321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:07.719 [2024-10-01 15:55:47.045332] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:07.719 [2024-10-01 15:55:47.045517] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:07.719 [2024-10-01 15:55:47.045678] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:07.719 [2024-10-01 15:55:47.045686] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:07.719 [2024-10-01 15:55:47.045695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:07.719 [2024-10-01 15:55:47.048119] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:07.719 [2024-10-01 15:55:47.057453] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:07.719 [2024-10-01 15:55:47.057969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.719 [2024-10-01 15:55:47.057986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:07.719 [2024-10-01 15:55:47.057994] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:07.719 [2024-10-01 15:55:47.058162] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:07.719 [2024-10-01 15:55:47.058320] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:07.719 [2024-10-01 15:55:47.058328] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:07.719 [2024-10-01 15:55:47.058336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:07.719 [2024-10-01 15:55:47.060749] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:07.719 [2024-10-01 15:55:47.066246] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:38:07.719 [2024-10-01 15:55:47.070114] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:07.719 [2024-10-01 15:55:47.070629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.719 [2024-10-01 15:55:47.070645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:07.719 [2024-10-01 15:55:47.070654] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:07.719 [2024-10-01 15:55:47.070823] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:07.719 [2024-10-01 15:55:47.070987] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:07.719 [2024-10-01 15:55:47.070999] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:07.719 [2024-10-01 15:55:47.071007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:07.719 [2024-10-01 15:55:47.073422] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:07.719 [2024-10-01 15:55:47.082759] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:07.719 [2024-10-01 15:55:47.083244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.719 [2024-10-01 15:55:47.083280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:07.719 [2024-10-01 15:55:47.083293] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:07.719 [2024-10-01 15:55:47.083485] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:07.719 [2024-10-01 15:55:47.083647] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:07.719 [2024-10-01 15:55:47.083654] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:07.719 [2024-10-01 15:55:47.083663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:07.719 [2024-10-01 15:55:47.086095] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:07.719 [2024-10-01 15:55:47.094693] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:38:07.719 [2024-10-01 15:55:47.094716] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:38:07.719 [2024-10-01 15:55:47.094723] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:38:07.719 [2024-10-01 15:55:47.094729] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:38:07.719 [2024-10-01 15:55:47.094733] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:38:07.719 [2024-10-01 15:55:47.094865] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:38:07.719 [2024-10-01 15:55:47.095037] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:38:07.719 [2024-10-01 15:55:47.095132] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:38:07.719 [2024-10-01 15:55:47.095492] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:07.719 [2024-10-01 15:55:47.095856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.719 [2024-10-01 15:55:47.095874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:07.719 [2024-10-01 15:55:47.095884] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:07.719 [2024-10-01 15:55:47.096066] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:07.719 [2024-10-01 15:55:47.096226] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:07.719 [2024-10-01 15:55:47.096234] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:07.719 [2024-10-01 15:55:47.096243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:07.719 [2024-10-01 15:55:47.098671] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:07.719 [2024-10-01 15:55:47.108142] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:07.719 [2024-10-01 15:55:47.108679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.719 [2024-10-01 15:55:47.108696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:07.719 [2024-10-01 15:55:47.108711] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:07.719 [2024-10-01 15:55:47.108878] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:07.719 [2024-10-01 15:55:47.109042] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:07.719 [2024-10-01 15:55:47.109050] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:07.719 [2024-10-01 15:55:47.109059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:07.719 [2024-10-01 15:55:47.111474] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:07.719 [2024-10-01 15:55:47.120822] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:07.719 [2024-10-01 15:55:47.121316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.719 [2024-10-01 15:55:47.121334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:07.719 [2024-10-01 15:55:47.121342] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:07.719 [2024-10-01 15:55:47.121510] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:07.719 [2024-10-01 15:55:47.121670] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:07.719 [2024-10-01 15:55:47.121677] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:07.720 [2024-10-01 15:55:47.121686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:07.720 [2024-10-01 15:55:47.124104] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:07.720 [2024-10-01 15:55:47.133441] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:07.720 [2024-10-01 15:55:47.134105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.720 [2024-10-01 15:55:47.134142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:07.720 [2024-10-01 15:55:47.134154] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:07.720 [2024-10-01 15:55:47.134344] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:07.720 [2024-10-01 15:55:47.134506] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:07.720 [2024-10-01 15:55:47.134514] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:07.720 [2024-10-01 15:55:47.134522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:07.720 [2024-10-01 15:55:47.136945] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:07.720 [2024-10-01 15:55:47.146139] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:07.720 [2024-10-01 15:55:47.146672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.720 [2024-10-01 15:55:47.146689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:07.720 [2024-10-01 15:55:47.146698] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:07.720 [2024-10-01 15:55:47.146866] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:07.720 [2024-10-01 15:55:47.147032] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:07.720 [2024-10-01 15:55:47.147045] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:07.720 [2024-10-01 15:55:47.147053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:07.720 [2024-10-01 15:55:47.149468] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:07.720 [2024-10-01 15:55:47.158793] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:07.720 [2024-10-01 15:55:47.159413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.720 [2024-10-01 15:55:47.159445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:07.720 [2024-10-01 15:55:47.159457] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:07.720 [2024-10-01 15:55:47.159646] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:07.720 [2024-10-01 15:55:47.159807] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:07.720 [2024-10-01 15:55:47.159815] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:07.720 [2024-10-01 15:55:47.159824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:07.720 [2024-10-01 15:55:47.162247] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:07.982 [2024-10-01 15:55:47.171461] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:07.982 [2024-10-01 15:55:47.172014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.982 [2024-10-01 15:55:47.172045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:07.982 [2024-10-01 15:55:47.172056] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:07.982 [2024-10-01 15:55:47.172245] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:07.982 [2024-10-01 15:55:47.172408] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:07.982 [2024-10-01 15:55:47.172416] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:07.982 [2024-10-01 15:55:47.172425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:07.982 [2024-10-01 15:55:47.174846] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:07.982 [2024-10-01 15:55:47.184039] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:07.982 [2024-10-01 15:55:47.184446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.982 [2024-10-01 15:55:47.184462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:07.982 [2024-10-01 15:55:47.184471] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:07.982 [2024-10-01 15:55:47.184639] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:07.982 [2024-10-01 15:55:47.184798] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:07.982 [2024-10-01 15:55:47.184806] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:07.982 [2024-10-01 15:55:47.184815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:07.982 [2024-10-01 15:55:47.187233] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:07.982 [2024-10-01 15:55:47.196706] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:07.982 [2024-10-01 15:55:47.197248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.982 [2024-10-01 15:55:47.197264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:07.982 [2024-10-01 15:55:47.197272] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:07.982 [2024-10-01 15:55:47.197440] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:07.982 [2024-10-01 15:55:47.197598] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:07.982 [2024-10-01 15:55:47.197605] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:07.982 [2024-10-01 15:55:47.197613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:07.982 [2024-10-01 15:55:47.200028] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:07.982 [2024-10-01 15:55:47.209352] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:07.982 [2024-10-01 15:55:47.210004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.983 [2024-10-01 15:55:47.210036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:07.983 [2024-10-01 15:55:47.210049] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:07.983 [2024-10-01 15:55:47.210242] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:07.983 [2024-10-01 15:55:47.210405] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:07.983 [2024-10-01 15:55:47.210413] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:07.983 [2024-10-01 15:55:47.210422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:07.983 [2024-10-01 15:55:47.212846] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:07.983 [2024-10-01 15:55:47.222050] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:07.983 [2024-10-01 15:55:47.222430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.983 [2024-10-01 15:55:47.222446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:07.983 [2024-10-01 15:55:47.222455] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:07.983 [2024-10-01 15:55:47.222625] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:07.983 [2024-10-01 15:55:47.222785] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:07.983 [2024-10-01 15:55:47.222793] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:07.983 [2024-10-01 15:55:47.222802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:07.983 [2024-10-01 15:55:47.225224] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:07.983 [2024-10-01 15:55:47.234690] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:07.983 [2024-10-01 15:55:47.235267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.983 [2024-10-01 15:55:47.235298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:07.983 [2024-10-01 15:55:47.235309] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:07.983 [2024-10-01 15:55:47.235501] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:07.983 [2024-10-01 15:55:47.235662] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:07.983 [2024-10-01 15:55:47.235670] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:07.983 [2024-10-01 15:55:47.235679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:07.983 [2024-10-01 15:55:47.238105] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:07.983 [2024-10-01 15:55:47.247292] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:07.983 [2024-10-01 15:55:47.247942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.983 [2024-10-01 15:55:47.247973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:07.983 [2024-10-01 15:55:47.247985] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:07.983 [2024-10-01 15:55:47.248183] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:07.983 [2024-10-01 15:55:47.248345] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:07.983 [2024-10-01 15:55:47.248353] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:07.983 [2024-10-01 15:55:47.248361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:07.983 [2024-10-01 15:55:47.250782] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:07.983 [2024-10-01 15:55:47.259969] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:07.983 [2024-10-01 15:55:47.260489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.983 [2024-10-01 15:55:47.260505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:07.983 [2024-10-01 15:55:47.260513] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:07.983 [2024-10-01 15:55:47.260681] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:07.983 [2024-10-01 15:55:47.260838] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:07.983 [2024-10-01 15:55:47.260846] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:07.983 [2024-10-01 15:55:47.260854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:07.983 [2024-10-01 15:55:47.263278] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:07.983 4825.50 IOPS, 18.85 MiB/s [2024-10-01 15:55:47.272644] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:07.983 [2024-10-01 15:55:47.273221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.983 [2024-10-01 15:55:47.273252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:07.983 [2024-10-01 15:55:47.273264] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:07.983 [2024-10-01 15:55:47.273449] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:07.983 [2024-10-01 15:55:47.273609] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:07.983 [2024-10-01 15:55:47.273617] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:07.983 [2024-10-01 15:55:47.273630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:07.983 [2024-10-01 15:55:47.276058] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:07.983 [2024-10-01 15:55:47.285250] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:07.983 [2024-10-01 15:55:47.285871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.983 [2024-10-01 15:55:47.285908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:07.983 [2024-10-01 15:55:47.285921] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:07.983 [2024-10-01 15:55:47.286109] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:07.983 [2024-10-01 15:55:47.286269] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:07.983 [2024-10-01 15:55:47.286277] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:07.983 [2024-10-01 15:55:47.286286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:07.983 [2024-10-01 15:55:47.288705] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:07.983 [2024-10-01 15:55:47.297889] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:07.983 [2024-10-01 15:55:47.298483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.983 [2024-10-01 15:55:47.298514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:07.983 [2024-10-01 15:55:47.298525] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:07.983 [2024-10-01 15:55:47.298709] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:07.983 [2024-10-01 15:55:47.298870] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:07.983 [2024-10-01 15:55:47.298878] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:07.983 [2024-10-01 15:55:47.298886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:07.983 [2024-10-01 15:55:47.301310] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:07.983 [2024-10-01 15:55:47.310500] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:07.983 [2024-10-01 15:55:47.311125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.983 [2024-10-01 15:55:47.311157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:07.983 [2024-10-01 15:55:47.311168] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:07.983 [2024-10-01 15:55:47.311352] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:07.983 [2024-10-01 15:55:47.311513] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:07.983 [2024-10-01 15:55:47.311521] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:07.983 [2024-10-01 15:55:47.311529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:07.983 [2024-10-01 15:55:47.313951] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:07.983 [2024-10-01 15:55:47.323144] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:07.983 [2024-10-01 15:55:47.323665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.983 [2024-10-01 15:55:47.323680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:07.983 [2024-10-01 15:55:47.323689] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:07.983 [2024-10-01 15:55:47.323858] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:07.983 [2024-10-01 15:55:47.324022] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:07.983 [2024-10-01 15:55:47.324030] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:07.983 [2024-10-01 15:55:47.324038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:07.983 [2024-10-01 15:55:47.326451] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:07.983 [2024-10-01 15:55:47.335766] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:07.983 [2024-10-01 15:55:47.336276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.984 [2024-10-01 15:55:47.336291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:07.984 [2024-10-01 15:55:47.336299] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:07.984 [2024-10-01 15:55:47.336466] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:07.984 [2024-10-01 15:55:47.336624] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:07.984 [2024-10-01 15:55:47.336631] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:07.984 [2024-10-01 15:55:47.336639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:07.984 [2024-10-01 15:55:47.339057] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:07.984 [2024-10-01 15:55:47.348375] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:07.984 [2024-10-01 15:55:47.349027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.984 [2024-10-01 15:55:47.349058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:07.984 [2024-10-01 15:55:47.349069] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:07.984 [2024-10-01 15:55:47.349253] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:07.984 [2024-10-01 15:55:47.349413] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:07.984 [2024-10-01 15:55:47.349421] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:07.984 [2024-10-01 15:55:47.349430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:07.984 [2024-10-01 15:55:47.351853] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:07.984 [2024-10-01 15:55:47.361043] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:07.984 [2024-10-01 15:55:47.361640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.984 [2024-10-01 15:55:47.361671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:07.984 [2024-10-01 15:55:47.361683] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:07.984 [2024-10-01 15:55:47.361873] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:07.984 [2024-10-01 15:55:47.362039] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:07.984 [2024-10-01 15:55:47.362048] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:07.984 [2024-10-01 15:55:47.362056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:07.984 [2024-10-01 15:55:47.364482] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:07.984 [2024-10-01 15:55:47.373672] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:07.984 [2024-10-01 15:55:47.374287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.984 [2024-10-01 15:55:47.374318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:07.984 [2024-10-01 15:55:47.374329] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:07.984 [2024-10-01 15:55:47.374514] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:07.984 [2024-10-01 15:55:47.374674] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:07.984 [2024-10-01 15:55:47.374682] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:07.984 [2024-10-01 15:55:47.374690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:07.984 [2024-10-01 15:55:47.377114] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:07.984 [2024-10-01 15:55:47.386299] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:07.984 [2024-10-01 15:55:47.386767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.984 [2024-10-01 15:55:47.386798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:07.984 [2024-10-01 15:55:47.386810] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:07.984 [2024-10-01 15:55:47.387002] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:07.984 [2024-10-01 15:55:47.387163] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:07.984 [2024-10-01 15:55:47.387171] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:07.984 [2024-10-01 15:55:47.387179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:07.984 [2024-10-01 15:55:47.389597] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:07.984 [2024-10-01 15:55:47.398923] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:07.984 [2024-10-01 15:55:47.399565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.984 [2024-10-01 15:55:47.399596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:07.984 [2024-10-01 15:55:47.399608] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:07.984 [2024-10-01 15:55:47.399793] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:07.984 [2024-10-01 15:55:47.399959] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:07.984 [2024-10-01 15:55:47.399967] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:07.984 [2024-10-01 15:55:47.399980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:07.984 [2024-10-01 15:55:47.402402] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:07.984 [2024-10-01 15:55:47.411600] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:07.984 [2024-10-01 15:55:47.412202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.984 [2024-10-01 15:55:47.412233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:07.984 [2024-10-01 15:55:47.412244] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:07.984 [2024-10-01 15:55:47.412429] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:07.984 [2024-10-01 15:55:47.412590] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:07.984 [2024-10-01 15:55:47.412598] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:07.984 [2024-10-01 15:55:47.412607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:07.984 [2024-10-01 15:55:47.415031] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:07.984 [2024-10-01 15:55:47.424224] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:07.984 [2024-10-01 15:55:47.424703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.984 [2024-10-01 15:55:47.424719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:07.984 [2024-10-01 15:55:47.424728] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:07.984 [2024-10-01 15:55:47.424902] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:07.984 [2024-10-01 15:55:47.425061] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:07.984 [2024-10-01 15:55:47.425069] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:07.984 [2024-10-01 15:55:47.425077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:07.984 [2024-10-01 15:55:47.427489] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:08.246 [2024-10-01 15:55:47.436809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:08.246 [2024-10-01 15:55:47.437391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.246 [2024-10-01 15:55:47.437422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:08.246 [2024-10-01 15:55:47.437434] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:08.246 [2024-10-01 15:55:47.437619] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:08.246 [2024-10-01 15:55:47.437780] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:08.246 [2024-10-01 15:55:47.437788] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:08.246 [2024-10-01 15:55:47.437796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:08.246 [2024-10-01 15:55:47.440214] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:08.246 [2024-10-01 15:55:47.449400] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:08.246 [2024-10-01 15:55:47.450011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.246 [2024-10-01 15:55:47.450046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:08.246 [2024-10-01 15:55:47.450057] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:08.246 [2024-10-01 15:55:47.450245] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:08.246 [2024-10-01 15:55:47.450406] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:08.246 [2024-10-01 15:55:47.450413] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:08.246 [2024-10-01 15:55:47.450422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:08.246 [2024-10-01 15:55:47.452844] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:08.246 [2024-10-01 15:55:47.462034] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:08.246 [2024-10-01 15:55:47.462639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.246 [2024-10-01 15:55:47.462671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:08.246 [2024-10-01 15:55:47.462683] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:08.246 [2024-10-01 15:55:47.462873] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:08.246 [2024-10-01 15:55:47.463042] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:08.246 [2024-10-01 15:55:47.463051] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:08.246 [2024-10-01 15:55:47.463059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:08.246 [2024-10-01 15:55:47.465487] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:08.246 [2024-10-01 15:55:47.474678] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:08.246 [2024-10-01 15:55:47.475265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.246 [2024-10-01 15:55:47.475295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:08.246 [2024-10-01 15:55:47.475307] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:08.246 [2024-10-01 15:55:47.475492] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:08.246 [2024-10-01 15:55:47.475653] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:08.246 [2024-10-01 15:55:47.475660] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:08.246 [2024-10-01 15:55:47.475669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:08.246 [2024-10-01 15:55:47.478089] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:08.246 [2024-10-01 15:55:47.487278] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:08.246 [2024-10-01 15:55:47.487874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.246 [2024-10-01 15:55:47.487911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:08.246 [2024-10-01 15:55:47.487923] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:08.246 [2024-10-01 15:55:47.488278] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:08.246 [2024-10-01 15:55:47.488448] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:08.247 [2024-10-01 15:55:47.488456] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:08.247 [2024-10-01 15:55:47.488464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:08.247 [2024-10-01 15:55:47.490883] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:08.247 [2024-10-01 15:55:47.499924] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:08.247 [2024-10-01 15:55:47.500404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.247 [2024-10-01 15:55:47.500420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:08.247 [2024-10-01 15:55:47.500428] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:08.247 [2024-10-01 15:55:47.500598] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:08.247 [2024-10-01 15:55:47.500756] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:08.247 [2024-10-01 15:55:47.500763] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:08.247 [2024-10-01 15:55:47.500771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:08.247 [2024-10-01 15:55:47.503192] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:08.247 [2024-10-01 15:55:47.512511] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:08.247 [2024-10-01 15:55:47.512875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.247 [2024-10-01 15:55:47.512889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:08.247 [2024-10-01 15:55:47.512904] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:08.247 [2024-10-01 15:55:47.513071] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:08.247 [2024-10-01 15:55:47.513229] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:08.247 [2024-10-01 15:55:47.513236] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:08.247 [2024-10-01 15:55:47.513244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:08.247 [2024-10-01 15:55:47.515655] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:08.247 [2024-10-01 15:55:47.525121] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:08.247 [2024-10-01 15:55:47.525634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.247 [2024-10-01 15:55:47.525649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:08.247 [2024-10-01 15:55:47.525657] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:08.247 [2024-10-01 15:55:47.525824] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:08.247 [2024-10-01 15:55:47.525986] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:08.247 [2024-10-01 15:55:47.525994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:08.247 [2024-10-01 15:55:47.526002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:08.247 [2024-10-01 15:55:47.528417] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:08.247 [2024-10-01 15:55:47.537731] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:08.247 [2024-10-01 15:55:47.538329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.247 [2024-10-01 15:55:47.538361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 00:38:08.247 [2024-10-01 15:55:47.538372] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set 00:38:08.247 [2024-10-01 15:55:47.538556] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor 00:38:08.247 [2024-10-01 15:55:47.538718] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:08.247 [2024-10-01 15:55:47.538726] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:08.247 [2024-10-01 15:55:47.538734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:08.247 [2024-10-01 15:55:47.541159] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:08.247 [2024-10-01 15:55:47.550342] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:08.247 [2024-10-01 15:55:47.550991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.247 [2024-10-01 15:55:47.551022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:08.247 [2024-10-01 15:55:47.551034] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:08.247 [2024-10-01 15:55:47.551221] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:08.247 [2024-10-01 15:55:47.551382] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:08.247 [2024-10-01 15:55:47.551390] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:08.247 [2024-10-01 15:55:47.551398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:08.247 [2024-10-01 15:55:47.553820] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:08.247 [2024-10-01 15:55:47.563008] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:08.247 [2024-10-01 15:55:47.563577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.247 [2024-10-01 15:55:47.563608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:08.247 [2024-10-01 15:55:47.563619] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:08.247 [2024-10-01 15:55:47.563804] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:08.247 [2024-10-01 15:55:47.563971] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:08.247 [2024-10-01 15:55:47.563980] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:08.247 [2024-10-01 15:55:47.563988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:08.247 [2024-10-01 15:55:47.566413] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:08.247 [2024-10-01 15:55:47.575606] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:08.247 [2024-10-01 15:55:47.576174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.247 [2024-10-01 15:55:47.576205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:08.247 [2024-10-01 15:55:47.576221] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:08.247 [2024-10-01 15:55:47.576404] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:08.247 [2024-10-01 15:55:47.576566] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:08.247 [2024-10-01 15:55:47.576574] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:08.247 [2024-10-01 15:55:47.576583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:08.247 [2024-10-01 15:55:47.579006] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:08.247 [2024-10-01 15:55:47.588190] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:08.247 [2024-10-01 15:55:47.588812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.247 [2024-10-01 15:55:47.588843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:08.247 [2024-10-01 15:55:47.588854] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:08.247 [2024-10-01 15:55:47.589051] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:08.247 [2024-10-01 15:55:47.589213] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:08.247 [2024-10-01 15:55:47.589221] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:08.247 [2024-10-01 15:55:47.589229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:08.247 [2024-10-01 15:55:47.591648] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:08.247 [2024-10-01 15:55:47.600832] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:08.247 [2024-10-01 15:55:47.601456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.247 [2024-10-01 15:55:47.601487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:08.247 [2024-10-01 15:55:47.601499] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:08.247 [2024-10-01 15:55:47.601686] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:08.247 [2024-10-01 15:55:47.601846] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:08.247 [2024-10-01 15:55:47.601854] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:08.247 [2024-10-01 15:55:47.601862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:08.247 [2024-10-01 15:55:47.604288] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:08.247 [2024-10-01 15:55:47.613473] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:08.247 [2024-10-01 15:55:47.614014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.247 [2024-10-01 15:55:47.614046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:08.247 [2024-10-01 15:55:47.614057] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:08.247 [2024-10-01 15:55:47.614248] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:08.247 [2024-10-01 15:55:47.614410] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:08.247 [2024-10-01 15:55:47.614422] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:08.247 [2024-10-01 15:55:47.614430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:08.248 [2024-10-01 15:55:47.616852] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:08.248 [2024-10-01 15:55:47.626049] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:08.248 [2024-10-01 15:55:47.626679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.248 [2024-10-01 15:55:47.626711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:08.248 [2024-10-01 15:55:47.626722] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:08.248 [2024-10-01 15:55:47.626915] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:08.248 [2024-10-01 15:55:47.627077] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:08.248 [2024-10-01 15:55:47.627085] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:08.248 [2024-10-01 15:55:47.627093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:08.248 [2024-10-01 15:55:47.629509] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:08.248 [2024-10-01 15:55:47.638692] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:08.248 [2024-10-01 15:55:47.639284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.248 [2024-10-01 15:55:47.639316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:08.248 [2024-10-01 15:55:47.639327] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:08.248 [2024-10-01 15:55:47.639512] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:08.248 [2024-10-01 15:55:47.639672] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:08.248 [2024-10-01 15:55:47.639680] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:08.248 [2024-10-01 15:55:47.639688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:08.248 [2024-10-01 15:55:47.642112] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:08.248 [2024-10-01 15:55:47.651297] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:08.248 [2024-10-01 15:55:47.651922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.248 [2024-10-01 15:55:47.651952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:08.248 [2024-10-01 15:55:47.651963] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:08.248 [2024-10-01 15:55:47.652150] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:08.248 [2024-10-01 15:55:47.652311] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:08.248 [2024-10-01 15:55:47.652319] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:08.248 [2024-10-01 15:55:47.652328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:08.248 [2024-10-01 15:55:47.654750] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:08.248 [2024-10-01 15:55:47.663943] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:08.248 [2024-10-01 15:55:47.664466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-10-01 15:55:47.664482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 [2024-10-01 15:55:47.664490] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:08.248 [2024-10-01 15:55:47.664657] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:08.248 [2024-10-01 15:55:47.664816] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:08.248 [2024-10-01 15:55:47.664823] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:08.248 [2024-10-01 15:55:47.664831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:08.248 [2024-10-01 15:55:47.667258] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:08.248 [2024-10-01 15:55:47.676591] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:08.248 [2024-10-01 15:55:47.677203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.248 [2024-10-01 15:55:47.677235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:08.248 [2024-10-01 15:55:47.677246] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:08.248 [2024-10-01 15:55:47.677431] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:08.248 [2024-10-01 15:55:47.677592] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:08.248 [2024-10-01 15:55:47.677600] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:08.248 [2024-10-01 15:55:47.677608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:08.248 [2024-10-01 15:55:47.680032] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:08.248 [2024-10-01 15:55:47.689221] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:08.248 [2024-10-01 15:55:47.689689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-10-01 15:55:47.689706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 [2024-10-01 15:55:47.689714] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:08.248 [2024-10-01 15:55:47.689882] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:08.248 [2024-10-01 15:55:47.690095] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:08.248 [2024-10-01 15:55:47.690104] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:08.248 [2024-10-01 15:55:47.690112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:08.248 [2024-10-01 15:55:47.692528] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:08.511 [2024-10-01 15:55:47.701862] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:08.511 [2024-10-01 15:55:47.702332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.511 [2024-10-01 15:55:47.702363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:08.511 [2024-10-01 15:55:47.702374] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:08.511 [2024-10-01 15:55:47.702563] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:08.511 [2024-10-01 15:55:47.702724] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:08.511 [2024-10-01 15:55:47.702732] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:08.511 [2024-10-01 15:55:47.702741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:08.511 [2024-10-01 15:55:47.705164] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:08.511 [2024-10-01 15:55:47.714491] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:08.511 [2024-10-01 15:55:47.715025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.511 [2024-10-01 15:55:47.715057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:08.511 [2024-10-01 15:55:47.715068] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:08.511 [2024-10-01 15:55:47.715263] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:08.511 [2024-10-01 15:55:47.715428] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:08.511 [2024-10-01 15:55:47.715437] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:08.511 [2024-10-01 15:55:47.715447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:08.511 [2024-10-01 15:55:47.717873] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:08.511 [2024-10-01 15:55:47.727068] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:08.511 [2024-10-01 15:55:47.727643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.511 [2024-10-01 15:55:47.727674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:08.511 [2024-10-01 15:55:47.727687] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:08.511 [2024-10-01 15:55:47.727873] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:08.511 [2024-10-01 15:55:47.728044] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:08.511 [2024-10-01 15:55:47.728053] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:08.511 [2024-10-01 15:55:47.728061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:08.511 [2024-10-01 15:55:47.730479] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:08.511 [2024-10-01 15:55:47.739667] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:08.511 [2024-10-01 15:55:47.740104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.511 [2024-10-01 15:55:47.740136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:08.511 [2024-10-01 15:55:47.740148] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:08.511 [2024-10-01 15:55:47.740339] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:08.511 [2024-10-01 15:55:47.740500] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:08.511 [2024-10-01 15:55:47.740509] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:08.511 [2024-10-01 15:55:47.740521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:08.511 [2024-10-01 15:55:47.742949] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:08.511 [2024-10-01 15:55:47.752275] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:08.511 [2024-10-01 15:55:47.752892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.511 [2024-10-01 15:55:47.752930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:08.511 [2024-10-01 15:55:47.752941] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:08.511 [2024-10-01 15:55:47.753131] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:08.511 [2024-10-01 15:55:47.753292] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:08.511 [2024-10-01 15:55:47.753300] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:08.511 [2024-10-01 15:55:47.753309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:08.511 [2024-10-01 15:55:47.755728] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:08.511 [2024-10-01 15:55:47.764919] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:08.511 [2024-10-01 15:55:47.765439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-10-01 15:55:47.765456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 [2024-10-01 15:55:47.765465] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:08.511 [2024-10-01 15:55:47.765634] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:08.511 [2024-10-01 15:55:47.765801] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:08.511 [2024-10-01 15:55:47.765809] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:08.511 [2024-10-01 15:55:47.765818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:08.511 [2024-10-01 15:55:47.768235] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:08.511 15:55:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:38:08.511 15:55:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0
00:38:08.511 [2024-10-01 15:55:47.777567] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:08.511 15:55:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt
00:38:08.511 [2024-10-01 15:55:47.778051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-10-01 15:55:47.778066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 [2024-10-01 15:55:47.778075] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:08.511 15:55:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable
00:38:08.511 [2024-10-01 15:55:47.778243] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:08.511 15:55:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:38:08.511 [2024-10-01 15:55:47.778401] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:08.511 [2024-10-01 15:55:47.778409] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:08.511 [2024-10-01 15:55:47.778422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:08.511 [2024-10-01 15:55:47.780839] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:08.511 [2024-10-01 15:55:47.790170] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:08.511 [2024-10-01 15:55:47.790791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.511 [2024-10-01 15:55:47.790822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:08.511 [2024-10-01 15:55:47.790834] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:08.511 [2024-10-01 15:55:47.791030] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:08.511 [2024-10-01 15:55:47.791192] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:08.511 [2024-10-01 15:55:47.791201] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:08.511 [2024-10-01 15:55:47.791210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:08.511 [2024-10-01 15:55:47.793627] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:08.511 [2024-10-01 15:55:47.802815] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:08.511 [2024-10-01 15:55:47.803300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.511 [2024-10-01 15:55:47.803332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:08.511 [2024-10-01 15:55:47.803343] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:08.511 [2024-10-01 15:55:47.803528] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:08.511 [2024-10-01 15:55:47.803691] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:08.512 [2024-10-01 15:55:47.803699] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:08.512 [2024-10-01 15:55:47.803707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:08.512 [2024-10-01 15:55:47.806132] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:08.512 [2024-10-01 15:55:47.815464] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:08.512 [2024-10-01 15:55:47.816139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.512 [2024-10-01 15:55:47.816171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:08.512 [2024-10-01 15:55:47.816181] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:08.512 [2024-10-01 15:55:47.816366] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:08.512 15:55:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:38:08.512 [2024-10-01 15:55:47.816527] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:08.512 [2024-10-01 15:55:47.816536] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:08.512 [2024-10-01 15:55:47.816544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:08.512 15:55:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:38:08.512 15:55:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:08.512 15:55:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:38:08.512 [2024-10-01 15:55:47.819057] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:08.512 [2024-10-01 15:55:47.821445] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:38:08.512 [2024-10-01 15:55:47.828119] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:08.512 [2024-10-01 15:55:47.828631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-10-01 15:55:47.828647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 [2024-10-01 15:55:47.828658] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:08.512 [2024-10-01 15:55:47.828826] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:08.512 [2024-10-01 15:55:47.828989] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:08.512 [2024-10-01 15:55:47.828997] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:08.512 [2024-10-01 15:55:47.829006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:08.512 [2024-10-01 15:55:47.831421] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:08.512 15:55:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:08.512 15:55:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:38:08.512 15:55:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:08.512 15:55:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:38:08.512 [2024-10-01 15:55:47.840743] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:08.512 [2024-10-01 15:55:47.841353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.512 [2024-10-01 15:55:47.841385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:08.512 [2024-10-01 15:55:47.841396] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:08.512 [2024-10-01 15:55:47.841583] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:08.512 [2024-10-01 15:55:47.841744] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:08.512 [2024-10-01 15:55:47.841751] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:08.512 [2024-10-01 15:55:47.841760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:08.512 [2024-10-01 15:55:47.844183] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:08.512 Malloc0
00:38:08.512 15:55:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:08.512 [2024-10-01 15:55:47.853369] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:08.512 15:55:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:38:08.512 15:55:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:08.512 [2024-10-01 15:55:47.853854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-10-01 15:55:47.853886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 [2024-10-01 15:55:47.853903] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:08.512 15:55:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:38:08.512 [2024-10-01 15:55:47.854095] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:08.512 [2024-10-01 15:55:47.854257] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:08.512 [2024-10-01 15:55:47.854265] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:08.512 [2024-10-01 15:55:47.854273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:08.512 [2024-10-01 15:55:47.856690] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:08.512 15:55:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:08.512 15:55:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:38:08.512 15:55:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:08.512 15:55:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:38:08.512 [2024-10-01 15:55:47.866022] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:08.512 [2024-10-01 15:55:47.866498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-10-01 15:55:47.866514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420 [2024-10-01 15:55:47.866522] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:08.512 [2024-10-01 15:55:47.866691] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:08.512 [2024-10-01 15:55:47.866849] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state [2024-10-01 15:55:47.866856] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed [2024-10-01 15:55:47.866864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:08.512 [2024-10-01 15:55:47.869293] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:08.512 15:55:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:08.512 15:55:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:38:08.512 15:55:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:08.512 15:55:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:38:08.512 [2024-10-01 15:55:47.878612] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:08.512 [2024-10-01 15:55:47.879224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.512 [2024-10-01 15:55:47.879255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42e50 with addr=10.0.0.2, port=4420
00:38:08.512 [2024-10-01 15:55:47.879267] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42e50 is same with the state(6) to be set
00:38:08.512 [2024-10-01 15:55:47.879453] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42e50 (9): Bad file descriptor
00:38:08.512 [2024-10-01 15:55:47.879616] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:08.512 [2024-10-01 15:55:47.879624] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:08.512 [2024-10-01 15:55:47.879632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:08.512 [2024-10-01 15:55:47.882058] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:08.512 [2024-10-01 15:55:47.884177] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:08.512 15:55:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:08.512 15:55:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3403438 00:38:08.512 [2024-10-01 15:55:47.891303] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:08.512 [2024-10-01 15:55:47.927385] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:38:17.054 4779.86 IOPS, 18.67 MiB/s 5810.12 IOPS, 22.70 MiB/s 6605.56 IOPS, 25.80 MiB/s 7221.20 IOPS, 28.21 MiB/s 7743.45 IOPS, 30.25 MiB/s 8166.92 IOPS, 31.90 MiB/s 8528.38 IOPS, 33.31 MiB/s 8854.21 IOPS, 34.59 MiB/s 9122.93 IOPS, 35.64 MiB/s 00:38:17.054 Latency(us) 00:38:17.054 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:17.054 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:38:17.054 Verification LBA range: start 0x0 length 0x4000 00:38:17.054 Nvme1n1 : 15.01 9122.98 35.64 13448.09 0.00 5651.58 549.55 12997.97 00:38:17.054 =================================================================================================================== 00:38:17.054 Total : 9122.98 35.64 13448.09 0.00 5651.58 549.55 12997.97 00:38:17.054 15:55:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:38:17.054 15:55:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:17.054 15:55:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.054 15:55:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:17.054 15:55:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:17.054 15:55:56 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:38:17.054 15:55:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:38:17.054 15:55:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # nvmfcleanup 00:38:17.054 15:55:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:38:17.054 15:55:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:17.054 15:55:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:38:17.054 15:55:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:17.054 15:55:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:17.054 rmmod nvme_tcp 00:38:17.054 rmmod nvme_fabrics 00:38:17.054 rmmod nvme_keyring 00:38:17.054 15:55:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:17.054 15:55:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:38:17.054 15:55:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:38:17.054 15:55:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@513 -- # '[' -n 3404634 ']' 00:38:17.054 15:55:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@514 -- # killprocess 3404634 00:38:17.054 15:55:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 3404634 ']' 00:38:17.054 15:55:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 3404634 00:38:17.054 15:55:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:38:17.054 15:55:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:17.054 15:55:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3404634 00:38:17.314 15:55:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 
00:38:17.314 15:55:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:38:17.314 15:55:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3404634' 00:38:17.314 killing process with pid 3404634 00:38:17.314 15:55:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 3404634 00:38:17.314 15:55:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 3404634 00:38:17.314 15:55:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:38:17.314 15:55:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:38:17.314 15:55:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:38:17.314 15:55:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:38:17.314 15:55:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@787 -- # iptables-save 00:38:17.314 15:55:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:38:17.314 15:55:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@787 -- # iptables-restore 00:38:17.314 15:55:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:17.314 15:55:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:17.314 15:55:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:17.314 15:55:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:17.314 15:55:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:19.857 15:55:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:19.857 00:38:19.857 real 0m28.045s 00:38:19.857 user 1m2.762s 00:38:19.857 sys 0m7.570s 00:38:19.857 
15:55:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:19.857 15:55:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:19.857 ************************************ 00:38:19.857 END TEST nvmf_bdevperf 00:38:19.857 ************************************ 00:38:19.857 15:55:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:38:19.857 15:55:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:38:19.857 15:55:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:19.857 15:55:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:38:19.857 ************************************ 00:38:19.857 START TEST nvmf_target_disconnect 00:38:19.857 ************************************ 00:38:19.857 15:55:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:38:19.857 * Looking for test storage... 
00:38:19.857 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:38:19.857 15:55:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:19.857 15:55:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:38:19.857 15:55:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:19.857 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:19.857 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:19.857 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:19.857 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:19.857 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:38:19.857 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:38:19.857 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:38:19.857 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:38:19.857 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:38:19.857 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:38:19.857 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:38:19.857 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:38:19.858 15:55:59 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:19.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:19.858 
--rc genhtml_branch_coverage=1 00:38:19.858 --rc genhtml_function_coverage=1 00:38:19.858 --rc genhtml_legend=1 00:38:19.858 --rc geninfo_all_blocks=1 00:38:19.858 --rc geninfo_unexecuted_blocks=1 00:38:19.858 00:38:19.858 ' 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:19.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:19.858 --rc genhtml_branch_coverage=1 00:38:19.858 --rc genhtml_function_coverage=1 00:38:19.858 --rc genhtml_legend=1 00:38:19.858 --rc geninfo_all_blocks=1 00:38:19.858 --rc geninfo_unexecuted_blocks=1 00:38:19.858 00:38:19.858 ' 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:19.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:19.858 --rc genhtml_branch_coverage=1 00:38:19.858 --rc genhtml_function_coverage=1 00:38:19.858 --rc genhtml_legend=1 00:38:19.858 --rc geninfo_all_blocks=1 00:38:19.858 --rc geninfo_unexecuted_blocks=1 00:38:19.858 00:38:19.858 ' 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:19.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:19.858 --rc genhtml_branch_coverage=1 00:38:19.858 --rc genhtml_function_coverage=1 00:38:19.858 --rc genhtml_legend=1 00:38:19.858 --rc geninfo_all_blocks=1 00:38:19.858 --rc geninfo_unexecuted_blocks=1 00:38:19.858 00:38:19.858 ' 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s 
extglob 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:19.858 15:55:59 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:19.858 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@472 -- # prepare_net_devs 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@434 -- # local -g is_hw=no 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@436 -- # remove_spdk_ns 00:38:19.858 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:19.859 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:38:19.859 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:19.859 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:38:19.859 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:38:19.859 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:38:19.859 15:55:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:27.994 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:27.994 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:38:27.994 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:27.994 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:27.994 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:27.994 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:27.994 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:27.994 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:38:27.994 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:27.994 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:38:27.994 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:38:27.994 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:38:27.994 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:38:27.994 
15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:38:27.994 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:38:27.994 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:27.994 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:27.994 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:27.994 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:27.994 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:27.994 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:27.994 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:27.994 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:27.994 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:27.994 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:27.994 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:27.994 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:38:27.994 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:38:27.994 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ e810 == 
mlx5 ]] 00:38:27.994 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:38:27.994 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:38:27.994 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:38:27.994 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:38:27.994 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:38:27.994 Found 0000:31:00.0 (0x8086 - 0x159b) 00:38:27.994 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:38:27.994 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:38:27.994 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:27.994 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:27.994 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:38:27.994 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:38:27.994 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:38:27.994 Found 0000:31:00.1 (0x8086 - 0x159b) 00:38:27.994 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:38:27.994 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:38:27.994 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:27.994 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:27.994 
15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:38:27.994 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:38:27.994 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:38:27.994 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:38:27.994 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:38:27.994 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:27.994 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:38:27.994 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:27.994 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ up == up ]] 00:38:27.994 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:38:27.994 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:27.994 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:38:27.994 Found net devices under 0000:31:00.0: cvl_0_0 00:38:27.994 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:38:27.994 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:38:27.995 15:56:06 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ up == up ]] 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:38:27.995 Found net devices under 0000:31:00.1: cvl_0_1 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # is_hw=yes 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:27.995 
15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:27.995 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:27.995 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.632 ms 00:38:27.995 00:38:27.995 --- 10.0.0.2 ping statistics --- 00:38:27.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:27.995 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:27.995 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:27.995 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:38:27.995 00:38:27.995 --- 10.0.0.1 ping statistics --- 00:38:27.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:27.995 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # return 0 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:27.995 15:56:06 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:27.995 ************************************ 00:38:27.995 START TEST nvmf_target_disconnect_tc1 00:38:27.995 ************************************ 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 
00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:27.995 [2024-10-01 15:56:06.618701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.995 [2024-10-01 15:56:06.618771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1591240 with addr=10.0.0.2, port=4420 00:38:27.995 [2024-10-01 15:56:06.618812] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: 
failed to create admin qpair 00:38:27.995 [2024-10-01 15:56:06.618827] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:38:27.995 [2024-10-01 15:56:06.618838] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:38:27.995 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:38:27.995 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:38:27.995 Initializing NVMe Controllers 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:27.995 00:38:27.995 real 0m0.133s 00:38:27.995 user 0m0.053s 00:38:27.995 sys 0m0.081s 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:38:27.995 ************************************ 00:38:27.995 END TEST nvmf_target_disconnect_tc1 00:38:27.995 ************************************ 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:27.995 15:56:06 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:27.995 ************************************ 00:38:27.995 START TEST nvmf_target_disconnect_tc2 00:38:27.995 ************************************ 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:38:27.995 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:38:27.996 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:38:27.996 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:27.996 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:27.996 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # nvmfpid=3410639 00:38:27.996 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # waitforlisten 3410639 00:38:27.996 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:38:27.996 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3410639 ']' 00:38:27.996 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:27.996 15:56:06 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:27.996 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:27.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:27.996 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:27.996 15:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:27.996 [2024-10-01 15:56:06.775860] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:38:27.996 [2024-10-01 15:56:06.775924] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:27.996 [2024-10-01 15:56:06.813285] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:38:27.996 [2024-10-01 15:56:06.859699] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:27.996 [2024-10-01 15:56:06.892514] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:27.996 [2024-10-01 15:56:06.892551] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:27.996 [2024-10-01 15:56:06.892559] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:27.996 [2024-10-01 15:56:06.892565] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:38:27.996 [2024-10-01 15:56:06.892571] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:27.996 [2024-10-01 15:56:06.892714] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:38:27.996 [2024-10-01 15:56:06.892843] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:38:27.996 [2024-10-01 15:56:06.892971] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:38:27.996 [2024-10-01 15:56:06.893164] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:38:28.256 15:56:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:28.256 15:56:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:38:28.256 15:56:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:38:28.256 15:56:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:28.256 15:56:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:28.256 15:56:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:28.256 15:56:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:28.256 15:56:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:28.256 15:56:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:28.256 Malloc0 00:38:28.256 15:56:07 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:28.256 15:56:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:38:28.256 15:56:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:28.256 15:56:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:28.256 [2024-10-01 15:56:07.634743] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:28.256 15:56:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:28.256 15:56:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:28.256 15:56:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:28.256 15:56:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:28.256 15:56:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:28.256 15:56:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:28.256 15:56:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:28.256 15:56:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:28.256 15:56:07 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:28.256 15:56:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:28.256 15:56:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:28.256 15:56:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:28.256 [2024-10-01 15:56:07.675034] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:28.256 15:56:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:28.256 15:56:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:28.256 15:56:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:28.257 15:56:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:28.257 15:56:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:28.257 15:56:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3410920 00:38:28.257 15:56:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:38:28.257 15:56:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:30.814 15:56:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3410639 00:38:30.814 15:56:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:38:30.815 Read completed with error (sct=0, sc=8) 00:38:30.815 starting I/O failed 00:38:30.815 Read completed with error (sct=0, sc=8) 00:38:30.815 starting I/O failed 00:38:30.815 Read completed with error (sct=0, sc=8) 00:38:30.815 starting I/O failed 00:38:30.815 Read completed with error (sct=0, sc=8) 00:38:30.815 starting I/O failed 00:38:30.815 Read completed with error (sct=0, sc=8) 00:38:30.815 starting I/O failed 00:38:30.815 Write completed with error (sct=0, sc=8) 00:38:30.815 starting I/O failed 00:38:30.815 Write completed with error (sct=0, sc=8) 00:38:30.815 starting I/O failed 00:38:30.815 Read completed with error (sct=0, sc=8) 00:38:30.815 starting I/O failed 00:38:30.815 Read completed with error (sct=0, sc=8) 00:38:30.815 starting I/O failed 00:38:30.815 Read completed with error (sct=0, sc=8) 00:38:30.815 starting I/O failed 00:38:30.815 Read completed with error (sct=0, sc=8) 00:38:30.815 starting I/O failed 00:38:30.815 Read completed with error (sct=0, sc=8) 00:38:30.815 starting I/O failed 00:38:30.815 Read completed with error (sct=0, sc=8) 00:38:30.815 starting I/O failed 00:38:30.815 Read completed with error (sct=0, sc=8) 00:38:30.815 starting I/O failed 00:38:30.815 Write completed with error (sct=0, sc=8) 00:38:30.815 starting I/O failed 00:38:30.815 Read completed with error (sct=0, sc=8) 00:38:30.815 starting I/O failed 00:38:30.815 Read completed with error (sct=0, sc=8) 00:38:30.815 starting I/O failed 00:38:30.815 Read completed with error (sct=0, sc=8) 00:38:30.815 starting I/O 
failed 00:38:30.815 Write completed with error (sct=0, sc=8) 00:38:30.815 starting I/O failed 00:38:30.815 Read completed with error (sct=0, sc=8) 00:38:30.815 starting I/O failed 00:38:30.815 Write completed with error (sct=0, sc=8) 00:38:30.815 starting I/O failed 00:38:30.815 Read completed with error (sct=0, sc=8) 00:38:30.815 starting I/O failed 00:38:30.815 Read completed with error (sct=0, sc=8) 00:38:30.815 starting I/O failed 00:38:30.815 Read completed with error (sct=0, sc=8) 00:38:30.815 starting I/O failed 00:38:30.815 Read completed with error (sct=0, sc=8) 00:38:30.815 starting I/O failed 00:38:30.815 Write completed with error (sct=0, sc=8) 00:38:30.815 starting I/O failed 00:38:30.815 Read completed with error (sct=0, sc=8) 00:38:30.815 starting I/O failed 00:38:30.815 Read completed with error (sct=0, sc=8) 00:38:30.815 starting I/O failed 00:38:30.815 Read completed with error (sct=0, sc=8) 00:38:30.815 starting I/O failed 00:38:30.815 Write completed with error (sct=0, sc=8) 00:38:30.815 starting I/O failed 00:38:30.815 Read completed with error (sct=0, sc=8) 00:38:30.815 starting I/O failed 00:38:30.815 Read completed with error (sct=0, sc=8) 00:38:30.815 starting I/O failed 00:38:30.815 [2024-10-01 15:56:09.708729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:30.815 [2024-10-01 15:56:09.709120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.815 [2024-10-01 15:56:09.709144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.815 qpair failed and we were unable to recover it. 
00:38:30.815 [2024-10-01 15:56:09.709438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.815 [2024-10-01 15:56:09.709450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.815 qpair failed and we were unable to recover it. 00:38:30.815 [2024-10-01 15:56:09.709723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.815 [2024-10-01 15:56:09.709734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.815 qpair failed and we were unable to recover it. 00:38:30.815 [2024-10-01 15:56:09.709854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.815 [2024-10-01 15:56:09.709865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.815 qpair failed and we were unable to recover it. 00:38:30.815 [2024-10-01 15:56:09.710226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.815 [2024-10-01 15:56:09.710236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.815 qpair failed and we were unable to recover it. 00:38:30.815 [2024-10-01 15:56:09.710561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.815 [2024-10-01 15:56:09.710571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.815 qpair failed and we were unable to recover it. 
00:38:30.815 [2024-10-01 15:56:09.710851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.815 [2024-10-01 15:56:09.710860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.815 qpair failed and we were unable to recover it. 00:38:30.815 [2024-10-01 15:56:09.711058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.815 [2024-10-01 15:56:09.711068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.815 qpair failed and we were unable to recover it. 00:38:30.815 [2024-10-01 15:56:09.711412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.815 [2024-10-01 15:56:09.711422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.815 qpair failed and we were unable to recover it. 00:38:30.815 [2024-10-01 15:56:09.711592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.815 [2024-10-01 15:56:09.711602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.815 qpair failed and we were unable to recover it. 00:38:30.815 [2024-10-01 15:56:09.711926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.815 [2024-10-01 15:56:09.711937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.815 qpair failed and we were unable to recover it. 
00:38:30.815 [2024-10-01 15:56:09.712163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.815 [2024-10-01 15:56:09.712173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.815 qpair failed and we were unable to recover it. 00:38:30.815 [2024-10-01 15:56:09.712454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.815 [2024-10-01 15:56:09.712464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.815 qpair failed and we were unable to recover it. 00:38:30.815 [2024-10-01 15:56:09.712770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.815 [2024-10-01 15:56:09.712780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.815 qpair failed and we were unable to recover it. 00:38:30.815 [2024-10-01 15:56:09.713142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.815 [2024-10-01 15:56:09.713153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.815 qpair failed and we were unable to recover it. 00:38:30.815 [2024-10-01 15:56:09.713449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.815 [2024-10-01 15:56:09.713459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.815 qpair failed and we were unable to recover it. 
00:38:30.817 [2024-10-01 15:56:09.745673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.817 [2024-10-01 15:56:09.745688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.817 qpair failed and we were unable to recover it. 00:38:30.817 [2024-10-01 15:56:09.746009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.817 [2024-10-01 15:56:09.746025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.817 qpair failed and we were unable to recover it. 00:38:30.817 [2024-10-01 15:56:09.746328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.817 [2024-10-01 15:56:09.746344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.817 qpair failed and we were unable to recover it. 00:38:30.817 [2024-10-01 15:56:09.746667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.817 [2024-10-01 15:56:09.746683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.817 qpair failed and we were unable to recover it. 00:38:30.817 [2024-10-01 15:56:09.747001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.817 [2024-10-01 15:56:09.747017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.817 qpair failed and we were unable to recover it. 
00:38:30.817 [2024-10-01 15:56:09.747314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.817 [2024-10-01 15:56:09.747330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.817 qpair failed and we were unable to recover it. 00:38:30.817 [2024-10-01 15:56:09.747680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.817 [2024-10-01 15:56:09.747696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.817 qpair failed and we were unable to recover it. 00:38:30.817 [2024-10-01 15:56:09.747992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.817 [2024-10-01 15:56:09.748009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.817 qpair failed and we were unable to recover it. 00:38:30.817 [2024-10-01 15:56:09.748322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.817 [2024-10-01 15:56:09.748338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.817 qpair failed and we were unable to recover it. 00:38:30.817 [2024-10-01 15:56:09.748665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.817 [2024-10-01 15:56:09.748680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.817 qpair failed and we were unable to recover it. 
00:38:30.817 [2024-10-01 15:56:09.749047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.817 [2024-10-01 15:56:09.749065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.817 qpair failed and we were unable to recover it. 00:38:30.817 [2024-10-01 15:56:09.749392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.817 [2024-10-01 15:56:09.749409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.817 qpair failed and we were unable to recover it. 00:38:30.817 [2024-10-01 15:56:09.749738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.817 [2024-10-01 15:56:09.749754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.817 qpair failed and we were unable to recover it. 00:38:30.817 [2024-10-01 15:56:09.749937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.817 [2024-10-01 15:56:09.749956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.817 qpair failed and we were unable to recover it. 00:38:30.817 [2024-10-01 15:56:09.750298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.817 [2024-10-01 15:56:09.750314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.817 qpair failed and we were unable to recover it. 
00:38:30.817 [2024-10-01 15:56:09.750647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.817 [2024-10-01 15:56:09.750668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.817 qpair failed and we were unable to recover it. 00:38:30.817 [2024-10-01 15:56:09.750963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.817 [2024-10-01 15:56:09.750979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.817 qpair failed and we were unable to recover it. 00:38:30.817 [2024-10-01 15:56:09.751276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.817 [2024-10-01 15:56:09.751293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.817 qpair failed and we were unable to recover it. 00:38:30.817 [2024-10-01 15:56:09.751642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.817 [2024-10-01 15:56:09.751658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.817 qpair failed and we were unable to recover it. 00:38:30.817 [2024-10-01 15:56:09.751972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.817 [2024-10-01 15:56:09.751988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.817 qpair failed and we were unable to recover it. 
00:38:30.817 [2024-10-01 15:56:09.752158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.817 [2024-10-01 15:56:09.752174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.817 qpair failed and we were unable to recover it. 00:38:30.817 [2024-10-01 15:56:09.752361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.817 [2024-10-01 15:56:09.752379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.817 qpair failed and we were unable to recover it. 00:38:30.817 [2024-10-01 15:56:09.752547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.817 [2024-10-01 15:56:09.752565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.817 qpair failed and we were unable to recover it. 00:38:30.817 [2024-10-01 15:56:09.752901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.817 [2024-10-01 15:56:09.752918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.817 qpair failed and we were unable to recover it. 00:38:30.817 [2024-10-01 15:56:09.753155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.817 [2024-10-01 15:56:09.753171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.817 qpair failed and we were unable to recover it. 
00:38:30.817 [2024-10-01 15:56:09.753489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.817 [2024-10-01 15:56:09.753505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.817 qpair failed and we were unable to recover it. 00:38:30.817 [2024-10-01 15:56:09.753827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.817 [2024-10-01 15:56:09.753847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.817 qpair failed and we were unable to recover it. 00:38:30.817 [2024-10-01 15:56:09.754213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.817 [2024-10-01 15:56:09.754234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.817 qpair failed and we were unable to recover it. 00:38:30.817 [2024-10-01 15:56:09.754569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.817 [2024-10-01 15:56:09.754590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.817 qpair failed and we were unable to recover it. 00:38:30.817 [2024-10-01 15:56:09.754986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.817 [2024-10-01 15:56:09.755007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.817 qpair failed and we were unable to recover it. 
00:38:30.817 [2024-10-01 15:56:09.755183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.817 [2024-10-01 15:56:09.755205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.817 qpair failed and we were unable to recover it. 00:38:30.817 [2024-10-01 15:56:09.755410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.817 [2024-10-01 15:56:09.755430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.817 qpair failed and we were unable to recover it. 00:38:30.817 [2024-10-01 15:56:09.755742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.817 [2024-10-01 15:56:09.755762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.817 qpair failed and we were unable to recover it. 00:38:30.817 [2024-10-01 15:56:09.756067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.817 [2024-10-01 15:56:09.756087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.817 qpair failed and we were unable to recover it. 00:38:30.817 [2024-10-01 15:56:09.756328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.817 [2024-10-01 15:56:09.756348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.817 qpair failed and we were unable to recover it. 
00:38:30.817 [2024-10-01 15:56:09.756536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.817 [2024-10-01 15:56:09.756556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.817 qpair failed and we were unable to recover it. 00:38:30.817 [2024-10-01 15:56:09.756874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.817 [2024-10-01 15:56:09.756901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.817 qpair failed and we were unable to recover it. 00:38:30.817 [2024-10-01 15:56:09.757206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.817 [2024-10-01 15:56:09.757227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.817 qpair failed and we were unable to recover it. 00:38:30.817 [2024-10-01 15:56:09.757432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.817 [2024-10-01 15:56:09.757452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.817 qpair failed and we were unable to recover it. 00:38:30.817 [2024-10-01 15:56:09.757796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.817 [2024-10-01 15:56:09.757816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.817 qpair failed and we were unable to recover it. 
00:38:30.817 [2024-10-01 15:56:09.758107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.817 [2024-10-01 15:56:09.758129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.817 qpair failed and we were unable to recover it. 00:38:30.817 [2024-10-01 15:56:09.758455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.817 [2024-10-01 15:56:09.758476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.817 qpair failed and we were unable to recover it. 00:38:30.817 [2024-10-01 15:56:09.758839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.817 [2024-10-01 15:56:09.758860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.817 qpair failed and we were unable to recover it. 00:38:30.817 [2024-10-01 15:56:09.759186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.818 [2024-10-01 15:56:09.759207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.818 qpair failed and we were unable to recover it. 00:38:30.818 [2024-10-01 15:56:09.759518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.818 [2024-10-01 15:56:09.759538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.818 qpair failed and we were unable to recover it. 
00:38:30.818 [2024-10-01 15:56:09.759906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.818 [2024-10-01 15:56:09.759928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.818 qpair failed and we were unable to recover it. 00:38:30.818 [2024-10-01 15:56:09.760263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.818 [2024-10-01 15:56:09.760284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.818 qpair failed and we were unable to recover it. 00:38:30.818 [2024-10-01 15:56:09.760616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.818 [2024-10-01 15:56:09.760637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.818 qpair failed and we were unable to recover it. 00:38:30.818 [2024-10-01 15:56:09.760952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.818 [2024-10-01 15:56:09.760972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.818 qpair failed and we were unable to recover it. 00:38:30.818 [2024-10-01 15:56:09.761303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.818 [2024-10-01 15:56:09.761324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.818 qpair failed and we were unable to recover it. 
00:38:30.818 [2024-10-01 15:56:09.761661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.818 [2024-10-01 15:56:09.761681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.818 qpair failed and we were unable to recover it. 00:38:30.818 [2024-10-01 15:56:09.762049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.818 [2024-10-01 15:56:09.762070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.818 qpair failed and we were unable to recover it. 00:38:30.818 [2024-10-01 15:56:09.762392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.818 [2024-10-01 15:56:09.762412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.818 qpair failed and we were unable to recover it. 00:38:30.818 [2024-10-01 15:56:09.762727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.818 [2024-10-01 15:56:09.762747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.818 qpair failed and we were unable to recover it. 00:38:30.818 [2024-10-01 15:56:09.762941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.818 [2024-10-01 15:56:09.762961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.818 qpair failed and we were unable to recover it. 
00:38:30.818 [2024-10-01 15:56:09.763348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.818 [2024-10-01 15:56:09.763372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.818 qpair failed and we were unable to recover it. 00:38:30.818 [2024-10-01 15:56:09.763663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.818 [2024-10-01 15:56:09.763683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.818 qpair failed and we were unable to recover it. 00:38:30.818 [2024-10-01 15:56:09.764004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.818 [2024-10-01 15:56:09.764025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.818 qpair failed and we were unable to recover it. 00:38:30.818 [2024-10-01 15:56:09.764357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.818 [2024-10-01 15:56:09.764377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.818 qpair failed and we were unable to recover it. 00:38:30.818 [2024-10-01 15:56:09.764702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.818 [2024-10-01 15:56:09.764722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.818 qpair failed and we were unable to recover it. 
00:38:30.818 [2024-10-01 15:56:09.765037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.818 [2024-10-01 15:56:09.765058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.818 qpair failed and we were unable to recover it. 00:38:30.818 [2024-10-01 15:56:09.765377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.818 [2024-10-01 15:56:09.765397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.818 qpair failed and we were unable to recover it. 00:38:30.818 [2024-10-01 15:56:09.765589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.818 [2024-10-01 15:56:09.765611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.818 qpair failed and we were unable to recover it. 00:38:30.818 [2024-10-01 15:56:09.765950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.818 [2024-10-01 15:56:09.765972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.818 qpair failed and we were unable to recover it. 00:38:30.818 [2024-10-01 15:56:09.766292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.818 [2024-10-01 15:56:09.766312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.818 qpair failed and we were unable to recover it. 
00:38:30.818 [2024-10-01 15:56:09.766697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.818 [2024-10-01 15:56:09.766725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.818 qpair failed and we were unable to recover it. 00:38:30.818 [2024-10-01 15:56:09.767086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.818 [2024-10-01 15:56:09.767114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.818 qpair failed and we were unable to recover it. 00:38:30.818 [2024-10-01 15:56:09.767447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.818 [2024-10-01 15:56:09.767475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.818 qpair failed and we were unable to recover it. 00:38:30.818 [2024-10-01 15:56:09.767821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.818 [2024-10-01 15:56:09.767849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.818 qpair failed and we were unable to recover it. 00:38:30.818 [2024-10-01 15:56:09.768179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.818 [2024-10-01 15:56:09.768207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.818 qpair failed and we were unable to recover it. 
00:38:30.818 [2024-10-01 15:56:09.768553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.818 [2024-10-01 15:56:09.768581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.818 qpair failed and we were unable to recover it. 00:38:30.818 [2024-10-01 15:56:09.768929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.818 [2024-10-01 15:56:09.768958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.818 qpair failed and we were unable to recover it. 00:38:30.818 [2024-10-01 15:56:09.769320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.818 [2024-10-01 15:56:09.769346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.818 qpair failed and we were unable to recover it. 00:38:30.818 [2024-10-01 15:56:09.769589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.818 [2024-10-01 15:56:09.769617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.818 qpair failed and we were unable to recover it. 00:38:30.818 [2024-10-01 15:56:09.769971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.818 [2024-10-01 15:56:09.770004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.818 qpair failed and we were unable to recover it. 
00:38:30.818 [2024-10-01 15:56:09.770381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.818 [2024-10-01 15:56:09.770409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.818 qpair failed and we were unable to recover it. 00:38:30.818 [2024-10-01 15:56:09.770757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.818 [2024-10-01 15:56:09.770785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.818 qpair failed and we were unable to recover it. 00:38:30.818 [2024-10-01 15:56:09.771213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.818 [2024-10-01 15:56:09.771242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.818 qpair failed and we were unable to recover it. 00:38:30.818 [2024-10-01 15:56:09.771588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.818 [2024-10-01 15:56:09.771616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.818 qpair failed and we were unable to recover it. 00:38:30.818 [2024-10-01 15:56:09.771852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.818 [2024-10-01 15:56:09.771880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.818 qpair failed and we were unable to recover it. 
00:38:30.818 [2024-10-01 15:56:09.772220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.818 [2024-10-01 15:56:09.772249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.818 qpair failed and we were unable to recover it. 00:38:30.818 [2024-10-01 15:56:09.772618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.818 [2024-10-01 15:56:09.772646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.818 qpair failed and we were unable to recover it. 00:38:30.818 [2024-10-01 15:56:09.772961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.818 [2024-10-01 15:56:09.772991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.818 qpair failed and we were unable to recover it. 00:38:30.818 [2024-10-01 15:56:09.773333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.818 [2024-10-01 15:56:09.773361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.818 qpair failed and we were unable to recover it. 00:38:30.818 [2024-10-01 15:56:09.773693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.818 [2024-10-01 15:56:09.773721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.818 qpair failed and we were unable to recover it. 
00:38:30.818 [... repeated identical entries elided (2024-10-01 15:56:09.774088 through 15:56:09.813760): connect() failed, errno = 111, followed by nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." ...]
00:38:30.820 [2024-10-01 15:56:09.814085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.820 [2024-10-01 15:56:09.814114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.820 qpair failed and we were unable to recover it. 00:38:30.820 [2024-10-01 15:56:09.814399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.820 [2024-10-01 15:56:09.814426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.820 qpair failed and we were unable to recover it. 00:38:30.820 [2024-10-01 15:56:09.814784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.820 [2024-10-01 15:56:09.814812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.820 qpair failed and we were unable to recover it. 00:38:30.820 [2024-10-01 15:56:09.815181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.820 [2024-10-01 15:56:09.815210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.820 qpair failed and we were unable to recover it. 00:38:30.820 [2024-10-01 15:56:09.815560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.820 [2024-10-01 15:56:09.815588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.820 qpair failed and we were unable to recover it. 
00:38:30.820 [2024-10-01 15:56:09.815938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.820 [2024-10-01 15:56:09.815967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.820 qpair failed and we were unable to recover it. 00:38:30.820 [2024-10-01 15:56:09.816336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.820 [2024-10-01 15:56:09.816364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.820 qpair failed and we were unable to recover it. 00:38:30.820 [2024-10-01 15:56:09.816699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.820 [2024-10-01 15:56:09.816727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.820 qpair failed and we were unable to recover it. 00:38:30.820 [2024-10-01 15:56:09.817173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.820 [2024-10-01 15:56:09.817202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.820 qpair failed and we were unable to recover it. 00:38:30.820 [2024-10-01 15:56:09.817539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.820 [2024-10-01 15:56:09.817566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.820 qpair failed and we were unable to recover it. 
00:38:30.820 [2024-10-01 15:56:09.817886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.820 [2024-10-01 15:56:09.817930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.820 qpair failed and we were unable to recover it. 00:38:30.820 [2024-10-01 15:56:09.818265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.820 [2024-10-01 15:56:09.818293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.820 qpair failed and we were unable to recover it. 00:38:30.820 [2024-10-01 15:56:09.818531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.820 [2024-10-01 15:56:09.818558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.820 qpair failed and we were unable to recover it. 00:38:30.820 [2024-10-01 15:56:09.818911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.820 [2024-10-01 15:56:09.818940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.820 qpair failed and we were unable to recover it. 00:38:30.820 [2024-10-01 15:56:09.819284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.820 [2024-10-01 15:56:09.819311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.820 qpair failed and we were unable to recover it. 
00:38:30.820 [2024-10-01 15:56:09.819636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.820 [2024-10-01 15:56:09.819663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.820 qpair failed and we were unable to recover it. 00:38:30.820 [2024-10-01 15:56:09.820009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.820 [2024-10-01 15:56:09.820038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.820 qpair failed and we were unable to recover it. 00:38:30.820 [2024-10-01 15:56:09.820385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.820 [2024-10-01 15:56:09.820413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.820 qpair failed and we were unable to recover it. 00:38:30.820 [2024-10-01 15:56:09.820746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.820 [2024-10-01 15:56:09.820773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.820 qpair failed and we were unable to recover it. 00:38:30.820 [2024-10-01 15:56:09.821119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.820 [2024-10-01 15:56:09.821149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.820 qpair failed and we were unable to recover it. 
00:38:30.820 [2024-10-01 15:56:09.821502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.820 [2024-10-01 15:56:09.821536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.820 qpair failed and we were unable to recover it. 00:38:30.820 [2024-10-01 15:56:09.821867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.820 [2024-10-01 15:56:09.821903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.820 qpair failed and we were unable to recover it. 00:38:30.820 [2024-10-01 15:56:09.822258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.820 [2024-10-01 15:56:09.822286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.820 qpair failed and we were unable to recover it. 00:38:30.820 [2024-10-01 15:56:09.822630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.820 [2024-10-01 15:56:09.822659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.820 qpair failed and we were unable to recover it. 00:38:30.820 [2024-10-01 15:56:09.823000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.820 [2024-10-01 15:56:09.823028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.820 qpair failed and we were unable to recover it. 
00:38:30.820 [2024-10-01 15:56:09.823376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.820 [2024-10-01 15:56:09.823403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.820 qpair failed and we were unable to recover it. 00:38:30.820 [2024-10-01 15:56:09.823733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.820 [2024-10-01 15:56:09.823761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.820 qpair failed and we were unable to recover it. 00:38:30.821 [2024-10-01 15:56:09.824158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.821 [2024-10-01 15:56:09.824186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.821 qpair failed and we were unable to recover it. 00:38:30.821 [2024-10-01 15:56:09.824527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.821 [2024-10-01 15:56:09.824554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.821 qpair failed and we were unable to recover it. 00:38:30.821 [2024-10-01 15:56:09.824917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.821 [2024-10-01 15:56:09.824946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.821 qpair failed and we were unable to recover it. 
00:38:30.821 [2024-10-01 15:56:09.825306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.821 [2024-10-01 15:56:09.825333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.821 qpair failed and we were unable to recover it. 00:38:30.821 [2024-10-01 15:56:09.825680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.821 [2024-10-01 15:56:09.825707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.821 qpair failed and we were unable to recover it. 00:38:30.821 [2024-10-01 15:56:09.826041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.821 [2024-10-01 15:56:09.826069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.821 qpair failed and we were unable to recover it. 00:38:30.821 [2024-10-01 15:56:09.826390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.821 [2024-10-01 15:56:09.826417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.821 qpair failed and we were unable to recover it. 00:38:30.821 [2024-10-01 15:56:09.826758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.821 [2024-10-01 15:56:09.826786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.821 qpair failed and we were unable to recover it. 
00:38:30.821 [2024-10-01 15:56:09.827104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.821 [2024-10-01 15:56:09.827132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.821 qpair failed and we were unable to recover it. 00:38:30.821 [2024-10-01 15:56:09.827377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.821 [2024-10-01 15:56:09.827409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.821 qpair failed and we were unable to recover it. 00:38:30.821 [2024-10-01 15:56:09.827741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.821 [2024-10-01 15:56:09.827771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.821 qpair failed and we were unable to recover it. 00:38:30.821 [2024-10-01 15:56:09.828107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.821 [2024-10-01 15:56:09.828135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.821 qpair failed and we were unable to recover it. 00:38:30.821 [2024-10-01 15:56:09.828484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.821 [2024-10-01 15:56:09.828512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.821 qpair failed and we were unable to recover it. 
00:38:30.821 [2024-10-01 15:56:09.828860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.821 [2024-10-01 15:56:09.828889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.821 qpair failed and we were unable to recover it. 00:38:30.821 [2024-10-01 15:56:09.829227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.821 [2024-10-01 15:56:09.829255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.821 qpair failed and we were unable to recover it. 00:38:30.821 [2024-10-01 15:56:09.829592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.821 [2024-10-01 15:56:09.829620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.821 qpair failed and we were unable to recover it. 00:38:30.821 [2024-10-01 15:56:09.829966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.821 [2024-10-01 15:56:09.829996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.821 qpair failed and we were unable to recover it. 00:38:30.821 [2024-10-01 15:56:09.830335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.821 [2024-10-01 15:56:09.830363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.821 qpair failed and we were unable to recover it. 
00:38:30.821 [2024-10-01 15:56:09.830704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.821 [2024-10-01 15:56:09.830732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.821 qpair failed and we were unable to recover it. 00:38:30.821 [2024-10-01 15:56:09.831090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.821 [2024-10-01 15:56:09.831119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.821 qpair failed and we were unable to recover it. 00:38:30.821 [2024-10-01 15:56:09.831363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.821 [2024-10-01 15:56:09.831391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.821 qpair failed and we were unable to recover it. 00:38:30.821 [2024-10-01 15:56:09.831771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.821 [2024-10-01 15:56:09.831799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.821 qpair failed and we were unable to recover it. 00:38:30.821 [2024-10-01 15:56:09.832136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.821 [2024-10-01 15:56:09.832165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.821 qpair failed and we were unable to recover it. 
00:38:30.821 [2024-10-01 15:56:09.832488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.821 [2024-10-01 15:56:09.832516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.821 qpair failed and we were unable to recover it. 00:38:30.821 [2024-10-01 15:56:09.832843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.821 [2024-10-01 15:56:09.832871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.821 qpair failed and we were unable to recover it. 00:38:30.821 [2024-10-01 15:56:09.833232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.821 [2024-10-01 15:56:09.833261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.821 qpair failed and we were unable to recover it. 00:38:30.821 [2024-10-01 15:56:09.833598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.821 [2024-10-01 15:56:09.833626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.821 qpair failed and we were unable to recover it. 00:38:30.821 [2024-10-01 15:56:09.833875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.821 [2024-10-01 15:56:09.833925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.821 qpair failed and we were unable to recover it. 
00:38:30.821 [2024-10-01 15:56:09.834281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.821 [2024-10-01 15:56:09.834309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.821 qpair failed and we were unable to recover it. 00:38:30.821 [2024-10-01 15:56:09.834653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.821 [2024-10-01 15:56:09.834681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.821 qpair failed and we were unable to recover it. 00:38:30.821 [2024-10-01 15:56:09.835038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.821 [2024-10-01 15:56:09.835067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.821 qpair failed and we were unable to recover it. 00:38:30.821 [2024-10-01 15:56:09.835399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.821 [2024-10-01 15:56:09.835426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.821 qpair failed and we were unable to recover it. 00:38:30.821 [2024-10-01 15:56:09.835676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.821 [2024-10-01 15:56:09.835703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.821 qpair failed and we were unable to recover it. 
00:38:30.821 [2024-10-01 15:56:09.836083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.821 [2024-10-01 15:56:09.836117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.821 qpair failed and we were unable to recover it. 00:38:30.821 [2024-10-01 15:56:09.836477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.821 [2024-10-01 15:56:09.836505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.821 qpair failed and we were unable to recover it. 00:38:30.821 [2024-10-01 15:56:09.836746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.821 [2024-10-01 15:56:09.836772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.821 qpair failed and we were unable to recover it. 00:38:30.821 [2024-10-01 15:56:09.837010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.821 [2024-10-01 15:56:09.837038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.821 qpair failed and we were unable to recover it. 00:38:30.821 [2024-10-01 15:56:09.837342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.821 [2024-10-01 15:56:09.837369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.821 qpair failed and we were unable to recover it. 
00:38:30.821 [2024-10-01 15:56:09.837719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.821 [2024-10-01 15:56:09.837746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.821 qpair failed and we were unable to recover it. 00:38:30.821 [2024-10-01 15:56:09.838189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.821 [2024-10-01 15:56:09.838218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.821 qpair failed and we were unable to recover it. 00:38:30.821 [2024-10-01 15:56:09.838556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.821 [2024-10-01 15:56:09.838583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.821 qpair failed and we were unable to recover it. 00:38:30.821 [2024-10-01 15:56:09.838921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.821 [2024-10-01 15:56:09.838950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.821 qpair failed and we were unable to recover it. 00:38:30.821 [2024-10-01 15:56:09.839322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.821 [2024-10-01 15:56:09.839349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.821 qpair failed and we were unable to recover it. 
00:38:30.821 [2024-10-01 15:56:09.839706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.821 [2024-10-01 15:56:09.839733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.821 qpair failed and we were unable to recover it. 00:38:30.821 [2024-10-01 15:56:09.840091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.821 [2024-10-01 15:56:09.840119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.821 qpair failed and we were unable to recover it. 00:38:30.821 [2024-10-01 15:56:09.840490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.821 [2024-10-01 15:56:09.840517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.821 qpair failed and we were unable to recover it. 00:38:30.821 [2024-10-01 15:56:09.840860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.821 [2024-10-01 15:56:09.840888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.821 qpair failed and we were unable to recover it. 00:38:30.821 [2024-10-01 15:56:09.841233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.821 [2024-10-01 15:56:09.841262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.821 qpair failed and we were unable to recover it. 
00:38:30.821 [2024-10-01 15:56:09.841611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.821 [2024-10-01 15:56:09.841638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.821 qpair failed and we were unable to recover it.
00:38:30.821 [2024-10-01 15:56:09.841879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.821 [2024-10-01 15:56:09.841927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.821 qpair failed and we were unable to recover it.
00:38:30.821 [2024-10-01 15:56:09.842251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.821 [2024-10-01 15:56:09.842279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.821 qpair failed and we were unable to recover it.
00:38:30.821 [2024-10-01 15:56:09.842593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.821 [2024-10-01 15:56:09.842621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.821 qpair failed and we were unable to recover it.
00:38:30.821 [2024-10-01 15:56:09.842949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.821 [2024-10-01 15:56:09.842979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.821 qpair failed and we were unable to recover it.
00:38:30.821 [2024-10-01 15:56:09.843310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.821 [2024-10-01 15:56:09.843337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.821 qpair failed and we were unable to recover it.
00:38:30.821 [2024-10-01 15:56:09.843552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.821 [2024-10-01 15:56:09.843582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.821 qpair failed and we were unable to recover it.
00:38:30.821 [2024-10-01 15:56:09.843934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.821 [2024-10-01 15:56:09.843963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.821 qpair failed and we were unable to recover it.
00:38:30.821 [2024-10-01 15:56:09.844296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.821 [2024-10-01 15:56:09.844324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.821 qpair failed and we were unable to recover it.
00:38:30.821 [2024-10-01 15:56:09.844708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.821 [2024-10-01 15:56:09.844736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.821 qpair failed and we were unable to recover it.
00:38:30.821 [2024-10-01 15:56:09.844987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.821 [2024-10-01 15:56:09.845016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.821 qpair failed and we were unable to recover it.
00:38:30.821 [2024-10-01 15:56:09.845365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.821 [2024-10-01 15:56:09.845393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.821 qpair failed and we were unable to recover it.
00:38:30.821 [2024-10-01 15:56:09.845620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.821 [2024-10-01 15:56:09.845652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.821 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.845984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.846012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.846371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.846399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.846757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.846785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.847128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.847156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.847379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.847409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.847736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.847765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.848126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.848154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.848500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.848528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.848852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.848881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.849247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.849275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.849619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.849647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.849993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.850022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.850338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.850372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.850711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.850739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.851083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.851113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.851503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.851530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.851827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.851855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.852097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.852125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.852460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.852486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.852696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.852727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.853081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.853110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.853450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.853478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.853839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.853867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.854190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.854219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.854571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.854599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.854974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.855004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.855347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.855376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.855728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.855755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.856111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.856139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.856351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.856381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.856717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.856745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.856962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.856994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.857288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.857316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.857625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.857652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.857904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.857933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.858239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.858266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.858633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.858660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.858865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.858905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.859272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.859301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.859663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.859691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.860033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.860063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.860398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.860427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.860776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.860804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.861154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.861181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.861554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.861582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.861933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.861963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.862375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.862403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.862715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.862744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.863098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.863127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.863467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.863495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.863838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.863866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.864096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.864128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.864457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.864492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.864823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.864851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.865249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.865278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.865608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.865641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.866026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.866056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.866404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.866432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.866796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.866824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.867061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.822 [2024-10-01 15:56:09.867093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.822 qpair failed and we were unable to recover it.
00:38:30.822 [2024-10-01 15:56:09.867452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.823 [2024-10-01 15:56:09.867480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.823 qpair failed and we were unable to recover it.
00:38:30.823 [2024-10-01 15:56:09.867821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.823 [2024-10-01 15:56:09.867848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.823 qpair failed and we were unable to recover it.
00:38:30.823 [2024-10-01 15:56:09.868175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.823 [2024-10-01 15:56:09.868203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.823 qpair failed and we were unable to recover it.
00:38:30.823 [2024-10-01 15:56:09.868563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.823 [2024-10-01 15:56:09.868591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.823 qpair failed and we were unable to recover it.
00:38:30.823 [2024-10-01 15:56:09.868805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.823 [2024-10-01 15:56:09.868836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.823 qpair failed and we were unable to recover it.
00:38:30.823 [2024-10-01 15:56:09.869175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.823 [2024-10-01 15:56:09.869203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.823 qpair failed and we were unable to recover it.
00:38:30.823 [2024-10-01 15:56:09.869551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.823 [2024-10-01 15:56:09.869580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.823 qpair failed and we were unable to recover it.
00:38:30.823 [2024-10-01 15:56:09.869830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.823 [2024-10-01 15:56:09.869862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.823 qpair failed and we were unable to recover it.
00:38:30.823 [2024-10-01 15:56:09.870237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.823 [2024-10-01 15:56:09.870268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.823 qpair failed and we were unable to recover it.
00:38:30.823 [2024-10-01 15:56:09.870678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.823 [2024-10-01 15:56:09.870706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.823 qpair failed and we were unable to recover it.
00:38:30.823 [2024-10-01 15:56:09.871062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.823 [2024-10-01 15:56:09.871091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.823 qpair failed and we were unable to recover it.
00:38:30.823 [2024-10-01 15:56:09.871373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.823 [2024-10-01 15:56:09.871400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.823 qpair failed and we were unable to recover it.
00:38:30.823 [2024-10-01 15:56:09.871758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.823 [2024-10-01 15:56:09.871785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.823 qpair failed and we were unable to recover it.
00:38:30.823 [2024-10-01 15:56:09.872145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.823 [2024-10-01 15:56:09.872173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.823 qpair failed and we were unable to recover it.
00:38:30.823 [2024-10-01 15:56:09.872526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.823 [2024-10-01 15:56:09.872554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.823 qpair failed and we were unable to recover it.
00:38:30.823 [2024-10-01 15:56:09.872874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.823 [2024-10-01 15:56:09.872910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.823 qpair failed and we were unable to recover it.
00:38:30.823 [2024-10-01 15:56:09.873282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.823 [2024-10-01 15:56:09.873309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.823 qpair failed and we were unable to recover it.
00:38:30.823 [2024-10-01 15:56:09.873548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.823 [2024-10-01 15:56:09.873575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.823 qpair failed and we were unable to recover it.
00:38:30.823 [2024-10-01 15:56:09.873887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.823 [2024-10-01 15:56:09.873933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.823 qpair failed and we were unable to recover it.
00:38:30.823 [2024-10-01 15:56:09.874275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.823 [2024-10-01 15:56:09.874308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.823 qpair failed and we were unable to recover it.
00:38:30.823 [2024-10-01 15:56:09.874646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.823 [2024-10-01 15:56:09.874673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.823 qpair failed and we were unable to recover it.
00:38:30.823 [2024-10-01 15:56:09.875067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.823 [2024-10-01 15:56:09.875097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.823 qpair failed and we were unable to recover it.
00:38:30.823 [2024-10-01 15:56:09.875440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.823 [2024-10-01 15:56:09.875468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.823 qpair failed and we were unable to recover it.
00:38:30.823 [2024-10-01 15:56:09.875802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.823 [2024-10-01 15:56:09.875829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.823 qpair failed and we were unable to recover it.
00:38:30.823 [2024-10-01 15:56:09.876156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.823 [2024-10-01 15:56:09.876184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.823 qpair failed and we were unable to recover it.
00:38:30.823 [2024-10-01 15:56:09.876542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.823 [2024-10-01 15:56:09.876569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.823 qpair failed and we were unable to recover it.
00:38:30.823 [2024-10-01 15:56:09.876923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.823 [2024-10-01 15:56:09.876950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.823 qpair failed and we were unable to recover it.
00:38:30.823 [2024-10-01 15:56:09.877331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.823 [2024-10-01 15:56:09.877359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.823 qpair failed and we were unable to recover it.
00:38:30.823 [2024-10-01 15:56:09.877711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.823 [2024-10-01 15:56:09.877739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.823 qpair failed and we were unable to recover it.
00:38:30.823 [2024-10-01 15:56:09.878117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.823 [2024-10-01 15:56:09.878145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.823 qpair failed and we were unable to recover it.
00:38:30.823 [2024-10-01 15:56:09.878435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.823 [2024-10-01 15:56:09.878463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.823 qpair failed and we were unable to recover it.
00:38:30.823 [2024-10-01 15:56:09.878809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.823 [2024-10-01 15:56:09.878836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.823 qpair failed and we were unable to recover it.
00:38:30.823 [2024-10-01 15:56:09.879179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.823 [2024-10-01 15:56:09.879209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.823 qpair failed and we were unable to recover it.
00:38:30.823 [2024-10-01 15:56:09.879555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.823 [2024-10-01 15:56:09.879584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.823 qpair failed and we were unable to recover it.
00:38:30.823 [2024-10-01 15:56:09.879911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.823 [2024-10-01 15:56:09.879946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.823 qpair failed and we were unable to recover it.
00:38:30.823 [2024-10-01 15:56:09.880321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.823 [2024-10-01 15:56:09.880349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.823 qpair failed and we were unable to recover it.
00:38:30.823 [2024-10-01 15:56:09.880697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.823 [2024-10-01 15:56:09.880725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.823 qpair failed and we were unable to recover it.
00:38:30.823 [2024-10-01 15:56:09.881052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.823 [2024-10-01 15:56:09.881079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.823 qpair failed and we were unable to recover it.
00:38:30.823 [2024-10-01 15:56:09.881430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.823 [2024-10-01 15:56:09.881458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.823 qpair failed and we were unable to recover it.
00:38:30.823 [2024-10-01 15:56:09.881802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.823 [2024-10-01 15:56:09.881832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.823 qpair failed and we were unable to recover it.
00:38:30.823 [2024-10-01 15:56:09.882178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.823 [2024-10-01 15:56:09.882208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.823 qpair failed and we were unable to recover it.
00:38:30.823 [2024-10-01 15:56:09.882581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.823 [2024-10-01 15:56:09.882610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.823 qpair failed and we were unable to recover it.
00:38:30.823 [2024-10-01 15:56:09.882945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.823 [2024-10-01 15:56:09.882974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.823 qpair failed and we were unable to recover it. 00:38:30.823 [2024-10-01 15:56:09.883217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.823 [2024-10-01 15:56:09.883248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.823 qpair failed and we were unable to recover it. 00:38:30.823 [2024-10-01 15:56:09.883552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.823 [2024-10-01 15:56:09.883579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.823 qpair failed and we were unable to recover it. 00:38:30.823 [2024-10-01 15:56:09.883941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.823 [2024-10-01 15:56:09.883969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.823 qpair failed and we were unable to recover it. 00:38:30.823 [2024-10-01 15:56:09.884325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.823 [2024-10-01 15:56:09.884354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.823 qpair failed and we were unable to recover it. 
00:38:30.823 [2024-10-01 15:56:09.884695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.823 [2024-10-01 15:56:09.884723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.823 qpair failed and we were unable to recover it. 00:38:30.823 [2024-10-01 15:56:09.885070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.823 [2024-10-01 15:56:09.885098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.823 qpair failed and we were unable to recover it. 00:38:30.823 [2024-10-01 15:56:09.885433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.823 [2024-10-01 15:56:09.885460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.823 qpair failed and we were unable to recover it. 00:38:30.823 [2024-10-01 15:56:09.885815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.823 [2024-10-01 15:56:09.885843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.823 qpair failed and we were unable to recover it. 00:38:30.823 [2024-10-01 15:56:09.886212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.823 [2024-10-01 15:56:09.886241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.823 qpair failed and we were unable to recover it. 
00:38:30.823 [2024-10-01 15:56:09.886585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.823 [2024-10-01 15:56:09.886613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.823 qpair failed and we were unable to recover it. 00:38:30.823 [2024-10-01 15:56:09.886965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.823 [2024-10-01 15:56:09.886993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.823 qpair failed and we were unable to recover it. 00:38:30.823 [2024-10-01 15:56:09.887225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.823 [2024-10-01 15:56:09.887252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.823 qpair failed and we were unable to recover it. 00:38:30.823 [2024-10-01 15:56:09.887601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.823 [2024-10-01 15:56:09.887628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.823 qpair failed and we were unable to recover it. 00:38:30.823 [2024-10-01 15:56:09.888004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.823 [2024-10-01 15:56:09.888033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.823 qpair failed and we were unable to recover it. 
00:38:30.823 [2024-10-01 15:56:09.888381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.823 [2024-10-01 15:56:09.888407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.823 qpair failed and we were unable to recover it. 00:38:30.824 [2024-10-01 15:56:09.888729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.888756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 00:38:30.824 [2024-10-01 15:56:09.889080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.889115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 00:38:30.824 [2024-10-01 15:56:09.889450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.889477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 00:38:30.824 [2024-10-01 15:56:09.889822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.889850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 
00:38:30.824 [2024-10-01 15:56:09.890217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.890245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 00:38:30.824 [2024-10-01 15:56:09.890591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.890619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 00:38:30.824 [2024-10-01 15:56:09.890976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.891005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 00:38:30.824 [2024-10-01 15:56:09.891341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.891369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 00:38:30.824 [2024-10-01 15:56:09.891755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.891784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 
00:38:30.824 [2024-10-01 15:56:09.892130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.892159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 00:38:30.824 [2024-10-01 15:56:09.892536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.892564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 00:38:30.824 [2024-10-01 15:56:09.892905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.892935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 00:38:30.824 [2024-10-01 15:56:09.893315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.893342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 00:38:30.824 [2024-10-01 15:56:09.893680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.893708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 
00:38:30.824 [2024-10-01 15:56:09.894054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.894083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 00:38:30.824 [2024-10-01 15:56:09.894431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.894459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 00:38:30.824 [2024-10-01 15:56:09.894674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.894704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 00:38:30.824 [2024-10-01 15:56:09.895010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.895039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 00:38:30.824 [2024-10-01 15:56:09.895391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.895419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 
00:38:30.824 [2024-10-01 15:56:09.895791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.895818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 00:38:30.824 [2024-10-01 15:56:09.896194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.896222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 00:38:30.824 [2024-10-01 15:56:09.896501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.896529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 00:38:30.824 [2024-10-01 15:56:09.896748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.896776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 00:38:30.824 [2024-10-01 15:56:09.897023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.897054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 
00:38:30.824 [2024-10-01 15:56:09.897405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.897432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 00:38:30.824 [2024-10-01 15:56:09.897804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.897831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 00:38:30.824 [2024-10-01 15:56:09.898186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.898214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 00:38:30.824 [2024-10-01 15:56:09.898553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.898580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 00:38:30.824 [2024-10-01 15:56:09.898962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.898991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 
00:38:30.824 [2024-10-01 15:56:09.899325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.899353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 00:38:30.824 [2024-10-01 15:56:09.899703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.899731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 00:38:30.824 [2024-10-01 15:56:09.900063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.900091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 00:38:30.824 [2024-10-01 15:56:09.900331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.900358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 00:38:30.824 [2024-10-01 15:56:09.900791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.900819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 
00:38:30.824 [2024-10-01 15:56:09.901134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.901162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 00:38:30.824 [2024-10-01 15:56:09.901518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.901545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 00:38:30.824 [2024-10-01 15:56:09.901890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.901935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 00:38:30.824 [2024-10-01 15:56:09.902256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.902284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 00:38:30.824 [2024-10-01 15:56:09.902531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.902559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 
00:38:30.824 [2024-10-01 15:56:09.902876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.902915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 00:38:30.824 [2024-10-01 15:56:09.903221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.903250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 00:38:30.824 [2024-10-01 15:56:09.903603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.903643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 00:38:30.824 [2024-10-01 15:56:09.903991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.904020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 00:38:30.824 [2024-10-01 15:56:09.904353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.904380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 
00:38:30.824 [2024-10-01 15:56:09.904619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.904646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 00:38:30.824 [2024-10-01 15:56:09.905003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.905031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 00:38:30.824 [2024-10-01 15:56:09.905374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.905402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 00:38:30.824 [2024-10-01 15:56:09.905732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.905759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 00:38:30.824 [2024-10-01 15:56:09.906122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.906151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 
00:38:30.824 [2024-10-01 15:56:09.906504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.906532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 00:38:30.824 [2024-10-01 15:56:09.906862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.906890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 00:38:30.824 [2024-10-01 15:56:09.907254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.907282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 00:38:30.824 [2024-10-01 15:56:09.907637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.907665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 00:38:30.824 [2024-10-01 15:56:09.908007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.908036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 
00:38:30.824 [2024-10-01 15:56:09.908386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.908413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 00:38:30.824 [2024-10-01 15:56:09.908758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.908785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 00:38:30.824 [2024-10-01 15:56:09.908995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.909025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 00:38:30.824 [2024-10-01 15:56:09.909222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.909251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 00:38:30.824 [2024-10-01 15:56:09.909550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.909577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 
00:38:30.824 [2024-10-01 15:56:09.909937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.909983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 00:38:30.824 [2024-10-01 15:56:09.910311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.824 [2024-10-01 15:56:09.910339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.824 qpair failed and we were unable to recover it. 00:38:30.825 [2024-10-01 15:56:09.910687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.910714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 00:38:30.825 [2024-10-01 15:56:09.911064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.911092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 00:38:30.825 [2024-10-01 15:56:09.911454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.911482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 
00:38:30.825 [2024-10-01 15:56:09.911832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.911860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 00:38:30.825 [2024-10-01 15:56:09.912209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.912238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 00:38:30.825 [2024-10-01 15:56:09.912587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.912615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 00:38:30.825 [2024-10-01 15:56:09.912986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.913015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 00:38:30.825 [2024-10-01 15:56:09.913347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.913376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 
00:38:30.825 [2024-10-01 15:56:09.913740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.913769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 00:38:30.825 [2024-10-01 15:56:09.913991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.914019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 00:38:30.825 [2024-10-01 15:56:09.914341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.914368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 00:38:30.825 [2024-10-01 15:56:09.914693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.914721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 00:38:30.825 [2024-10-01 15:56:09.914936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.914964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 
00:38:30.825 [2024-10-01 15:56:09.915330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.915357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 00:38:30.825 [2024-10-01 15:56:09.915713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.915740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 00:38:30.825 [2024-10-01 15:56:09.916107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.916135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 00:38:30.825 [2024-10-01 15:56:09.916488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.916515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 00:38:30.825 [2024-10-01 15:56:09.916864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.916892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 
00:38:30.825 [2024-10-01 15:56:09.917239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.917268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 00:38:30.825 [2024-10-01 15:56:09.917609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.917637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 00:38:30.825 [2024-10-01 15:56:09.917996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.918030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 00:38:30.825 [2024-10-01 15:56:09.918258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.918288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 00:38:30.825 [2024-10-01 15:56:09.918623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.918651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 
00:38:30.825 [2024-10-01 15:56:09.918990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.919018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 00:38:30.825 [2024-10-01 15:56:09.919388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.919416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 00:38:30.825 [2024-10-01 15:56:09.919753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.919782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 00:38:30.825 [2024-10-01 15:56:09.920122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.920150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 00:38:30.825 [2024-10-01 15:56:09.920502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.920530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 
00:38:30.825 [2024-10-01 15:56:09.920917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.920946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 00:38:30.825 [2024-10-01 15:56:09.921329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.921356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 00:38:30.825 [2024-10-01 15:56:09.921573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.921603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 00:38:30.825 [2024-10-01 15:56:09.921929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.921959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 00:38:30.825 [2024-10-01 15:56:09.922296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.922323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 
00:38:30.825 [2024-10-01 15:56:09.922671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.922699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 00:38:30.825 [2024-10-01 15:56:09.922960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.922990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 00:38:30.825 [2024-10-01 15:56:09.923307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.923334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 00:38:30.825 [2024-10-01 15:56:09.923674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.923701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 00:38:30.825 [2024-10-01 15:56:09.924165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.924194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 
00:38:30.825 [2024-10-01 15:56:09.924462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.924489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 00:38:30.825 [2024-10-01 15:56:09.924828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.924856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 00:38:30.825 [2024-10-01 15:56:09.925276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.925305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 00:38:30.825 [2024-10-01 15:56:09.925634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.925662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 00:38:30.825 [2024-10-01 15:56:09.925999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.926028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 
00:38:30.825 [2024-10-01 15:56:09.926364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.926393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 00:38:30.825 [2024-10-01 15:56:09.926726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.926754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 00:38:30.825 [2024-10-01 15:56:09.926985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.927017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 00:38:30.825 [2024-10-01 15:56:09.927374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.927402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 00:38:30.825 [2024-10-01 15:56:09.927790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.927819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 
00:38:30.825 [2024-10-01 15:56:09.928173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.928202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 00:38:30.825 [2024-10-01 15:56:09.928543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.928571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 00:38:30.825 [2024-10-01 15:56:09.928935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.928964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 00:38:30.825 [2024-10-01 15:56:09.929217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.929244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 00:38:30.825 [2024-10-01 15:56:09.929593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.929621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 
00:38:30.825 [2024-10-01 15:56:09.929975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.930004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 00:38:30.825 [2024-10-01 15:56:09.930219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.930250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 00:38:30.825 [2024-10-01 15:56:09.930583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.930612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 00:38:30.825 [2024-10-01 15:56:09.930970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.930999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 00:38:30.825 [2024-10-01 15:56:09.931359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.931387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 
00:38:30.825 [2024-10-01 15:56:09.931740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.931767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 00:38:30.825 [2024-10-01 15:56:09.932198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.932227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 00:38:30.825 [2024-10-01 15:56:09.932563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.932597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.825 qpair failed and we were unable to recover it. 00:38:30.825 [2024-10-01 15:56:09.932987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.825 [2024-10-01 15:56:09.933016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.826 qpair failed and we were unable to recover it. 00:38:30.826 [2024-10-01 15:56:09.933357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.826 [2024-10-01 15:56:09.933391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.826 qpair failed and we were unable to recover it. 
00:38:30.826 [2024-10-01 15:56:09.933725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.826 [2024-10-01 15:56:09.933753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.826 qpair failed and we were unable to recover it. 00:38:30.826 [2024-10-01 15:56:09.934112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.826 [2024-10-01 15:56:09.934141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.826 qpair failed and we were unable to recover it. 00:38:30.826 [2024-10-01 15:56:09.934483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.826 [2024-10-01 15:56:09.934510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.826 qpair failed and we were unable to recover it. 00:38:30.826 [2024-10-01 15:56:09.934882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.826 [2024-10-01 15:56:09.934917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.826 qpair failed and we were unable to recover it. 00:38:30.826 [2024-10-01 15:56:09.935276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.826 [2024-10-01 15:56:09.935303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.826 qpair failed and we were unable to recover it. 
00:38:30.826 [2024-10-01 15:56:09.935642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.826 [2024-10-01 15:56:09.935669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.826 qpair failed and we were unable to recover it. 00:38:30.826 [2024-10-01 15:56:09.936051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.826 [2024-10-01 15:56:09.936080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.826 qpair failed and we were unable to recover it. 00:38:30.826 [2024-10-01 15:56:09.936412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.826 [2024-10-01 15:56:09.936440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.826 qpair failed and we were unable to recover it. 00:38:30.826 [2024-10-01 15:56:09.936832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.826 [2024-10-01 15:56:09.936860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.826 qpair failed and we were unable to recover it. 00:38:30.826 [2024-10-01 15:56:09.937259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.826 [2024-10-01 15:56:09.937288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.826 qpair failed and we were unable to recover it. 
00:38:30.826 [2024-10-01 15:56:09.937642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.826 [2024-10-01 15:56:09.937670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.826 qpair failed and we were unable to recover it. 00:38:30.826 [2024-10-01 15:56:09.938011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.826 [2024-10-01 15:56:09.938041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.826 qpair failed and we were unable to recover it. 00:38:30.826 [2024-10-01 15:56:09.938381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.826 [2024-10-01 15:56:09.938409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.826 qpair failed and we were unable to recover it. 00:38:30.826 [2024-10-01 15:56:09.938731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.826 [2024-10-01 15:56:09.938760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.826 qpair failed and we were unable to recover it. 00:38:30.826 [2024-10-01 15:56:09.939110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.826 [2024-10-01 15:56:09.939138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.826 qpair failed and we were unable to recover it. 
00:38:30.826 [2024-10-01 15:56:09.939508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.826 [2024-10-01 15:56:09.939536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.826 qpair failed and we were unable to recover it. 00:38:30.826 [2024-10-01 15:56:09.939877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.826 [2024-10-01 15:56:09.939912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.826 qpair failed and we were unable to recover it. 00:38:30.826 [2024-10-01 15:56:09.940240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.826 [2024-10-01 15:56:09.940273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.826 qpair failed and we were unable to recover it. 00:38:30.826 [2024-10-01 15:56:09.940643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.826 [2024-10-01 15:56:09.940671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.826 qpair failed and we were unable to recover it. 00:38:30.826 [2024-10-01 15:56:09.941014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.826 [2024-10-01 15:56:09.941043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.826 qpair failed and we were unable to recover it. 
00:38:30.826 [2024-10-01 15:56:09.941384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.826 [2024-10-01 15:56:09.941412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.826 qpair failed and we were unable to recover it. 00:38:30.826 [2024-10-01 15:56:09.941627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.826 [2024-10-01 15:56:09.941656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.826 qpair failed and we were unable to recover it. 00:38:30.826 [2024-10-01 15:56:09.942016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.826 [2024-10-01 15:56:09.942046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.826 qpair failed and we were unable to recover it. 00:38:30.826 [2024-10-01 15:56:09.942409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.826 [2024-10-01 15:56:09.942436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.826 qpair failed and we were unable to recover it. 00:38:30.826 [2024-10-01 15:56:09.942781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.826 [2024-10-01 15:56:09.942810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.826 qpair failed and we were unable to recover it. 
00:38:30.826 [2024-10-01 15:56:09.943152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.826 [2024-10-01 15:56:09.943181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.826 qpair failed and we were unable to recover it. 00:38:30.826 [2024-10-01 15:56:09.943526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.826 [2024-10-01 15:56:09.943553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.826 qpair failed and we were unable to recover it. 00:38:30.826 [2024-10-01 15:56:09.943993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.826 [2024-10-01 15:56:09.944023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.826 qpair failed and we were unable to recover it. 00:38:30.826 [2024-10-01 15:56:09.944355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.826 [2024-10-01 15:56:09.944383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.826 qpair failed and we were unable to recover it. 00:38:30.826 [2024-10-01 15:56:09.944597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.826 [2024-10-01 15:56:09.944628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.826 qpair failed and we were unable to recover it. 
00:38:30.826 [2024-10-01 15:56:09.944963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.826 [2024-10-01 15:56:09.944991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.826 qpair failed and we were unable to recover it. 00:38:30.826 [2024-10-01 15:56:09.945233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.826 [2024-10-01 15:56:09.945260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.826 qpair failed and we were unable to recover it. 00:38:30.826 [2024-10-01 15:56:09.945578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.826 [2024-10-01 15:56:09.945606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.826 qpair failed and we were unable to recover it. 00:38:30.826 [2024-10-01 15:56:09.945956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.826 [2024-10-01 15:56:09.945986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.826 qpair failed and we were unable to recover it. 00:38:30.826 [2024-10-01 15:56:09.946210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.826 [2024-10-01 15:56:09.946241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.826 qpair failed and we were unable to recover it. 
00:38:30.828 (the connect() failed / sock connection error / qpair failed sequence above repeated for each reconnect attempt from 15:56:09.946567 through 15:56:09.985709, all against tqpair=0x7f2398000b90, addr=10.0.0.2, port=4420, errno = 111)
00:38:30.828 [2024-10-01 15:56:09.985930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.828 [2024-10-01 15:56:09.985958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.828 qpair failed and we were unable to recover it. 00:38:30.828 [2024-10-01 15:56:09.986269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.828 [2024-10-01 15:56:09.986298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.828 qpair failed and we were unable to recover it. 00:38:30.828 [2024-10-01 15:56:09.986640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.828 [2024-10-01 15:56:09.986667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.828 qpair failed and we were unable to recover it. 00:38:30.828 [2024-10-01 15:56:09.987008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.828 [2024-10-01 15:56:09.987036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.828 qpair failed and we were unable to recover it. 00:38:30.828 [2024-10-01 15:56:09.987407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.828 [2024-10-01 15:56:09.987435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.828 qpair failed and we were unable to recover it. 
00:38:30.828 [2024-10-01 15:56:09.987742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.828 [2024-10-01 15:56:09.987770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.828 qpair failed and we were unable to recover it. 00:38:30.828 [2024-10-01 15:56:09.988131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.828 [2024-10-01 15:56:09.988161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.828 qpair failed and we were unable to recover it. 00:38:30.828 [2024-10-01 15:56:09.988481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.828 [2024-10-01 15:56:09.988510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.828 qpair failed and we were unable to recover it. 00:38:30.828 [2024-10-01 15:56:09.988886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.828 [2024-10-01 15:56:09.988922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.828 qpair failed and we were unable to recover it. 00:38:30.828 [2024-10-01 15:56:09.989270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.828 [2024-10-01 15:56:09.989299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.828 qpair failed and we were unable to recover it. 
00:38:30.828 [2024-10-01 15:56:09.989649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.828 [2024-10-01 15:56:09.989677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.828 qpair failed and we were unable to recover it. 00:38:30.828 [2024-10-01 15:56:09.989903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.828 [2024-10-01 15:56:09.989934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.828 qpair failed and we were unable to recover it. 00:38:30.828 [2024-10-01 15:56:09.990284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.828 [2024-10-01 15:56:09.990319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.828 qpair failed and we were unable to recover it. 00:38:30.828 [2024-10-01 15:56:09.990635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.828 [2024-10-01 15:56:09.990661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.828 qpair failed and we were unable to recover it. 00:38:30.828 [2024-10-01 15:56:09.991004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.828 [2024-10-01 15:56:09.991034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.828 qpair failed and we were unable to recover it. 
00:38:30.828 [2024-10-01 15:56:09.991298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.828 [2024-10-01 15:56:09.991325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.828 qpair failed and we were unable to recover it. 00:38:30.828 [2024-10-01 15:56:09.991656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.828 [2024-10-01 15:56:09.991686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.828 qpair failed and we were unable to recover it. 00:38:30.828 [2024-10-01 15:56:09.992059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.828 [2024-10-01 15:56:09.992089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.828 qpair failed and we were unable to recover it. 00:38:30.828 [2024-10-01 15:56:09.992426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.828 [2024-10-01 15:56:09.992455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.828 qpair failed and we were unable to recover it. 00:38:30.828 [2024-10-01 15:56:09.992869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.828 [2024-10-01 15:56:09.992908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.828 qpair failed and we were unable to recover it. 
00:38:30.828 [2024-10-01 15:56:09.993155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.828 [2024-10-01 15:56:09.993186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.828 qpair failed and we were unable to recover it. 00:38:30.828 [2024-10-01 15:56:09.993563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.828 [2024-10-01 15:56:09.993591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.828 qpair failed and we were unable to recover it. 00:38:30.828 [2024-10-01 15:56:09.993919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.828 [2024-10-01 15:56:09.993948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.828 qpair failed and we were unable to recover it. 00:38:30.828 [2024-10-01 15:56:09.994289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.828 [2024-10-01 15:56:09.994316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.828 qpair failed and we were unable to recover it. 00:38:30.828 [2024-10-01 15:56:09.994616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.828 [2024-10-01 15:56:09.994643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.828 qpair failed and we were unable to recover it. 
00:38:30.828 [2024-10-01 15:56:09.994846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.828 [2024-10-01 15:56:09.994874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.828 qpair failed and we were unable to recover it. 00:38:30.828 [2024-10-01 15:56:09.995169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.828 [2024-10-01 15:56:09.995198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.828 qpair failed and we were unable to recover it. 00:38:30.828 [2024-10-01 15:56:09.995406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.828 [2024-10-01 15:56:09.995433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.828 qpair failed and we were unable to recover it. 00:38:30.828 [2024-10-01 15:56:09.995769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.828 [2024-10-01 15:56:09.995798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.828 qpair failed and we were unable to recover it. 00:38:30.828 [2024-10-01 15:56:09.996129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.828 [2024-10-01 15:56:09.996159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.828 qpair failed and we were unable to recover it. 
00:38:30.828 [2024-10-01 15:56:09.996530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.828 [2024-10-01 15:56:09.996558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.828 qpair failed and we were unable to recover it. 00:38:30.828 [2024-10-01 15:56:09.996904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.828 [2024-10-01 15:56:09.996934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.828 qpair failed and we were unable to recover it. 00:38:30.828 [2024-10-01 15:56:09.997283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.828 [2024-10-01 15:56:09.997310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.828 qpair failed and we were unable to recover it. 00:38:30.828 [2024-10-01 15:56:09.997527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.828 [2024-10-01 15:56:09.997558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.828 qpair failed and we were unable to recover it. 00:38:30.828 [2024-10-01 15:56:09.997880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.828 [2024-10-01 15:56:09.997915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.828 qpair failed and we were unable to recover it. 
00:38:30.828 [2024-10-01 15:56:09.998156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.828 [2024-10-01 15:56:09.998184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.828 qpair failed and we were unable to recover it. 00:38:30.828 [2024-10-01 15:56:09.998572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.829 [2024-10-01 15:56:09.998601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.829 qpair failed and we were unable to recover it. 00:38:30.829 [2024-10-01 15:56:09.998979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.829 [2024-10-01 15:56:09.999008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.829 qpair failed and we were unable to recover it. 00:38:30.829 [2024-10-01 15:56:09.999335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.829 [2024-10-01 15:56:09.999363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.829 qpair failed and we were unable to recover it. 00:38:30.829 [2024-10-01 15:56:09.999574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.829 [2024-10-01 15:56:09.999604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.829 qpair failed and we were unable to recover it. 
00:38:30.829 [2024-10-01 15:56:09.999834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.829 [2024-10-01 15:56:09.999863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.829 qpair failed and we were unable to recover it. 00:38:30.829 [2024-10-01 15:56:10.000217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.829 [2024-10-01 15:56:10.000248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.829 qpair failed and we were unable to recover it. 00:38:30.829 [2024-10-01 15:56:10.000475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.829 [2024-10-01 15:56:10.000504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.829 qpair failed and we were unable to recover it. 00:38:30.829 [2024-10-01 15:56:10.000749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.829 [2024-10-01 15:56:10.000780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.829 qpair failed and we were unable to recover it. 00:38:30.829 [2024-10-01 15:56:10.001180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.829 [2024-10-01 15:56:10.001212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.829 qpair failed and we were unable to recover it. 
00:38:30.829 [2024-10-01 15:56:10.001552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.829 [2024-10-01 15:56:10.001580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.829 qpair failed and we were unable to recover it. 00:38:30.829 [2024-10-01 15:56:10.001928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.829 [2024-10-01 15:56:10.001958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.829 qpair failed and we were unable to recover it. 00:38:30.829 [2024-10-01 15:56:10.003009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.829 [2024-10-01 15:56:10.003047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.829 qpair failed and we were unable to recover it. 00:38:30.829 [2024-10-01 15:56:10.003404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.829 [2024-10-01 15:56:10.003431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.829 qpair failed and we were unable to recover it. 00:38:30.829 [2024-10-01 15:56:10.003671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.829 [2024-10-01 15:56:10.003701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.829 qpair failed and we were unable to recover it. 
00:38:30.829 [2024-10-01 15:56:10.004116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.829 [2024-10-01 15:56:10.004146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.829 qpair failed and we were unable to recover it. 00:38:30.829 [2024-10-01 15:56:10.004490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.829 [2024-10-01 15:56:10.004519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.829 qpair failed and we were unable to recover it. 00:38:30.829 [2024-10-01 15:56:10.004757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.829 [2024-10-01 15:56:10.004792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.829 qpair failed and we were unable to recover it. 00:38:30.829 [2024-10-01 15:56:10.005182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.829 [2024-10-01 15:56:10.005211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.829 qpair failed and we were unable to recover it. 00:38:30.829 [2024-10-01 15:56:10.005570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.829 [2024-10-01 15:56:10.005599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.829 qpair failed and we were unable to recover it. 
00:38:30.829 [2024-10-01 15:56:10.005954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.829 [2024-10-01 15:56:10.005983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.829 qpair failed and we were unable to recover it. 00:38:30.829 [2024-10-01 15:56:10.006226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.829 [2024-10-01 15:56:10.006255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.829 qpair failed and we were unable to recover it. 00:38:30.829 [2024-10-01 15:56:10.006591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.829 [2024-10-01 15:56:10.006621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.829 qpair failed and we were unable to recover it. 00:38:30.829 [2024-10-01 15:56:10.006957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.829 [2024-10-01 15:56:10.006987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.829 qpair failed and we were unable to recover it. 00:38:30.829 [2024-10-01 15:56:10.007333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.829 [2024-10-01 15:56:10.007360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.829 qpair failed and we were unable to recover it. 
00:38:30.829 [2024-10-01 15:56:10.007740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.829 [2024-10-01 15:56:10.007769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.829 qpair failed and we were unable to recover it. 00:38:30.829 [2024-10-01 15:56:10.008013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.829 [2024-10-01 15:56:10.008042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.829 qpair failed and we were unable to recover it. 00:38:30.829 [2024-10-01 15:56:10.008269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.829 [2024-10-01 15:56:10.008297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.829 qpair failed and we were unable to recover it. 00:38:30.829 [2024-10-01 15:56:10.008641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.829 [2024-10-01 15:56:10.008669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.829 qpair failed and we were unable to recover it. 00:38:30.829 [2024-10-01 15:56:10.008915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.829 [2024-10-01 15:56:10.008945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.829 qpair failed and we were unable to recover it. 
00:38:30.829 [2024-10-01 15:56:10.009231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.829 [2024-10-01 15:56:10.009259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.829 qpair failed and we were unable to recover it. 00:38:30.829 [2024-10-01 15:56:10.009644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.829 [2024-10-01 15:56:10.009673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.829 qpair failed and we were unable to recover it. 00:38:30.829 [2024-10-01 15:56:10.010016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.829 [2024-10-01 15:56:10.010046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.829 qpair failed and we were unable to recover it. 00:38:30.829 [2024-10-01 15:56:10.010350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.829 [2024-10-01 15:56:10.010381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.829 qpair failed and we were unable to recover it. 00:38:30.829 [2024-10-01 15:56:10.010618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.829 [2024-10-01 15:56:10.010646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.829 qpair failed and we were unable to recover it. 
00:38:30.829 [2024-10-01 15:56:10.011087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.829 [2024-10-01 15:56:10.011116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.829 qpair failed and we were unable to recover it. 00:38:30.829 [2024-10-01 15:56:10.011256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.829 [2024-10-01 15:56:10.011284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.829 qpair failed and we were unable to recover it. 00:38:30.829 [2024-10-01 15:56:10.011577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.829 [2024-10-01 15:56:10.011605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.829 qpair failed and we were unable to recover it. 00:38:30.829 [2024-10-01 15:56:10.011836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.829 [2024-10-01 15:56:10.011864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.829 qpair failed and we were unable to recover it. 00:38:30.829 [2024-10-01 15:56:10.012193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.829 [2024-10-01 15:56:10.012222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.829 qpair failed and we were unable to recover it. 
00:38:30.829 [2024-10-01 15:56:10.012564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.829 [2024-10-01 15:56:10.012592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.829 qpair failed and we were unable to recover it. 00:38:30.829 [2024-10-01 15:56:10.012947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.829 [2024-10-01 15:56:10.012977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.829 qpair failed and we were unable to recover it. 00:38:30.829 [2024-10-01 15:56:10.013327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.829 [2024-10-01 15:56:10.013356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.829 qpair failed and we were unable to recover it. 00:38:30.829 [2024-10-01 15:56:10.013507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.829 [2024-10-01 15:56:10.013536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.829 qpair failed and we were unable to recover it. 00:38:30.829 [2024-10-01 15:56:10.013811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.829 [2024-10-01 15:56:10.013840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.829 qpair failed and we were unable to recover it. 
00:38:30.831 [2024-10-01 15:56:10.051633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.831 [2024-10-01 15:56:10.051660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.831 qpair failed and we were unable to recover it. 00:38:30.831 [2024-10-01 15:56:10.051971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.831 [2024-10-01 15:56:10.052001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.831 qpair failed and we were unable to recover it. 00:38:30.831 [2024-10-01 15:56:10.052348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.831 [2024-10-01 15:56:10.052376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.831 qpair failed and we were unable to recover it. 00:38:30.831 [2024-10-01 15:56:10.052723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.831 [2024-10-01 15:56:10.052749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.831 qpair failed and we were unable to recover it. 00:38:30.831 [2024-10-01 15:56:10.053112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.831 [2024-10-01 15:56:10.053141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.831 qpair failed and we were unable to recover it. 
00:38:30.831 [2024-10-01 15:56:10.053378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.831 [2024-10-01 15:56:10.053405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.831 qpair failed and we were unable to recover it. 00:38:30.831 [2024-10-01 15:56:10.053708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.831 [2024-10-01 15:56:10.053736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.831 qpair failed and we were unable to recover it. 00:38:30.831 [2024-10-01 15:56:10.054076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.831 [2024-10-01 15:56:10.054105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.831 qpair failed and we were unable to recover it. 00:38:30.831 [2024-10-01 15:56:10.054461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.831 [2024-10-01 15:56:10.054488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.831 qpair failed and we were unable to recover it. 00:38:30.831 [2024-10-01 15:56:10.054849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.831 [2024-10-01 15:56:10.054890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.831 qpair failed and we were unable to recover it. 
00:38:30.831 [2024-10-01 15:56:10.055262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.831 [2024-10-01 15:56:10.055291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.831 qpair failed and we were unable to recover it. 00:38:30.831 [2024-10-01 15:56:10.055535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.831 [2024-10-01 15:56:10.055564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.831 qpair failed and we were unable to recover it. 00:38:30.831 [2024-10-01 15:56:10.055885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.831 [2024-10-01 15:56:10.055922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.831 qpair failed and we were unable to recover it. 00:38:30.831 [2024-10-01 15:56:10.056289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.831 [2024-10-01 15:56:10.056316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.831 qpair failed and we were unable to recover it. 00:38:30.831 [2024-10-01 15:56:10.056665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.831 [2024-10-01 15:56:10.056693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.831 qpair failed and we were unable to recover it. 
00:38:30.831 [2024-10-01 15:56:10.057034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.831 [2024-10-01 15:56:10.057062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.831 qpair failed and we were unable to recover it. 00:38:30.831 [2024-10-01 15:56:10.057402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.831 [2024-10-01 15:56:10.057430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.831 qpair failed and we were unable to recover it. 00:38:30.831 [2024-10-01 15:56:10.057780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.831 [2024-10-01 15:56:10.057809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.831 qpair failed and we were unable to recover it. 00:38:30.831 [2024-10-01 15:56:10.058235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.831 [2024-10-01 15:56:10.058264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.831 qpair failed and we were unable to recover it. 00:38:30.831 [2024-10-01 15:56:10.058608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.831 [2024-10-01 15:56:10.058636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.831 qpair failed and we were unable to recover it. 
00:38:30.831 [2024-10-01 15:56:10.059003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.831 [2024-10-01 15:56:10.059032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.831 qpair failed and we were unable to recover it. 00:38:30.831 [2024-10-01 15:56:10.059377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.831 [2024-10-01 15:56:10.059405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.831 qpair failed and we were unable to recover it. 00:38:30.831 [2024-10-01 15:56:10.059755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.831 [2024-10-01 15:56:10.059783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.831 qpair failed and we were unable to recover it. 00:38:30.831 [2024-10-01 15:56:10.060182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.831 [2024-10-01 15:56:10.060229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.831 qpair failed and we were unable to recover it. 00:38:30.831 [2024-10-01 15:56:10.060586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.831 [2024-10-01 15:56:10.060615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.831 qpair failed and we were unable to recover it. 
00:38:30.831 [2024-10-01 15:56:10.060854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.832 [2024-10-01 15:56:10.060885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.832 qpair failed and we were unable to recover it. 00:38:30.832 [2024-10-01 15:56:10.061224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.832 [2024-10-01 15:56:10.061252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.832 qpair failed and we were unable to recover it. 00:38:30.832 [2024-10-01 15:56:10.061603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.832 [2024-10-01 15:56:10.061632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.832 qpair failed and we were unable to recover it. 00:38:30.832 [2024-10-01 15:56:10.061990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.832 [2024-10-01 15:56:10.062019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.832 qpair failed and we were unable to recover it. 00:38:30.832 [2024-10-01 15:56:10.062262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.832 [2024-10-01 15:56:10.062289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.832 qpair failed and we were unable to recover it. 
00:38:30.832 [2024-10-01 15:56:10.062675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.832 [2024-10-01 15:56:10.062703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.832 qpair failed and we were unable to recover it. 00:38:30.832 [2024-10-01 15:56:10.063031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.832 [2024-10-01 15:56:10.063059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.832 qpair failed and we were unable to recover it. 00:38:30.832 [2024-10-01 15:56:10.063416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.832 [2024-10-01 15:56:10.063444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.832 qpair failed and we were unable to recover it. 00:38:30.832 [2024-10-01 15:56:10.063802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.832 [2024-10-01 15:56:10.063830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.832 qpair failed and we were unable to recover it. 00:38:30.832 [2024-10-01 15:56:10.064187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.832 [2024-10-01 15:56:10.064217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.832 qpair failed and we were unable to recover it. 
00:38:30.832 [2024-10-01 15:56:10.064419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.832 [2024-10-01 15:56:10.064446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.832 qpair failed and we were unable to recover it. 00:38:30.832 [2024-10-01 15:56:10.064822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.832 [2024-10-01 15:56:10.064850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.832 qpair failed and we were unable to recover it. 00:38:30.832 [2024-10-01 15:56:10.065184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.832 [2024-10-01 15:56:10.065213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.832 qpair failed and we were unable to recover it. 00:38:30.832 [2024-10-01 15:56:10.065589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.832 [2024-10-01 15:56:10.065617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.832 qpair failed and we were unable to recover it. 00:38:30.832 [2024-10-01 15:56:10.065960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.832 [2024-10-01 15:56:10.065991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.832 qpair failed and we were unable to recover it. 
00:38:30.832 [2024-10-01 15:56:10.066230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.832 [2024-10-01 15:56:10.066260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.832 qpair failed and we were unable to recover it. 00:38:30.832 [2024-10-01 15:56:10.066498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.832 [2024-10-01 15:56:10.066528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.832 qpair failed and we were unable to recover it. 00:38:30.832 [2024-10-01 15:56:10.066808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.832 [2024-10-01 15:56:10.066840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.832 qpair failed and we were unable to recover it. 00:38:30.832 [2024-10-01 15:56:10.067173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.832 [2024-10-01 15:56:10.067202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.832 qpair failed and we were unable to recover it. 00:38:30.832 [2024-10-01 15:56:10.067576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.832 [2024-10-01 15:56:10.067604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.832 qpair failed and we were unable to recover it. 
00:38:30.832 [2024-10-01 15:56:10.067836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.832 [2024-10-01 15:56:10.067862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.832 qpair failed and we were unable to recover it. 00:38:30.832 [2024-10-01 15:56:10.068219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.832 [2024-10-01 15:56:10.068248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.832 qpair failed and we were unable to recover it. 00:38:30.832 [2024-10-01 15:56:10.068595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.832 [2024-10-01 15:56:10.068622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.832 qpair failed and we were unable to recover it. 00:38:30.832 [2024-10-01 15:56:10.068968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.832 [2024-10-01 15:56:10.068996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.832 qpair failed and we were unable to recover it. 00:38:30.832 [2024-10-01 15:56:10.069352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.832 [2024-10-01 15:56:10.069386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.832 qpair failed and we were unable to recover it. 
00:38:30.832 [2024-10-01 15:56:10.069606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.832 [2024-10-01 15:56:10.069638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.832 qpair failed and we were unable to recover it. 00:38:30.832 [2024-10-01 15:56:10.069945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.832 [2024-10-01 15:56:10.069974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.832 qpair failed and we were unable to recover it. 00:38:30.832 [2024-10-01 15:56:10.070186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.832 [2024-10-01 15:56:10.070217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.832 qpair failed and we were unable to recover it. 00:38:30.832 [2024-10-01 15:56:10.070554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.832 [2024-10-01 15:56:10.070583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.832 qpair failed and we were unable to recover it. 00:38:30.832 [2024-10-01 15:56:10.070936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.832 [2024-10-01 15:56:10.070965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.832 qpair failed and we were unable to recover it. 
00:38:30.832 [2024-10-01 15:56:10.071303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.832 [2024-10-01 15:56:10.071330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.832 qpair failed and we were unable to recover it. 00:38:30.832 [2024-10-01 15:56:10.071659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.832 [2024-10-01 15:56:10.071687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.832 qpair failed and we were unable to recover it. 00:38:30.832 [2024-10-01 15:56:10.072036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.832 [2024-10-01 15:56:10.072066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.832 qpair failed and we were unable to recover it. 00:38:30.832 [2024-10-01 15:56:10.072402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.832 [2024-10-01 15:56:10.072430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.832 qpair failed and we were unable to recover it. 00:38:30.832 [2024-10-01 15:56:10.072843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.832 [2024-10-01 15:56:10.072871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.832 qpair failed and we were unable to recover it. 
00:38:30.832 [2024-10-01 15:56:10.073211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.832 [2024-10-01 15:56:10.073240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.832 qpair failed and we were unable to recover it. 00:38:30.832 [2024-10-01 15:56:10.073637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.832 [2024-10-01 15:56:10.073664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.832 qpair failed and we were unable to recover it. 00:38:30.832 [2024-10-01 15:56:10.074016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.832 [2024-10-01 15:56:10.074045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.832 qpair failed and we were unable to recover it. 00:38:30.832 [2024-10-01 15:56:10.074289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.832 [2024-10-01 15:56:10.074320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.832 qpair failed and we were unable to recover it. 00:38:30.832 [2024-10-01 15:56:10.074669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.832 [2024-10-01 15:56:10.074696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.832 qpair failed and we were unable to recover it. 
00:38:30.832 [2024-10-01 15:56:10.075020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.832 [2024-10-01 15:56:10.075049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.832 qpair failed and we were unable to recover it. 00:38:30.832 [2024-10-01 15:56:10.075378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.832 [2024-10-01 15:56:10.075406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.832 qpair failed and we were unable to recover it. 00:38:30.832 [2024-10-01 15:56:10.075730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.832 [2024-10-01 15:56:10.075758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.832 qpair failed and we were unable to recover it. 00:38:30.832 [2024-10-01 15:56:10.076127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.832 [2024-10-01 15:56:10.076155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.832 qpair failed and we were unable to recover it. 00:38:30.832 [2024-10-01 15:56:10.076529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.832 [2024-10-01 15:56:10.076557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.832 qpair failed and we were unable to recover it. 
00:38:30.832 [2024-10-01 15:56:10.076910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.832 [2024-10-01 15:56:10.076939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.832 qpair failed and we were unable to recover it. 00:38:30.832 [2024-10-01 15:56:10.077289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.832 [2024-10-01 15:56:10.077317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.832 qpair failed and we were unable to recover it. 00:38:30.832 [2024-10-01 15:56:10.077657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.832 [2024-10-01 15:56:10.077685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.832 qpair failed and we were unable to recover it. 00:38:30.832 [2024-10-01 15:56:10.078011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.832 [2024-10-01 15:56:10.078040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.832 qpair failed and we were unable to recover it. 00:38:30.832 [2024-10-01 15:56:10.078397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.832 [2024-10-01 15:56:10.078424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.832 qpair failed and we were unable to recover it. 
00:38:30.832 [2024-10-01 15:56:10.078842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.832 [2024-10-01 15:56:10.078869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:30.832 qpair failed and we were unable to recover it.
00:38:30.834 [2024-10-01 15:56:10.118738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.834 [2024-10-01 15:56:10.118769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.834 qpair failed and we were unable to recover it. 00:38:30.834 [2024-10-01 15:56:10.119086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.834 [2024-10-01 15:56:10.119116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.834 qpair failed and we were unable to recover it. 00:38:30.834 [2024-10-01 15:56:10.119449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.834 [2024-10-01 15:56:10.119478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.834 qpair failed and we were unable to recover it. 00:38:30.834 [2024-10-01 15:56:10.119835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.834 [2024-10-01 15:56:10.119864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.834 qpair failed and we were unable to recover it. 00:38:30.834 [2024-10-01 15:56:10.120228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.834 [2024-10-01 15:56:10.120258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.834 qpair failed and we were unable to recover it. 
00:38:30.834 [2024-10-01 15:56:10.120524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.834 [2024-10-01 15:56:10.120552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.834 qpair failed and we were unable to recover it. 00:38:30.834 [2024-10-01 15:56:10.120811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.834 [2024-10-01 15:56:10.120839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.834 qpair failed and we were unable to recover it. 00:38:30.834 [2024-10-01 15:56:10.121202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.834 [2024-10-01 15:56:10.121231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.834 qpair failed and we were unable to recover it. 00:38:30.834 [2024-10-01 15:56:10.121447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.834 [2024-10-01 15:56:10.121475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.834 qpair failed and we were unable to recover it. 00:38:30.834 [2024-10-01 15:56:10.121718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.834 [2024-10-01 15:56:10.121746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.834 qpair failed and we were unable to recover it. 
00:38:30.835 [2024-10-01 15:56:10.122091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.122121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 00:38:30.835 [2024-10-01 15:56:10.122457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.122485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 00:38:30.835 [2024-10-01 15:56:10.122738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.122764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 00:38:30.835 [2024-10-01 15:56:10.123114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.123142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 00:38:30.835 [2024-10-01 15:56:10.123486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.123514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 
00:38:30.835 [2024-10-01 15:56:10.123722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.123748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 00:38:30.835 [2024-10-01 15:56:10.124085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.124114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 00:38:30.835 [2024-10-01 15:56:10.124321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.124348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 00:38:30.835 [2024-10-01 15:56:10.124678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.124705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 00:38:30.835 [2024-10-01 15:56:10.125042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.125076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 
00:38:30.835 [2024-10-01 15:56:10.125389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.125422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 00:38:30.835 [2024-10-01 15:56:10.125749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.125776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 00:38:30.835 [2024-10-01 15:56:10.126154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.126184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 00:38:30.835 [2024-10-01 15:56:10.126505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.126532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 00:38:30.835 [2024-10-01 15:56:10.126871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.126907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 
00:38:30.835 [2024-10-01 15:56:10.127250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.127278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 00:38:30.835 [2024-10-01 15:56:10.127614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.127642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 00:38:30.835 [2024-10-01 15:56:10.128013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.128042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 00:38:30.835 [2024-10-01 15:56:10.128181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.128212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 00:38:30.835 [2024-10-01 15:56:10.128533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.128561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 
00:38:30.835 [2024-10-01 15:56:10.128785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.128812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 00:38:30.835 [2024-10-01 15:56:10.129151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.129180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 00:38:30.835 [2024-10-01 15:56:10.129478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.129506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 00:38:30.835 [2024-10-01 15:56:10.129859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.129887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 00:38:30.835 [2024-10-01 15:56:10.130229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.130258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 
00:38:30.835 [2024-10-01 15:56:10.130625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.130652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 00:38:30.835 [2024-10-01 15:56:10.131007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.131036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 00:38:30.835 [2024-10-01 15:56:10.131382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.131411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 00:38:30.835 [2024-10-01 15:56:10.131759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.131786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 00:38:30.835 [2024-10-01 15:56:10.132015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.132043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 
00:38:30.835 [2024-10-01 15:56:10.132415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.132443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 00:38:30.835 [2024-10-01 15:56:10.132767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.132795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 00:38:30.835 [2024-10-01 15:56:10.133166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.133194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 00:38:30.835 [2024-10-01 15:56:10.133542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.133569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 00:38:30.835 [2024-10-01 15:56:10.133935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.133964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 
00:38:30.835 [2024-10-01 15:56:10.134317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.134345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 00:38:30.835 [2024-10-01 15:56:10.134704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.134732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 00:38:30.835 [2024-10-01 15:56:10.135019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.135048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 00:38:30.835 [2024-10-01 15:56:10.135381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.135408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 00:38:30.835 [2024-10-01 15:56:10.135765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.135792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 
00:38:30.835 [2024-10-01 15:56:10.136208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.136237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 00:38:30.835 [2024-10-01 15:56:10.136652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.136680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 00:38:30.835 [2024-10-01 15:56:10.137021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.137051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 00:38:30.835 [2024-10-01 15:56:10.137297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.137323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 00:38:30.835 [2024-10-01 15:56:10.137669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.137697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 
00:38:30.835 [2024-10-01 15:56:10.138033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.138064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 00:38:30.835 [2024-10-01 15:56:10.138279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.138310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 00:38:30.835 [2024-10-01 15:56:10.138532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.138559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 00:38:30.835 [2024-10-01 15:56:10.138880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.138932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 00:38:30.835 [2024-10-01 15:56:10.139291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.139326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 
00:38:30.835 [2024-10-01 15:56:10.139713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.139741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 00:38:30.835 [2024-10-01 15:56:10.140160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.140189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 00:38:30.835 [2024-10-01 15:56:10.140524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.140550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 00:38:30.835 [2024-10-01 15:56:10.140871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.140907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 00:38:30.835 [2024-10-01 15:56:10.141296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.141324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 
00:38:30.835 [2024-10-01 15:56:10.141684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.141711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 00:38:30.835 [2024-10-01 15:56:10.141981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.142010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 00:38:30.835 [2024-10-01 15:56:10.142372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.142401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 00:38:30.835 [2024-10-01 15:56:10.142630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.142657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 00:38:30.835 [2024-10-01 15:56:10.142918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.142947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 
00:38:30.835 [2024-10-01 15:56:10.143226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.143253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 00:38:30.835 [2024-10-01 15:56:10.143509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.143536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 00:38:30.835 [2024-10-01 15:56:10.143892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.143941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 00:38:30.835 [2024-10-01 15:56:10.144340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.144369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 00:38:30.835 [2024-10-01 15:56:10.144727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.835 [2024-10-01 15:56:10.144755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.835 qpair failed and we were unable to recover it. 
00:38:30.835 [2024-10-01 15:56:10.145120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.145150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 00:38:30.836 [2024-10-01 15:56:10.145513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.145540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 00:38:30.836 [2024-10-01 15:56:10.145780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.145807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 00:38:30.836 [2024-10-01 15:56:10.146138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.146167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 00:38:30.836 [2024-10-01 15:56:10.146514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.146542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 
00:38:30.836 [2024-10-01 15:56:10.146903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.146932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 00:38:30.836 [2024-10-01 15:56:10.147270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.147298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 00:38:30.836 [2024-10-01 15:56:10.147645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.147673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 00:38:30.836 [2024-10-01 15:56:10.148022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.148051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 00:38:30.836 [2024-10-01 15:56:10.148404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.148432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 
00:38:30.836 [2024-10-01 15:56:10.148775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.148809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 00:38:30.836 [2024-10-01 15:56:10.149177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.149207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 00:38:30.836 [2024-10-01 15:56:10.149551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.149579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 00:38:30.836 [2024-10-01 15:56:10.149931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.149959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 00:38:30.836 [2024-10-01 15:56:10.150312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.150340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 
00:38:30.836 [2024-10-01 15:56:10.150686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.150714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 00:38:30.836 [2024-10-01 15:56:10.150961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.150989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 00:38:30.836 [2024-10-01 15:56:10.151243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.151270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 00:38:30.836 [2024-10-01 15:56:10.151589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.151617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 00:38:30.836 [2024-10-01 15:56:10.151959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.151988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 
00:38:30.836 [2024-10-01 15:56:10.152336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.152364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 00:38:30.836 [2024-10-01 15:56:10.152713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.152740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 00:38:30.836 [2024-10-01 15:56:10.153085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.153115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 00:38:30.836 [2024-10-01 15:56:10.153468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.153496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 00:38:30.836 [2024-10-01 15:56:10.153846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.153879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 
00:38:30.836 [2024-10-01 15:56:10.154246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.154276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 00:38:30.836 [2024-10-01 15:56:10.154625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.154653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 00:38:30.836 [2024-10-01 15:56:10.154965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.154994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 00:38:30.836 [2024-10-01 15:56:10.155253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.155281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 00:38:30.836 [2024-10-01 15:56:10.155611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.155639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 
00:38:30.836 [2024-10-01 15:56:10.155874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.155928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 00:38:30.836 [2024-10-01 15:56:10.156285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.156313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 00:38:30.836 [2024-10-01 15:56:10.156554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.156585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 00:38:30.836 [2024-10-01 15:56:10.156964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.156994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 00:38:30.836 [2024-10-01 15:56:10.157299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.157327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 
00:38:30.836 [2024-10-01 15:56:10.157547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.157575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 00:38:30.836 [2024-10-01 15:56:10.157934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.157964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 00:38:30.836 [2024-10-01 15:56:10.158316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.158344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 00:38:30.836 [2024-10-01 15:56:10.158693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.158722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 00:38:30.836 [2024-10-01 15:56:10.159074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.159103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 
00:38:30.836 [2024-10-01 15:56:10.159468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.159496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 00:38:30.836 [2024-10-01 15:56:10.159832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.159860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 00:38:30.836 [2024-10-01 15:56:10.160208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.160238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 00:38:30.836 [2024-10-01 15:56:10.160585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.160616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 00:38:30.836 [2024-10-01 15:56:10.160963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.160992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 
00:38:30.836 [2024-10-01 15:56:10.161364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.161392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 00:38:30.836 [2024-10-01 15:56:10.161732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.161760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 00:38:30.836 [2024-10-01 15:56:10.162124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.162152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 00:38:30.836 [2024-10-01 15:56:10.162508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.162535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 00:38:30.836 [2024-10-01 15:56:10.162888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.162926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 
00:38:30.836 [2024-10-01 15:56:10.163243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.163271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 00:38:30.836 [2024-10-01 15:56:10.163621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.163651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 00:38:30.836 [2024-10-01 15:56:10.163987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.164016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 00:38:30.836 [2024-10-01 15:56:10.164261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.164292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 00:38:30.836 [2024-10-01 15:56:10.164645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.164673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 
00:38:30.836 [2024-10-01 15:56:10.165016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.165045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 00:38:30.836 [2024-10-01 15:56:10.165383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.165410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 00:38:30.836 [2024-10-01 15:56:10.165746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.165775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 00:38:30.836 [2024-10-01 15:56:10.166133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.166161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 00:38:30.836 [2024-10-01 15:56:10.166477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.166504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 
00:38:30.836 [2024-10-01 15:56:10.166882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.166918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.836 qpair failed and we were unable to recover it. 00:38:30.836 [2024-10-01 15:56:10.167261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.836 [2024-10-01 15:56:10.167289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.837 qpair failed and we were unable to recover it. 00:38:30.837 [2024-10-01 15:56:10.167574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.837 [2024-10-01 15:56:10.167601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.837 qpair failed and we were unable to recover it. 00:38:30.837 [2024-10-01 15:56:10.167967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.837 [2024-10-01 15:56:10.167995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.837 qpair failed and we were unable to recover it. 00:38:30.837 [2024-10-01 15:56:10.168338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.837 [2024-10-01 15:56:10.168372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.837 qpair failed and we were unable to recover it. 
00:38:30.837 [2024-10-01 15:56:10.168722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.837 [2024-10-01 15:56:10.168751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.837 qpair failed and we were unable to recover it. 00:38:30.837 [2024-10-01 15:56:10.169181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.837 [2024-10-01 15:56:10.169210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.837 qpair failed and we were unable to recover it. 00:38:30.837 [2024-10-01 15:56:10.169440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.837 [2024-10-01 15:56:10.169472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.837 qpair failed and we were unable to recover it. 00:38:30.837 [2024-10-01 15:56:10.169830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.837 [2024-10-01 15:56:10.169858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.837 qpair failed and we were unable to recover it. 00:38:30.837 [2024-10-01 15:56:10.170097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.837 [2024-10-01 15:56:10.170128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.837 qpair failed and we were unable to recover it. 
00:38:30.837 [2024-10-01 15:56:10.170496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.837 [2024-10-01 15:56:10.170525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.837 qpair failed and we were unable to recover it. 00:38:30.837 [2024-10-01 15:56:10.170877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.837 [2024-10-01 15:56:10.170913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.837 qpair failed and we were unable to recover it. 00:38:30.837 [2024-10-01 15:56:10.171277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.837 [2024-10-01 15:56:10.171304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.837 qpair failed and we were unable to recover it. 00:38:30.837 [2024-10-01 15:56:10.171525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.837 [2024-10-01 15:56:10.171555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.837 qpair failed and we were unable to recover it. 00:38:30.837 [2024-10-01 15:56:10.171783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.837 [2024-10-01 15:56:10.171813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.837 qpair failed and we were unable to recover it. 
00:38:30.837 [2024-10-01 15:56:10.172142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.837 [2024-10-01 15:56:10.172170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.837 qpair failed and we were unable to recover it. 00:38:30.837 [2024-10-01 15:56:10.172401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.837 [2024-10-01 15:56:10.172432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.837 qpair failed and we were unable to recover it. 00:38:30.837 [2024-10-01 15:56:10.172762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.837 [2024-10-01 15:56:10.172790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.837 qpair failed and we were unable to recover it. 00:38:30.837 [2024-10-01 15:56:10.173142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.837 [2024-10-01 15:56:10.173172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.837 qpair failed and we were unable to recover it. 00:38:30.837 [2024-10-01 15:56:10.173531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.837 [2024-10-01 15:56:10.173559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.837 qpair failed and we were unable to recover it. 
00:38:30.837 [2024-10-01 15:56:10.173909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.837 [2024-10-01 15:56:10.173938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.837 qpair failed and we were unable to recover it. 00:38:30.837 [2024-10-01 15:56:10.174272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.837 [2024-10-01 15:56:10.174301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.837 qpair failed and we were unable to recover it. 00:38:30.837 [2024-10-01 15:56:10.174638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.837 [2024-10-01 15:56:10.174666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.837 qpair failed and we were unable to recover it. 00:38:30.837 [2024-10-01 15:56:10.175015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.837 [2024-10-01 15:56:10.175044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.837 qpair failed and we were unable to recover it. 00:38:30.837 [2024-10-01 15:56:10.175286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.837 [2024-10-01 15:56:10.175317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.837 qpair failed and we were unable to recover it. 
00:38:30.837 [2024-10-01 15:56:10.175675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.837 [2024-10-01 15:56:10.175704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.837 qpair failed and we were unable to recover it. 00:38:30.837 [2024-10-01 15:56:10.176038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.837 [2024-10-01 15:56:10.176068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.837 qpair failed and we were unable to recover it. 00:38:30.837 [2024-10-01 15:56:10.176409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.837 [2024-10-01 15:56:10.176436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.837 qpair failed and we were unable to recover it. 00:38:30.837 [2024-10-01 15:56:10.176801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.837 [2024-10-01 15:56:10.176830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.837 qpair failed and we were unable to recover it. 00:38:30.837 [2024-10-01 15:56:10.177199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.837 [2024-10-01 15:56:10.177229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.837 qpair failed and we were unable to recover it. 
00:38:30.837 [2024-10-01 15:56:10.177576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.837 [2024-10-01 15:56:10.177604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.837 qpair failed and we were unable to recover it. 00:38:30.837 [2024-10-01 15:56:10.177962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.837 [2024-10-01 15:56:10.178017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.837 qpair failed and we were unable to recover it. 00:38:30.837 [2024-10-01 15:56:10.178389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.837 [2024-10-01 15:56:10.178417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.837 qpair failed and we were unable to recover it. 00:38:30.837 [2024-10-01 15:56:10.178763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.837 [2024-10-01 15:56:10.178791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.837 qpair failed and we were unable to recover it. 00:38:30.837 [2024-10-01 15:56:10.179116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.837 [2024-10-01 15:56:10.179144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.837 qpair failed and we were unable to recover it. 
00:38:30.837 [2024-10-01 15:56:10.179469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.837 [2024-10-01 15:56:10.179497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.837 qpair failed and we were unable to recover it. 00:38:30.837 [2024-10-01 15:56:10.179853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.837 [2024-10-01 15:56:10.179882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.837 qpair failed and we were unable to recover it. 00:38:30.837 [2024-10-01 15:56:10.180234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.837 [2024-10-01 15:56:10.180262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.837 qpair failed and we were unable to recover it. 00:38:30.837 [2024-10-01 15:56:10.180474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.837 [2024-10-01 15:56:10.180504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.837 qpair failed and we were unable to recover it. 00:38:30.837 [2024-10-01 15:56:10.180740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.837 [2024-10-01 15:56:10.180773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.837 qpair failed and we were unable to recover it. 
00:38:30.837-00:38:30.839 [... the same two-line pattern (posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 → nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 → "qpair failed and we were unable to recover it") repeats continuously from 15:56:10.181125 through 15:56:10.220999 ...]
00:38:30.839 [2024-10-01 15:56:10.221331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.839 [2024-10-01 15:56:10.221359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.839 qpair failed and we were unable to recover it. 00:38:30.839 [2024-10-01 15:56:10.221755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.839 [2024-10-01 15:56:10.221784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.839 qpair failed and we were unable to recover it. 00:38:30.839 [2024-10-01 15:56:10.222027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.839 [2024-10-01 15:56:10.222055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.839 qpair failed and we were unable to recover it. 00:38:30.839 [2024-10-01 15:56:10.222366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.839 [2024-10-01 15:56:10.222394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.839 qpair failed and we were unable to recover it. 00:38:30.839 [2024-10-01 15:56:10.222744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.839 [2024-10-01 15:56:10.222772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.839 qpair failed and we were unable to recover it. 
00:38:30.839 [2024-10-01 15:56:10.223137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.839 [2024-10-01 15:56:10.223166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.839 qpair failed and we were unable to recover it. 00:38:30.839 [2024-10-01 15:56:10.223498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.839 [2024-10-01 15:56:10.223526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.839 qpair failed and we were unable to recover it. 00:38:30.839 [2024-10-01 15:56:10.223862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.839 [2024-10-01 15:56:10.223890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.839 qpair failed and we were unable to recover it. 00:38:30.839 [2024-10-01 15:56:10.224249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.839 [2024-10-01 15:56:10.224278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.839 qpair failed and we were unable to recover it. 00:38:30.839 [2024-10-01 15:56:10.224652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.839 [2024-10-01 15:56:10.224680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.839 qpair failed and we were unable to recover it. 
00:38:30.839 [2024-10-01 15:56:10.224934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.839 [2024-10-01 15:56:10.224963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.839 qpair failed and we were unable to recover it. 00:38:30.839 [2024-10-01 15:56:10.225189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.839 [2024-10-01 15:56:10.225220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.839 qpair failed and we were unable to recover it. 00:38:30.839 [2024-10-01 15:56:10.225561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.839 [2024-10-01 15:56:10.225589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.839 qpair failed and we were unable to recover it. 00:38:30.839 [2024-10-01 15:56:10.225815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.839 [2024-10-01 15:56:10.225846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.839 qpair failed and we were unable to recover it. 00:38:30.839 [2024-10-01 15:56:10.226089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.839 [2024-10-01 15:56:10.226124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.839 qpair failed and we were unable to recover it. 
00:38:30.839 [2024-10-01 15:56:10.226341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.839 [2024-10-01 15:56:10.226367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.839 qpair failed and we were unable to recover it. 00:38:30.839 [2024-10-01 15:56:10.226727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.839 [2024-10-01 15:56:10.226754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.839 qpair failed and we were unable to recover it. 00:38:30.839 [2024-10-01 15:56:10.227079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.839 [2024-10-01 15:56:10.227109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.839 qpair failed and we were unable to recover it. 00:38:30.839 [2024-10-01 15:56:10.227469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.839 [2024-10-01 15:56:10.227497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.839 qpair failed and we were unable to recover it. 00:38:30.839 [2024-10-01 15:56:10.227839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.839 [2024-10-01 15:56:10.227867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.839 qpair failed and we were unable to recover it. 
00:38:30.839 [2024-10-01 15:56:10.228102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.839 [2024-10-01 15:56:10.228131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.839 qpair failed and we were unable to recover it. 00:38:30.839 [2024-10-01 15:56:10.228381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.839 [2024-10-01 15:56:10.228413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.839 qpair failed and we were unable to recover it. 00:38:30.839 [2024-10-01 15:56:10.228738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.839 [2024-10-01 15:56:10.228767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.839 qpair failed and we were unable to recover it. 00:38:30.839 [2024-10-01 15:56:10.229160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.839 [2024-10-01 15:56:10.229191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.839 qpair failed and we were unable to recover it. 00:38:30.839 [2024-10-01 15:56:10.229432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.839 [2024-10-01 15:56:10.229459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.839 qpair failed and we were unable to recover it. 
00:38:30.839 [2024-10-01 15:56:10.229803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.840 [2024-10-01 15:56:10.229832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.840 qpair failed and we were unable to recover it. 00:38:30.840 [2024-10-01 15:56:10.229966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.840 [2024-10-01 15:56:10.229994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.840 qpair failed and we were unable to recover it. 00:38:30.840 [2024-10-01 15:56:10.230357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.840 [2024-10-01 15:56:10.230385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.840 qpair failed and we were unable to recover it. 00:38:30.840 [2024-10-01 15:56:10.230726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.840 [2024-10-01 15:56:10.230755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.840 qpair failed and we were unable to recover it. 00:38:30.840 [2024-10-01 15:56:10.231111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.840 [2024-10-01 15:56:10.231141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.840 qpair failed and we were unable to recover it. 
00:38:30.840 [2024-10-01 15:56:10.231396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.840 [2024-10-01 15:56:10.231422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.840 qpair failed and we were unable to recover it. 00:38:30.840 [2024-10-01 15:56:10.231780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.840 [2024-10-01 15:56:10.231808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.840 qpair failed and we were unable to recover it. 00:38:30.840 [2024-10-01 15:56:10.232165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.840 [2024-10-01 15:56:10.232194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.840 qpair failed and we were unable to recover it. 00:38:30.840 [2024-10-01 15:56:10.232543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.840 [2024-10-01 15:56:10.232570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.840 qpair failed and we were unable to recover it. 00:38:30.840 [2024-10-01 15:56:10.232923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.840 [2024-10-01 15:56:10.232952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.840 qpair failed and we were unable to recover it. 
00:38:30.840 [2024-10-01 15:56:10.233282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.840 [2024-10-01 15:56:10.233310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.840 qpair failed and we were unable to recover it. 00:38:30.840 [2024-10-01 15:56:10.233647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.840 [2024-10-01 15:56:10.233675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.840 qpair failed and we were unable to recover it. 00:38:30.840 [2024-10-01 15:56:10.234037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.840 [2024-10-01 15:56:10.234065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.840 qpair failed and we were unable to recover it. 00:38:30.840 [2024-10-01 15:56:10.234290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.840 [2024-10-01 15:56:10.234321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.840 qpair failed and we were unable to recover it. 00:38:30.840 [2024-10-01 15:56:10.234702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.840 [2024-10-01 15:56:10.234732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.840 qpair failed and we were unable to recover it. 
00:38:30.840 [2024-10-01 15:56:10.234950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.840 [2024-10-01 15:56:10.234979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.840 qpair failed and we were unable to recover it. 00:38:30.840 [2024-10-01 15:56:10.235343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.840 [2024-10-01 15:56:10.235376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.840 qpair failed and we were unable to recover it. 00:38:30.840 [2024-10-01 15:56:10.235740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.840 [2024-10-01 15:56:10.235769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.840 qpair failed and we were unable to recover it. 00:38:30.840 [2024-10-01 15:56:10.236173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.840 [2024-10-01 15:56:10.236203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.840 qpair failed and we were unable to recover it. 00:38:30.840 [2024-10-01 15:56:10.236416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.840 [2024-10-01 15:56:10.236447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.840 qpair failed and we were unable to recover it. 
00:38:30.840 [2024-10-01 15:56:10.236674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.840 [2024-10-01 15:56:10.236703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.840 qpair failed and we were unable to recover it. 00:38:30.840 [2024-10-01 15:56:10.237058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.840 [2024-10-01 15:56:10.237087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.840 qpair failed and we were unable to recover it. 00:38:30.840 [2024-10-01 15:56:10.237450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.840 [2024-10-01 15:56:10.237478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.840 qpair failed and we were unable to recover it. 00:38:30.840 [2024-10-01 15:56:10.237834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.840 [2024-10-01 15:56:10.237862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.840 qpair failed and we were unable to recover it. 00:38:30.840 [2024-10-01 15:56:10.238209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.840 [2024-10-01 15:56:10.238238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.840 qpair failed and we were unable to recover it. 
00:38:30.840 [2024-10-01 15:56:10.238570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.840 [2024-10-01 15:56:10.238599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.840 qpair failed and we were unable to recover it. 00:38:30.840 [2024-10-01 15:56:10.238931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.840 [2024-10-01 15:56:10.238960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.840 qpair failed and we were unable to recover it. 00:38:30.840 [2024-10-01 15:56:10.239196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.840 [2024-10-01 15:56:10.239223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.840 qpair failed and we were unable to recover it. 00:38:30.840 [2024-10-01 15:56:10.239617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.840 [2024-10-01 15:56:10.239645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.840 qpair failed and we were unable to recover it. 00:38:30.840 [2024-10-01 15:56:10.239873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.840 [2024-10-01 15:56:10.239923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.840 qpair failed and we were unable to recover it. 
00:38:30.840 [2024-10-01 15:56:10.240270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.840 [2024-10-01 15:56:10.240300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.840 qpair failed and we were unable to recover it. 00:38:30.840 [2024-10-01 15:56:10.240654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.840 [2024-10-01 15:56:10.240683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.840 qpair failed and we were unable to recover it. 00:38:30.840 [2024-10-01 15:56:10.241003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.840 [2024-10-01 15:56:10.241033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.840 qpair failed and we were unable to recover it. 00:38:30.840 [2024-10-01 15:56:10.241359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.840 [2024-10-01 15:56:10.241387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.840 qpair failed and we were unable to recover it. 00:38:30.840 [2024-10-01 15:56:10.241713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.840 [2024-10-01 15:56:10.241741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.840 qpair failed and we were unable to recover it. 
00:38:30.840 [2024-10-01 15:56:10.241967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.840 [2024-10-01 15:56:10.241997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.840 qpair failed and we were unable to recover it. 00:38:30.840 [2024-10-01 15:56:10.242340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.840 [2024-10-01 15:56:10.242368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.840 qpair failed and we were unable to recover it. 00:38:30.840 [2024-10-01 15:56:10.242729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.840 [2024-10-01 15:56:10.242757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.840 qpair failed and we were unable to recover it. 00:38:30.840 [2024-10-01 15:56:10.243092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.840 [2024-10-01 15:56:10.243122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.840 qpair failed and we were unable to recover it. 00:38:30.840 [2024-10-01 15:56:10.243355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.840 [2024-10-01 15:56:10.243382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.840 qpair failed and we were unable to recover it. 
00:38:30.840 [2024-10-01 15:56:10.243731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.840 [2024-10-01 15:56:10.243759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.840 qpair failed and we were unable to recover it. 00:38:30.840 [2024-10-01 15:56:10.244002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.840 [2024-10-01 15:56:10.244032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.840 qpair failed and we were unable to recover it. 00:38:30.840 [2024-10-01 15:56:10.244394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.840 [2024-10-01 15:56:10.244422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.840 qpair failed and we were unable to recover it. 00:38:30.840 [2024-10-01 15:56:10.244792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.840 [2024-10-01 15:56:10.244820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.840 qpair failed and we were unable to recover it. 00:38:30.840 [2024-10-01 15:56:10.245174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.840 [2024-10-01 15:56:10.245203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.840 qpair failed and we were unable to recover it. 
00:38:30.840 [2024-10-01 15:56:10.245554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.840 [2024-10-01 15:56:10.245582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.840 qpair failed and we were unable to recover it. 00:38:30.840 [2024-10-01 15:56:10.245957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.840 [2024-10-01 15:56:10.245987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.840 qpair failed and we were unable to recover it. 00:38:30.840 [2024-10-01 15:56:10.246309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.840 [2024-10-01 15:56:10.246336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.840 qpair failed and we were unable to recover it. 00:38:30.840 [2024-10-01 15:56:10.246556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.840 [2024-10-01 15:56:10.246584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.840 qpair failed and we were unable to recover it. 00:38:30.840 [2024-10-01 15:56:10.246924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.840 [2024-10-01 15:56:10.246954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:30.840 qpair failed and we were unable to recover it. 
00:38:31.113 [2024-10-01 15:56:10.247328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.113 [2024-10-01 15:56:10.247358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.113 qpair failed and we were unable to recover it. 00:38:31.113 [2024-10-01 15:56:10.247566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.113 [2024-10-01 15:56:10.247595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.113 qpair failed and we were unable to recover it. 00:38:31.113 [2024-10-01 15:56:10.247962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.113 [2024-10-01 15:56:10.247991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.113 qpair failed and we were unable to recover it. 00:38:31.113 [2024-10-01 15:56:10.248344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.113 [2024-10-01 15:56:10.248372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.113 qpair failed and we were unable to recover it. 00:38:31.113 [2024-10-01 15:56:10.248715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.113 [2024-10-01 15:56:10.248743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.113 qpair failed and we were unable to recover it. 
00:38:31.115 [2024-10-01 15:56:10.288038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.115 [2024-10-01 15:56:10.288067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.115 qpair failed and we were unable to recover it. 00:38:31.115 [2024-10-01 15:56:10.288439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.115 [2024-10-01 15:56:10.288467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.115 qpair failed and we were unable to recover it. 00:38:31.115 [2024-10-01 15:56:10.288876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.115 [2024-10-01 15:56:10.288921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.115 qpair failed and we were unable to recover it. 00:38:31.115 [2024-10-01 15:56:10.289139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.115 [2024-10-01 15:56:10.289167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.115 qpair failed and we were unable to recover it. 00:38:31.115 [2024-10-01 15:56:10.289527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.115 [2024-10-01 15:56:10.289555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.115 qpair failed and we were unable to recover it. 
00:38:31.115 [2024-10-01 15:56:10.289905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.115 [2024-10-01 15:56:10.289935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.115 qpair failed and we were unable to recover it. 00:38:31.115 [2024-10-01 15:56:10.290265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.115 [2024-10-01 15:56:10.290293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.115 qpair failed and we were unable to recover it. 00:38:31.115 [2024-10-01 15:56:10.290645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.115 [2024-10-01 15:56:10.290673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.115 qpair failed and we were unable to recover it. 00:38:31.115 [2024-10-01 15:56:10.291034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.115 [2024-10-01 15:56:10.291063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.115 qpair failed and we were unable to recover it. 00:38:31.115 [2024-10-01 15:56:10.291400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.115 [2024-10-01 15:56:10.291428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.115 qpair failed and we were unable to recover it. 
00:38:31.115 [2024-10-01 15:56:10.291743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.115 [2024-10-01 15:56:10.291772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.115 qpair failed and we were unable to recover it. 00:38:31.115 [2024-10-01 15:56:10.292134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.115 [2024-10-01 15:56:10.292168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.115 qpair failed and we were unable to recover it. 00:38:31.115 [2024-10-01 15:56:10.292484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.115 [2024-10-01 15:56:10.292513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.115 qpair failed and we were unable to recover it. 00:38:31.115 [2024-10-01 15:56:10.292886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.115 [2024-10-01 15:56:10.292924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.115 qpair failed and we were unable to recover it. 00:38:31.115 [2024-10-01 15:56:10.293281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.115 [2024-10-01 15:56:10.293309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.115 qpair failed and we were unable to recover it. 
00:38:31.115 [2024-10-01 15:56:10.293548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.115 [2024-10-01 15:56:10.293576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.115 qpair failed and we were unable to recover it. 00:38:31.115 [2024-10-01 15:56:10.293910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.115 [2024-10-01 15:56:10.293939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.115 qpair failed and we were unable to recover it. 00:38:31.115 [2024-10-01 15:56:10.294165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.115 [2024-10-01 15:56:10.294192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.115 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 15:56:10.294524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 15:56:10.294552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 15:56:10.294907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 15:56:10.294936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 
00:38:31.116 [2024-10-01 15:56:10.295224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 15:56:10.295252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 15:56:10.295590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 15:56:10.295618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 15:56:10.295988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 15:56:10.296017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 15:56:10.296364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 15:56:10.296391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 15:56:10.296762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 15:56:10.296790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 
00:38:31.116 [2024-10-01 15:56:10.299146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 15:56:10.299206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 15:56:10.299502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 15:56:10.299533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 15:56:10.299873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 15:56:10.299915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 15:56:10.300268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 15:56:10.300296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 15:56:10.300653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 15:56:10.300681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 
00:38:31.116 [2024-10-01 15:56:10.301034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 15:56:10.301064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 15:56:10.301390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 15:56:10.301419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 15:56:10.301840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 15:56:10.301868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 15:56:10.302242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 15:56:10.302270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 15:56:10.302503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 15:56:10.302531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 
00:38:31.116 [2024-10-01 15:56:10.302749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 15:56:10.302780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 15:56:10.303105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 15:56:10.303135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 15:56:10.303391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 15:56:10.303417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 15:56:10.303786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 15:56:10.303815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 15:56:10.304160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 15:56:10.304189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 
00:38:31.116 [2024-10-01 15:56:10.304536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 15:56:10.304563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 15:56:10.304779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 15:56:10.304811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 15:56:10.305154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 15:56:10.305185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 15:56:10.305513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 15:56:10.305542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 15:56:10.305884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 15:56:10.305923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 
00:38:31.116 [2024-10-01 15:56:10.306165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 15:56:10.306195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 15:56:10.306532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 15:56:10.306560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 15:56:10.306773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 15:56:10.306803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 15:56:10.307126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 15:56:10.307155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 15:56:10.307525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 15:56:10.307553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 
00:38:31.116 [2024-10-01 15:56:10.307878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 15:56:10.307914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 15:56:10.308305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 15:56:10.308340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 15:56:10.308666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 15:56:10.308694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 15:56:10.309038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 15:56:10.309068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 15:56:10.309404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 15:56:10.309431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 
00:38:31.116 [2024-10-01 15:56:10.309766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 15:56:10.309792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 15:56:10.310151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 15:56:10.310180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 15:56:10.310522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 15:56:10.310549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 15:56:10.310916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 15:56:10.310945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 15:56:10.311205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 15:56:10.311232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 
00:38:31.116 [2024-10-01 15:56:10.311510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 15:56:10.311537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 15:56:10.311888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 15:56:10.311924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 15:56:10.312179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 15:56:10.312209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 15:56:10.312514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 15:56:10.312542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 15:56:10.312885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 15:56:10.312925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 
00:38:31.116 [2024-10-01 15:56:10.313276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 15:56:10.313303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 15:56:10.313731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 15:56:10.313759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 15:56:10.314116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 15:56:10.314144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 15:56:10.314493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 15:56:10.314521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 15:56:10.314904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 15:56:10.314934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 
00:38:31.116 [2024-10-01 15:56:10.315161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 15:56:10.315192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 15:56:10.315523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.117 [2024-10-01 15:56:10.315551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.117 qpair failed and we were unable to recover it. 00:38:31.117 [2024-10-01 15:56:10.315885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.117 [2024-10-01 15:56:10.315922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.117 qpair failed and we were unable to recover it. 00:38:31.117 [2024-10-01 15:56:10.316195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.117 [2024-10-01 15:56:10.316225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.117 qpair failed and we were unable to recover it. 00:38:31.117 [2024-10-01 15:56:10.316443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.117 [2024-10-01 15:56:10.316473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.117 qpair failed and we were unable to recover it. 
00:38:31.117 [2024-10-01 15:56:10.316852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.117 [2024-10-01 15:56:10.316881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.117 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for timestamps 15:56:10.317236 through 15:56:10.320867 ...]
00:38:31.117 [2024-10-01 15:56:10.321094] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c66260 is same with the state(6) to be set
00:38:31.117 Read completed with error (sct=0, sc=8)
00:38:31.117 starting I/O failed
[... further "Read/Write completed with error (sct=0, sc=8)" / "starting I/O failed" pairs repeat ...]
00:38:31.117 [2024-10-01 15:56:10.322065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[... further "Read/Write completed with error (sct=0, sc=8)" / "starting I/O failed" pairs repeat ...]
00:38:31.117 [2024-10-01 15:56:10.322633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:38:31.117 [2024-10-01 15:56:10.322923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.117 [2024-10-01 15:56:10.322958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.117 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for timestamps 15:56:10.323387 through 15:56:10.353732; the elapsed-time prefix advances from 00:38:31.117 to 00:38:31.119 over this run ...]
00:38:31.119 [2024-10-01 15:56:10.354065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.119 [2024-10-01 15:56:10.354094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.119 qpair failed and we were unable to recover it. 00:38:31.119 [2024-10-01 15:56:10.354443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.119 [2024-10-01 15:56:10.354470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.119 qpair failed and we were unable to recover it. 00:38:31.119 [2024-10-01 15:56:10.354764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.119 [2024-10-01 15:56:10.354792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.119 qpair failed and we were unable to recover it. 00:38:31.119 [2024-10-01 15:56:10.354961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.119 [2024-10-01 15:56:10.354990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.119 qpair failed and we were unable to recover it. 00:38:31.119 [2024-10-01 15:56:10.355331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.119 [2024-10-01 15:56:10.355358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.119 qpair failed and we were unable to recover it. 
00:38:31.119 [2024-10-01 15:56:10.355705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.119 [2024-10-01 15:56:10.355732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.119 qpair failed and we were unable to recover it. 00:38:31.119 [2024-10-01 15:56:10.356065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.119 [2024-10-01 15:56:10.356092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.119 qpair failed and we were unable to recover it. 00:38:31.119 [2024-10-01 15:56:10.356428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.119 [2024-10-01 15:56:10.356456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.119 qpair failed and we were unable to recover it. 00:38:31.119 [2024-10-01 15:56:10.356804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.119 [2024-10-01 15:56:10.356832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.119 qpair failed and we were unable to recover it. 00:38:31.119 [2024-10-01 15:56:10.357170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.119 [2024-10-01 15:56:10.357201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.119 qpair failed and we were unable to recover it. 
00:38:31.119 [2024-10-01 15:56:10.357516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.119 [2024-10-01 15:56:10.357543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.119 qpair failed and we were unable to recover it. 00:38:31.119 [2024-10-01 15:56:10.357910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.119 [2024-10-01 15:56:10.357941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.119 qpair failed and we were unable to recover it. 00:38:31.119 [2024-10-01 15:56:10.358292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.119 [2024-10-01 15:56:10.358321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.119 qpair failed and we were unable to recover it. 00:38:31.119 [2024-10-01 15:56:10.358672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.119 [2024-10-01 15:56:10.358701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.119 qpair failed and we were unable to recover it. 00:38:31.119 [2024-10-01 15:56:10.359088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.119 [2024-10-01 15:56:10.359116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.119 qpair failed and we were unable to recover it. 
00:38:31.119 [2024-10-01 15:56:10.359447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.119 [2024-10-01 15:56:10.359474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.119 qpair failed and we were unable to recover it. 00:38:31.119 [2024-10-01 15:56:10.359816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.119 [2024-10-01 15:56:10.359844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.119 qpair failed and we were unable to recover it. 00:38:31.119 [2024-10-01 15:56:10.360168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.119 [2024-10-01 15:56:10.360196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.119 qpair failed and we were unable to recover it. 00:38:31.119 [2024-10-01 15:56:10.360445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.119 [2024-10-01 15:56:10.360472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.119 qpair failed and we were unable to recover it. 00:38:31.119 [2024-10-01 15:56:10.360773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.119 [2024-10-01 15:56:10.360800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.119 qpair failed and we were unable to recover it. 
00:38:31.119 [2024-10-01 15:56:10.361154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.119 [2024-10-01 15:56:10.361182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.119 qpair failed and we were unable to recover it. 00:38:31.119 [2024-10-01 15:56:10.361389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.119 [2024-10-01 15:56:10.361416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.119 qpair failed and we were unable to recover it. 00:38:31.119 [2024-10-01 15:56:10.361682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.119 [2024-10-01 15:56:10.361709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.119 qpair failed and we were unable to recover it. 00:38:31.119 [2024-10-01 15:56:10.362023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.119 [2024-10-01 15:56:10.362052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.119 qpair failed and we were unable to recover it. 00:38:31.119 [2024-10-01 15:56:10.362278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.119 [2024-10-01 15:56:10.362306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.119 qpair failed and we were unable to recover it. 
00:38:31.119 [2024-10-01 15:56:10.362646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.119 [2024-10-01 15:56:10.362673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.119 qpair failed and we were unable to recover it. 00:38:31.119 [2024-10-01 15:56:10.363016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.119 [2024-10-01 15:56:10.363045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.119 qpair failed and we were unable to recover it. 00:38:31.119 [2024-10-01 15:56:10.363370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.119 [2024-10-01 15:56:10.363398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.119 qpair failed and we were unable to recover it. 00:38:31.119 [2024-10-01 15:56:10.363751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.119 [2024-10-01 15:56:10.363778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.119 qpair failed and we were unable to recover it. 00:38:31.119 [2024-10-01 15:56:10.364119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.119 [2024-10-01 15:56:10.364147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.119 qpair failed and we were unable to recover it. 
00:38:31.119 [2024-10-01 15:56:10.364483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.119 [2024-10-01 15:56:10.364510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.119 qpair failed and we were unable to recover it. 00:38:31.119 [2024-10-01 15:56:10.364815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.119 [2024-10-01 15:56:10.364844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.119 qpair failed and we were unable to recover it. 00:38:31.119 [2024-10-01 15:56:10.365163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.119 [2024-10-01 15:56:10.365192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.119 qpair failed and we were unable to recover it. 00:38:31.119 [2024-10-01 15:56:10.365591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.119 [2024-10-01 15:56:10.365619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.119 qpair failed and we were unable to recover it. 00:38:31.119 [2024-10-01 15:56:10.366035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.119 [2024-10-01 15:56:10.366064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.119 qpair failed and we were unable to recover it. 
00:38:31.119 [2024-10-01 15:56:10.366414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.119 [2024-10-01 15:56:10.366440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.119 qpair failed and we were unable to recover it. 00:38:31.119 [2024-10-01 15:56:10.366798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.119 [2024-10-01 15:56:10.366824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.119 qpair failed and we were unable to recover it. 00:38:31.119 [2024-10-01 15:56:10.367149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.119 [2024-10-01 15:56:10.367178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.119 qpair failed and we were unable to recover it. 00:38:31.119 [2024-10-01 15:56:10.367528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.119 [2024-10-01 15:56:10.367556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.119 qpair failed and we were unable to recover it. 00:38:31.119 [2024-10-01 15:56:10.367876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.119 [2024-10-01 15:56:10.367911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.119 qpair failed and we were unable to recover it. 
00:38:31.119 [2024-10-01 15:56:10.368261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.119 [2024-10-01 15:56:10.368288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.119 qpair failed and we were unable to recover it. 00:38:31.119 [2024-10-01 15:56:10.368659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.119 [2024-10-01 15:56:10.368686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.119 qpair failed and we were unable to recover it. 00:38:31.119 [2024-10-01 15:56:10.369019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.119 [2024-10-01 15:56:10.369046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.119 qpair failed and we were unable to recover it. 00:38:31.119 [2024-10-01 15:56:10.369404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.119 [2024-10-01 15:56:10.369431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.119 qpair failed and we were unable to recover it. 00:38:31.119 [2024-10-01 15:56:10.369764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.119 [2024-10-01 15:56:10.369791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.119 qpair failed and we were unable to recover it. 
00:38:31.119 [2024-10-01 15:56:10.370072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.119 [2024-10-01 15:56:10.370102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.119 qpair failed and we were unable to recover it. 00:38:31.119 [2024-10-01 15:56:10.370460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.119 [2024-10-01 15:56:10.370487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.119 qpair failed and we were unable to recover it. 00:38:31.119 [2024-10-01 15:56:10.370832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.119 [2024-10-01 15:56:10.370859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.119 qpair failed and we were unable to recover it. 00:38:31.119 [2024-10-01 15:56:10.371186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.119 [2024-10-01 15:56:10.371215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.119 qpair failed and we were unable to recover it. 00:38:31.119 [2024-10-01 15:56:10.371560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.119 [2024-10-01 15:56:10.371587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.119 qpair failed and we were unable to recover it. 
00:38:31.119 [2024-10-01 15:56:10.371940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.119 [2024-10-01 15:56:10.371974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.119 qpair failed and we were unable to recover it. 00:38:31.119 [2024-10-01 15:56:10.372314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.119 [2024-10-01 15:56:10.372348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.119 qpair failed and we were unable to recover it. 00:38:31.119 [2024-10-01 15:56:10.372682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.119 [2024-10-01 15:56:10.372710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.119 qpair failed and we were unable to recover it. 00:38:31.119 [2024-10-01 15:56:10.373033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.119 [2024-10-01 15:56:10.373061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.119 qpair failed and we were unable to recover it. 00:38:31.119 [2024-10-01 15:56:10.373391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.119 [2024-10-01 15:56:10.373418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.119 qpair failed and we were unable to recover it. 
00:38:31.119 [2024-10-01 15:56:10.373763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.119 [2024-10-01 15:56:10.373790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.119 qpair failed and we were unable to recover it. 00:38:31.119 [2024-10-01 15:56:10.374125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.119 [2024-10-01 15:56:10.374154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.119 qpair failed and we were unable to recover it. 00:38:31.119 [2024-10-01 15:56:10.374500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.120 [2024-10-01 15:56:10.374528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.120 qpair failed and we were unable to recover it. 00:38:31.120 [2024-10-01 15:56:10.374765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.120 [2024-10-01 15:56:10.374792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.120 qpair failed and we were unable to recover it. 00:38:31.120 [2024-10-01 15:56:10.375123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.120 [2024-10-01 15:56:10.375151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.120 qpair failed and we were unable to recover it. 
00:38:31.120 [2024-10-01 15:56:10.375512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.120 [2024-10-01 15:56:10.375540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.120 qpair failed and we were unable to recover it. 00:38:31.120 [2024-10-01 15:56:10.375874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.120 [2024-10-01 15:56:10.375908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.120 qpair failed and we were unable to recover it. 00:38:31.120 [2024-10-01 15:56:10.376262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.120 [2024-10-01 15:56:10.376290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.120 qpair failed and we were unable to recover it. 00:38:31.120 [2024-10-01 15:56:10.376619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.120 [2024-10-01 15:56:10.376647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.120 qpair failed and we were unable to recover it. 00:38:31.120 [2024-10-01 15:56:10.376999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.120 [2024-10-01 15:56:10.377027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.120 qpair failed and we were unable to recover it. 
00:38:31.120 [2024-10-01 15:56:10.377386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.120 [2024-10-01 15:56:10.377414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.120 qpair failed and we were unable to recover it. 00:38:31.120 [2024-10-01 15:56:10.377767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.120 [2024-10-01 15:56:10.377795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.120 qpair failed and we were unable to recover it. 00:38:31.120 [2024-10-01 15:56:10.378035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.120 [2024-10-01 15:56:10.378064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.120 qpair failed and we were unable to recover it. 00:38:31.120 [2024-10-01 15:56:10.378401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.120 [2024-10-01 15:56:10.378428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.120 qpair failed and we were unable to recover it. 00:38:31.120 [2024-10-01 15:56:10.378754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.120 [2024-10-01 15:56:10.378782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.120 qpair failed and we were unable to recover it. 
00:38:31.120 [2024-10-01 15:56:10.379132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.120 [2024-10-01 15:56:10.379160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.120 qpair failed and we were unable to recover it. 00:38:31.120 [2024-10-01 15:56:10.379387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.120 [2024-10-01 15:56:10.379414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.120 qpair failed and we were unable to recover it. 00:38:31.120 [2024-10-01 15:56:10.379743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.120 [2024-10-01 15:56:10.379771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.120 qpair failed and we were unable to recover it. 00:38:31.120 [2024-10-01 15:56:10.380125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.120 [2024-10-01 15:56:10.380153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.120 qpair failed and we were unable to recover it. 00:38:31.120 [2024-10-01 15:56:10.380405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.120 [2024-10-01 15:56:10.380435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.120 qpair failed and we were unable to recover it. 
00:38:31.120 [2024-10-01 15:56:10.380765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.120 [2024-10-01 15:56:10.380793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.120 qpair failed and we were unable to recover it.
00:38:31.120 [2024-10-01 15:56:10.381153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.120 [2024-10-01 15:56:10.381183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.120 qpair failed and we were unable to recover it.
00:38:31.120 [2024-10-01 15:56:10.381538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.120 [2024-10-01 15:56:10.381567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.120 qpair failed and we were unable to recover it.
00:38:31.120 [2024-10-01 15:56:10.381928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.120 [2024-10-01 15:56:10.381963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.120 qpair failed and we were unable to recover it.
00:38:31.120 [2024-10-01 15:56:10.382318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.120 [2024-10-01 15:56:10.382345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.120 qpair failed and we were unable to recover it.
00:38:31.120 [2024-10-01 15:56:10.382700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.120 [2024-10-01 15:56:10.382728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.120 qpair failed and we were unable to recover it.
00:38:31.120 [2024-10-01 15:56:10.383100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.120 [2024-10-01 15:56:10.383128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.120 qpair failed and we were unable to recover it.
00:38:31.120 [2024-10-01 15:56:10.383482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.120 [2024-10-01 15:56:10.383509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.120 qpair failed and we were unable to recover it.
00:38:31.120 [2024-10-01 15:56:10.383860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.120 [2024-10-01 15:56:10.383889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.120 qpair failed and we were unable to recover it.
00:38:31.120 [2024-10-01 15:56:10.384211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.120 [2024-10-01 15:56:10.384238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.120 qpair failed and we were unable to recover it.
00:38:31.120 [2024-10-01 15:56:10.384579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.120 [2024-10-01 15:56:10.384607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.120 qpair failed and we were unable to recover it.
00:38:31.120 [2024-10-01 15:56:10.384952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.120 [2024-10-01 15:56:10.384980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.120 qpair failed and we were unable to recover it.
00:38:31.120 [2024-10-01 15:56:10.385249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.120 [2024-10-01 15:56:10.385276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.120 qpair failed and we were unable to recover it.
00:38:31.120 [2024-10-01 15:56:10.385627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.120 [2024-10-01 15:56:10.385654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.120 qpair failed and we were unable to recover it.
00:38:31.120 [2024-10-01 15:56:10.385998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.120 [2024-10-01 15:56:10.386032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.120 qpair failed and we were unable to recover it.
00:38:31.120 [2024-10-01 15:56:10.386295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.120 [2024-10-01 15:56:10.386322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.120 qpair failed and we were unable to recover it.
00:38:31.120 [2024-10-01 15:56:10.386543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.120 [2024-10-01 15:56:10.386571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.120 qpair failed and we were unable to recover it.
00:38:31.120 [2024-10-01 15:56:10.386939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.120 [2024-10-01 15:56:10.386967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.120 qpair failed and we were unable to recover it.
00:38:31.120 [2024-10-01 15:56:10.387295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.120 [2024-10-01 15:56:10.387322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.120 qpair failed and we were unable to recover it.
00:38:31.120 [2024-10-01 15:56:10.387663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.120 [2024-10-01 15:56:10.387689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.120 qpair failed and we were unable to recover it.
00:38:31.120 [2024-10-01 15:56:10.388031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.120 [2024-10-01 15:56:10.388060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.120 qpair failed and we were unable to recover it.
00:38:31.120 [2024-10-01 15:56:10.388298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.120 [2024-10-01 15:56:10.388326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.120 qpair failed and we were unable to recover it.
00:38:31.120 [2024-10-01 15:56:10.388629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.120 [2024-10-01 15:56:10.388657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.120 qpair failed and we were unable to recover it.
00:38:31.120 [2024-10-01 15:56:10.388892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.120 [2024-10-01 15:56:10.388999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.120 qpair failed and we were unable to recover it.
00:38:31.120 [2024-10-01 15:56:10.389344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.120 [2024-10-01 15:56:10.389372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.120 qpair failed and we were unable to recover it.
00:38:31.120 [2024-10-01 15:56:10.389711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.120 [2024-10-01 15:56:10.389738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.120 qpair failed and we were unable to recover it.
00:38:31.120 [2024-10-01 15:56:10.390159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.120 [2024-10-01 15:56:10.390188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.120 qpair failed and we were unable to recover it.
00:38:31.120 [2024-10-01 15:56:10.390543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.120 [2024-10-01 15:56:10.390571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.120 qpair failed and we were unable to recover it.
00:38:31.120 [2024-10-01 15:56:10.390908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.120 [2024-10-01 15:56:10.390937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.120 qpair failed and we were unable to recover it.
00:38:31.120 [2024-10-01 15:56:10.391177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.120 [2024-10-01 15:56:10.391204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.120 qpair failed and we were unable to recover it.
00:38:31.120 [2024-10-01 15:56:10.391576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.120 [2024-10-01 15:56:10.391603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.120 qpair failed and we were unable to recover it.
00:38:31.120 [2024-10-01 15:56:10.391965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.120 [2024-10-01 15:56:10.391993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.120 qpair failed and we were unable to recover it.
00:38:31.120 [2024-10-01 15:56:10.392421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.120 [2024-10-01 15:56:10.392449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.120 qpair failed and we were unable to recover it.
00:38:31.120 [2024-10-01 15:56:10.392769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.120 [2024-10-01 15:56:10.392797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.120 qpair failed and we were unable to recover it.
00:38:31.120 [2024-10-01 15:56:10.394355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.120 [2024-10-01 15:56:10.394414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.120 qpair failed and we were unable to recover it.
00:38:31.120 [2024-10-01 15:56:10.394812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.120 [2024-10-01 15:56:10.394846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.120 qpair failed and we were unable to recover it.
00:38:31.120 [2024-10-01 15:56:10.395191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.120 [2024-10-01 15:56:10.395221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.120 qpair failed and we were unable to recover it.
00:38:31.120 [2024-10-01 15:56:10.395560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.120 [2024-10-01 15:56:10.395587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.120 qpair failed and we were unable to recover it.
00:38:31.120 [2024-10-01 15:56:10.395928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.120 [2024-10-01 15:56:10.395957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.120 qpair failed and we were unable to recover it.
00:38:31.120 [2024-10-01 15:56:10.396231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.120 [2024-10-01 15:56:10.396258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.120 qpair failed and we were unable to recover it.
00:38:31.120 [2024-10-01 15:56:10.396610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.120 [2024-10-01 15:56:10.396638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.120 qpair failed and we were unable to recover it.
00:38:31.120 [2024-10-01 15:56:10.396979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.120 [2024-10-01 15:56:10.397008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.120 qpair failed and we were unable to recover it.
00:38:31.120 [2024-10-01 15:56:10.397349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.120 [2024-10-01 15:56:10.397377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.120 qpair failed and we were unable to recover it.
00:38:31.120 [2024-10-01 15:56:10.397771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.120 [2024-10-01 15:56:10.397799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.120 qpair failed and we were unable to recover it.
00:38:31.120 [2024-10-01 15:56:10.398064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.120 [2024-10-01 15:56:10.398094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.398445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.121 [2024-10-01 15:56:10.398473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.398824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.121 [2024-10-01 15:56:10.398852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.399231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.121 [2024-10-01 15:56:10.399260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.399547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.121 [2024-10-01 15:56:10.399574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.399919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.121 [2024-10-01 15:56:10.399948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.400289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.121 [2024-10-01 15:56:10.400317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.400657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.121 [2024-10-01 15:56:10.400685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.401019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.121 [2024-10-01 15:56:10.401048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.401415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.121 [2024-10-01 15:56:10.401444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.401731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.121 [2024-10-01 15:56:10.401761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.402126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.121 [2024-10-01 15:56:10.402155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.402373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.121 [2024-10-01 15:56:10.402403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.402748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.121 [2024-10-01 15:56:10.402777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.402937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.121 [2024-10-01 15:56:10.402965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.403227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.121 [2024-10-01 15:56:10.403255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.403570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.121 [2024-10-01 15:56:10.403599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.404002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.121 [2024-10-01 15:56:10.404032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.404323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.121 [2024-10-01 15:56:10.404350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.404681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.121 [2024-10-01 15:56:10.404708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.405045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.121 [2024-10-01 15:56:10.405073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.405451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.121 [2024-10-01 15:56:10.405478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.405789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.121 [2024-10-01 15:56:10.405817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.406159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.121 [2024-10-01 15:56:10.406189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.406535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.121 [2024-10-01 15:56:10.406562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.406911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.121 [2024-10-01 15:56:10.406940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.407278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.121 [2024-10-01 15:56:10.407307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.407532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.121 [2024-10-01 15:56:10.407564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.407759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.121 [2024-10-01 15:56:10.407786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.408156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.121 [2024-10-01 15:56:10.408185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.408562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.121 [2024-10-01 15:56:10.408590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.408917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.121 [2024-10-01 15:56:10.408947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.409202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.121 [2024-10-01 15:56:10.409234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.409556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.121 [2024-10-01 15:56:10.409584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.409969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.121 [2024-10-01 15:56:10.410000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.410343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.121 [2024-10-01 15:56:10.410371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.410610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.121 [2024-10-01 15:56:10.410641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.410878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.121 [2024-10-01 15:56:10.410924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.411285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.121 [2024-10-01 15:56:10.411312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.411666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.121 [2024-10-01 15:56:10.411693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.411975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.121 [2024-10-01 15:56:10.412005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.412360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.121 [2024-10-01 15:56:10.412389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.412733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.121 [2024-10-01 15:56:10.412761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.413030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.121 [2024-10-01 15:56:10.413058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.413402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.121 [2024-10-01 15:56:10.413429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.413799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.121 [2024-10-01 15:56:10.413826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.414032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.121 [2024-10-01 15:56:10.414059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.414408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.121 [2024-10-01 15:56:10.414435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.414729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.121 [2024-10-01 15:56:10.414757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.415090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.121 [2024-10-01 15:56:10.415120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.415440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.121 [2024-10-01 15:56:10.415468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.415762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.121 [2024-10-01 15:56:10.415789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.416124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.121 [2024-10-01 15:56:10.416154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.416501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.121 [2024-10-01 15:56:10.416529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.416817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.121 [2024-10-01 15:56:10.416851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.417103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.121 [2024-10-01 15:56:10.417132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.417476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.121 [2024-10-01 15:56:10.417503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.417837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.121 [2024-10-01 15:56:10.417865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.418208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.121 [2024-10-01 15:56:10.418236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.418506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.121 [2024-10-01 15:56:10.418533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.121 qpair failed and we were unable to recover it.
00:38:31.121 [2024-10-01 15:56:10.418877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.122 [2024-10-01 15:56:10.418912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.122 qpair failed and we were unable to recover it.
00:38:31.122 [2024-10-01 15:56:10.419258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.122 [2024-10-01 15:56:10.419286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.122 qpair failed and we were unable to recover it.
00:38:31.122 [2024-10-01 15:56:10.419643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.122 [2024-10-01 15:56:10.419670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.122 qpair failed and we were unable to recover it.
00:38:31.122 [2024-10-01 15:56:10.420021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.122 [2024-10-01 15:56:10.420051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.122 qpair failed and we were unable to recover it.
00:38:31.122 [2024-10-01 15:56:10.420398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.122 [2024-10-01 15:56:10.420425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.122 qpair failed and we were unable to recover it.
00:38:31.122 [2024-10-01 15:56:10.420646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.122 [2024-10-01 15:56:10.420673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.122 qpair failed and we were unable to recover it.
00:38:31.122 [2024-10-01 15:56:10.421028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.122 [2024-10-01 15:56:10.421057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.122 qpair failed and we were unable to recover it.
00:38:31.122 [2024-10-01 15:56:10.421413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.122 [2024-10-01 15:56:10.421442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.122 qpair failed and we were unable to recover it.
00:38:31.122 [2024-10-01 15:56:10.421786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.122 [2024-10-01 15:56:10.421813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.122 qpair failed and we were unable to recover it.
00:38:31.122 [2024-10-01 15:56:10.422033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.122 [2024-10-01 15:56:10.422062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.122 qpair failed and we were unable to recover it.
00:38:31.122 [2024-10-01 15:56:10.422291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.122 [2024-10-01 15:56:10.422318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.122 qpair failed and we were unable to recover it.
00:38:31.122 [2024-10-01 15:56:10.422678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.122 [2024-10-01 15:56:10.422705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.122 qpair failed and we were unable to recover it. 00:38:31.122 [2024-10-01 15:56:10.423036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.122 [2024-10-01 15:56:10.423065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.122 qpair failed and we were unable to recover it. 00:38:31.122 [2024-10-01 15:56:10.423312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.122 [2024-10-01 15:56:10.423340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.122 qpair failed and we were unable to recover it. 00:38:31.122 [2024-10-01 15:56:10.423699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.122 [2024-10-01 15:56:10.423727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.122 qpair failed and we were unable to recover it. 00:38:31.122 [2024-10-01 15:56:10.424063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.122 [2024-10-01 15:56:10.424092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.122 qpair failed and we were unable to recover it. 
00:38:31.122 [2024-10-01 15:56:10.424435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.122 [2024-10-01 15:56:10.424462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.122 qpair failed and we were unable to recover it. 00:38:31.122 [2024-10-01 15:56:10.424816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.122 [2024-10-01 15:56:10.424844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.122 qpair failed and we were unable to recover it. 00:38:31.122 [2024-10-01 15:56:10.425189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.122 [2024-10-01 15:56:10.425218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.122 qpair failed and we were unable to recover it. 00:38:31.122 [2024-10-01 15:56:10.425549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.122 [2024-10-01 15:56:10.425576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.122 qpair failed and we were unable to recover it. 00:38:31.122 [2024-10-01 15:56:10.425957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.122 [2024-10-01 15:56:10.425986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.122 qpair failed and we were unable to recover it. 
00:38:31.122 [2024-10-01 15:56:10.426322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.122 [2024-10-01 15:56:10.426355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.122 qpair failed and we were unable to recover it. 00:38:31.122 [2024-10-01 15:56:10.426672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.122 [2024-10-01 15:56:10.426700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.122 qpair failed and we were unable to recover it. 00:38:31.122 [2024-10-01 15:56:10.427062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.122 [2024-10-01 15:56:10.427091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.122 qpair failed and we were unable to recover it. 00:38:31.122 [2024-10-01 15:56:10.427327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.122 [2024-10-01 15:56:10.427353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.122 qpair failed and we were unable to recover it. 00:38:31.122 [2024-10-01 15:56:10.427591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.122 [2024-10-01 15:56:10.427618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.122 qpair failed and we were unable to recover it. 
00:38:31.122 [2024-10-01 15:56:10.427956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.122 [2024-10-01 15:56:10.427984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.122 qpair failed and we were unable to recover it. 00:38:31.122 [2024-10-01 15:56:10.428362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.122 [2024-10-01 15:56:10.428391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.122 qpair failed and we were unable to recover it. 00:38:31.122 [2024-10-01 15:56:10.428716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.122 [2024-10-01 15:56:10.428744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.122 qpair failed and we were unable to recover it. 00:38:31.122 [2024-10-01 15:56:10.429093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.122 [2024-10-01 15:56:10.429121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.122 qpair failed and we were unable to recover it. 00:38:31.122 [2024-10-01 15:56:10.429492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.122 [2024-10-01 15:56:10.429520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.122 qpair failed and we were unable to recover it. 
00:38:31.122 [2024-10-01 15:56:10.429849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.122 [2024-10-01 15:56:10.429876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.122 qpair failed and we were unable to recover it. 00:38:31.122 [2024-10-01 15:56:10.430149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.122 [2024-10-01 15:56:10.430177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.122 qpair failed and we were unable to recover it. 00:38:31.122 [2024-10-01 15:56:10.430502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.122 [2024-10-01 15:56:10.430530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.122 qpair failed and we were unable to recover it. 00:38:31.122 [2024-10-01 15:56:10.430797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.122 [2024-10-01 15:56:10.430824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.122 qpair failed and we were unable to recover it. 00:38:31.122 [2024-10-01 15:56:10.431172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.122 [2024-10-01 15:56:10.431201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.122 qpair failed and we were unable to recover it. 
00:38:31.122 [2024-10-01 15:56:10.431535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.122 [2024-10-01 15:56:10.431563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.122 qpair failed and we were unable to recover it. 00:38:31.122 [2024-10-01 15:56:10.431917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.122 [2024-10-01 15:56:10.431946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.122 qpair failed and we were unable to recover it. 00:38:31.122 [2024-10-01 15:56:10.432298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.122 [2024-10-01 15:56:10.432325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.122 qpair failed and we were unable to recover it. 00:38:31.122 [2024-10-01 15:56:10.432754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.122 [2024-10-01 15:56:10.432781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.122 qpair failed and we were unable to recover it. 00:38:31.122 [2024-10-01 15:56:10.433108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.122 [2024-10-01 15:56:10.433137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.122 qpair failed and we were unable to recover it. 
00:38:31.122 [2024-10-01 15:56:10.433487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.122 [2024-10-01 15:56:10.433514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.122 qpair failed and we were unable to recover it. 00:38:31.122 [2024-10-01 15:56:10.433848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.122 [2024-10-01 15:56:10.433876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.122 qpair failed and we were unable to recover it. 00:38:31.122 [2024-10-01 15:56:10.434115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.122 [2024-10-01 15:56:10.434142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.122 qpair failed and we were unable to recover it. 00:38:31.122 [2024-10-01 15:56:10.434452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.122 [2024-10-01 15:56:10.434480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.122 qpair failed and we were unable to recover it. 00:38:31.122 [2024-10-01 15:56:10.434781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.122 [2024-10-01 15:56:10.434809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.122 qpair failed and we were unable to recover it. 
00:38:31.122 [2024-10-01 15:56:10.435144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.122 [2024-10-01 15:56:10.435174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.122 qpair failed and we were unable to recover it. 00:38:31.122 [2024-10-01 15:56:10.435508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.122 [2024-10-01 15:56:10.435536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.122 qpair failed and we were unable to recover it. 00:38:31.122 [2024-10-01 15:56:10.435790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.122 [2024-10-01 15:56:10.435820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.122 qpair failed and we were unable to recover it. 00:38:31.122 [2024-10-01 15:56:10.436158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.122 [2024-10-01 15:56:10.436188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.122 qpair failed and we were unable to recover it. 00:38:31.122 [2024-10-01 15:56:10.436542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.122 [2024-10-01 15:56:10.436570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.122 qpair failed and we were unable to recover it. 
00:38:31.122 [2024-10-01 15:56:10.436908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.122 [2024-10-01 15:56:10.436937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.122 qpair failed and we were unable to recover it. 00:38:31.122 [2024-10-01 15:56:10.437271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.122 [2024-10-01 15:56:10.437299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.122 qpair failed and we were unable to recover it. 00:38:31.122 [2024-10-01 15:56:10.437658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.122 [2024-10-01 15:56:10.437686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.122 qpair failed and we were unable to recover it. 00:38:31.122 [2024-10-01 15:56:10.437923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.122 [2024-10-01 15:56:10.437951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.122 qpair failed and we were unable to recover it. 00:38:31.122 [2024-10-01 15:56:10.438282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.438309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 
00:38:31.123 [2024-10-01 15:56:10.438666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.438694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 00:38:31.123 [2024-10-01 15:56:10.439056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.439084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 00:38:31.123 [2024-10-01 15:56:10.439397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.439425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 00:38:31.123 [2024-10-01 15:56:10.439779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.439807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 00:38:31.123 [2024-10-01 15:56:10.440094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.440123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 
00:38:31.123 [2024-10-01 15:56:10.440443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.440471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 00:38:31.123 [2024-10-01 15:56:10.440817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.440845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 00:38:31.123 [2024-10-01 15:56:10.441210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.441239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 00:38:31.123 [2024-10-01 15:56:10.441559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.441593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 00:38:31.123 [2024-10-01 15:56:10.441842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.441870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 
00:38:31.123 [2024-10-01 15:56:10.442237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.442266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 00:38:31.123 [2024-10-01 15:56:10.442515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.442543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 00:38:31.123 [2024-10-01 15:56:10.442862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.442890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 00:38:31.123 [2024-10-01 15:56:10.443131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.443158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 00:38:31.123 [2024-10-01 15:56:10.443477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.443504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 
00:38:31.123 [2024-10-01 15:56:10.443817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.443844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 00:38:31.123 [2024-10-01 15:56:10.444220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.444249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 00:38:31.123 [2024-10-01 15:56:10.444584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.444612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 00:38:31.123 [2024-10-01 15:56:10.444973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.445002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 00:38:31.123 [2024-10-01 15:56:10.445363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.445391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 
00:38:31.123 [2024-10-01 15:56:10.445731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.445759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 00:38:31.123 [2024-10-01 15:56:10.446096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.446124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 00:38:31.123 [2024-10-01 15:56:10.446456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.446484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 00:38:31.123 [2024-10-01 15:56:10.446841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.446868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 00:38:31.123 [2024-10-01 15:56:10.447154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.447181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 
00:38:31.123 [2024-10-01 15:56:10.447526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.447554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 00:38:31.123 [2024-10-01 15:56:10.447906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.447935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 00:38:31.123 [2024-10-01 15:56:10.448257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.448292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 00:38:31.123 [2024-10-01 15:56:10.448497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.448526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 00:38:31.123 [2024-10-01 15:56:10.448745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.448773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 
00:38:31.123 [2024-10-01 15:56:10.448996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.449030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 00:38:31.123 [2024-10-01 15:56:10.449364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.449392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 00:38:31.123 [2024-10-01 15:56:10.449737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.449765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 00:38:31.123 [2024-10-01 15:56:10.450109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.450143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 00:38:31.123 [2024-10-01 15:56:10.450522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.450550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 
00:38:31.123 [2024-10-01 15:56:10.450952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.450981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 00:38:31.123 [2024-10-01 15:56:10.451359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.451388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 00:38:31.123 [2024-10-01 15:56:10.451718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.451746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 00:38:31.123 [2024-10-01 15:56:10.452084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.452112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 00:38:31.123 [2024-10-01 15:56:10.452442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.452470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 
00:38:31.123 [2024-10-01 15:56:10.452815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.452843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 00:38:31.123 [2024-10-01 15:56:10.453171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.453200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 00:38:31.123 [2024-10-01 15:56:10.453539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.453567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 00:38:31.123 [2024-10-01 15:56:10.453918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.453947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 00:38:31.123 [2024-10-01 15:56:10.454276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.454305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 
00:38:31.123 [2024-10-01 15:56:10.454645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.454672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 00:38:31.123 [2024-10-01 15:56:10.455010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.455038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 00:38:31.123 [2024-10-01 15:56:10.455403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.455433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 00:38:31.123 [2024-10-01 15:56:10.455684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.455711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 00:38:31.123 [2024-10-01 15:56:10.455941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.455969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 
00:38:31.123 [2024-10-01 15:56:10.456206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.456234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 00:38:31.123 [2024-10-01 15:56:10.456608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.456637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 00:38:31.123 [2024-10-01 15:56:10.456995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.457023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 00:38:31.123 [2024-10-01 15:56:10.457369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.457396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 00:38:31.123 [2024-10-01 15:56:10.457719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.457747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 
00:38:31.123 [2024-10-01 15:56:10.458108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.458138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 00:38:31.123 [2024-10-01 15:56:10.458415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.458442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 00:38:31.123 [2024-10-01 15:56:10.458750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.458777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 00:38:31.123 [2024-10-01 15:56:10.459085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.459113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 00:38:31.123 [2024-10-01 15:56:10.459342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.459369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 
00:38:31.123 [2024-10-01 15:56:10.459613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.459647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 00:38:31.123 [2024-10-01 15:56:10.460007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.123 [2024-10-01 15:56:10.460036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.123 qpair failed and we were unable to recover it. 00:38:31.123 [2024-10-01 15:56:10.460360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.460387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 00:38:31.124 [2024-10-01 15:56:10.460643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.460671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 00:38:31.124 [2024-10-01 15:56:10.460975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.461004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 
00:38:31.124 [2024-10-01 15:56:10.461335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.461363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 00:38:31.124 [2024-10-01 15:56:10.461741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.461769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 00:38:31.124 [2024-10-01 15:56:10.462177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.462207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 00:38:31.124 [2024-10-01 15:56:10.462451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.462478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 00:38:31.124 [2024-10-01 15:56:10.462707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.462735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 
00:38:31.124 [2024-10-01 15:56:10.462967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.462996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 00:38:31.124 [2024-10-01 15:56:10.463362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.463389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 00:38:31.124 [2024-10-01 15:56:10.463719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.463747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 00:38:31.124 [2024-10-01 15:56:10.464046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.464075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 00:38:31.124 [2024-10-01 15:56:10.464447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.464475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 
00:38:31.124 [2024-10-01 15:56:10.464802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.464830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 00:38:31.124 [2024-10-01 15:56:10.465211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.465241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 00:38:31.124 [2024-10-01 15:56:10.465569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.465598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 00:38:31.124 [2024-10-01 15:56:10.465940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.465969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 00:38:31.124 [2024-10-01 15:56:10.466324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.466352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 
00:38:31.124 [2024-10-01 15:56:10.466693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.466721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 00:38:31.124 [2024-10-01 15:56:10.466969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.466997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 00:38:31.124 [2024-10-01 15:56:10.467251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.467278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 00:38:31.124 [2024-10-01 15:56:10.467525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.467557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 00:38:31.124 [2024-10-01 15:56:10.467888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.467925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 
00:38:31.124 [2024-10-01 15:56:10.468199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.468227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 00:38:31.124 [2024-10-01 15:56:10.468579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.468607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 00:38:31.124 [2024-10-01 15:56:10.468946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.468975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 00:38:31.124 [2024-10-01 15:56:10.469155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.469183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 00:38:31.124 [2024-10-01 15:56:10.469522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.469550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 
00:38:31.124 [2024-10-01 15:56:10.469767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.469794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 00:38:31.124 [2024-10-01 15:56:10.470047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.470077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 00:38:31.124 [2024-10-01 15:56:10.470347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.470374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 00:38:31.124 [2024-10-01 15:56:10.470754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.470782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 00:38:31.124 [2024-10-01 15:56:10.470985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.471014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 
00:38:31.124 [2024-10-01 15:56:10.471242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.471269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 00:38:31.124 [2024-10-01 15:56:10.471631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.471659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 00:38:31.124 [2024-10-01 15:56:10.472032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.472061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 00:38:31.124 [2024-10-01 15:56:10.472479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.472507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 00:38:31.124 [2024-10-01 15:56:10.472851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.472879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 
00:38:31.124 [2024-10-01 15:56:10.473139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.473167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 00:38:31.124 [2024-10-01 15:56:10.473403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.473433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 00:38:31.124 [2024-10-01 15:56:10.473754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.473783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 00:38:31.124 [2024-10-01 15:56:10.475345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.475399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 00:38:31.124 [2024-10-01 15:56:10.475765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.475794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 
00:38:31.124 [2024-10-01 15:56:10.476047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.476080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 00:38:31.124 [2024-10-01 15:56:10.476414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.476444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 00:38:31.124 [2024-10-01 15:56:10.476661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.476691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 00:38:31.124 [2024-10-01 15:56:10.477048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.477078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 00:38:31.124 [2024-10-01 15:56:10.477298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.477329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 
00:38:31.124 [2024-10-01 15:56:10.477683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.477712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 00:38:31.124 [2024-10-01 15:56:10.478056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.478085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 00:38:31.124 [2024-10-01 15:56:10.478453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.478481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 00:38:31.124 [2024-10-01 15:56:10.478799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.478828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 00:38:31.124 [2024-10-01 15:56:10.479186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.479215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 
00:38:31.124 [2024-10-01 15:56:10.479554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.479584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 00:38:31.124 [2024-10-01 15:56:10.479929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.479958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 00:38:31.124 [2024-10-01 15:56:10.480279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.480308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 00:38:31.124 [2024-10-01 15:56:10.480677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.480706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 00:38:31.124 [2024-10-01 15:56:10.481054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.481083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 
00:38:31.124 [2024-10-01 15:56:10.481416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.481444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 00:38:31.124 [2024-10-01 15:56:10.481696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.481723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 00:38:31.124 [2024-10-01 15:56:10.482060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.124 [2024-10-01 15:56:10.482090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.124 qpair failed and we were unable to recover it. 00:38:31.124 [2024-10-01 15:56:10.482442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.125 [2024-10-01 15:56:10.482471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.125 qpair failed and we were unable to recover it. 00:38:31.125 [2024-10-01 15:56:10.482691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.125 [2024-10-01 15:56:10.482719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.125 qpair failed and we were unable to recover it. 
00:38:31.125 [2024-10-01 15:56:10.483062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.125 [2024-10-01 15:56:10.483091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.125 qpair failed and we were unable to recover it. 00:38:31.125 [2024-10-01 15:56:10.483441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.125 [2024-10-01 15:56:10.483470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.125 qpair failed and we were unable to recover it. 00:38:31.125 [2024-10-01 15:56:10.483714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.125 [2024-10-01 15:56:10.483742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.125 qpair failed and we were unable to recover it. 00:38:31.125 [2024-10-01 15:56:10.484089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.125 [2024-10-01 15:56:10.484124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.125 qpair failed and we were unable to recover it. 00:38:31.125 [2024-10-01 15:56:10.484470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.125 [2024-10-01 15:56:10.484499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.125 qpair failed and we were unable to recover it. 
00:38:31.125 [2024-10-01 15:56:10.484840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.125 [2024-10-01 15:56:10.484868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.125 qpair failed and we were unable to recover it. 00:38:31.125 [2024-10-01 15:56:10.485281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.125 [2024-10-01 15:56:10.485311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.125 qpair failed and we were unable to recover it. 00:38:31.125 [2024-10-01 15:56:10.485528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.125 [2024-10-01 15:56:10.485555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.125 qpair failed and we were unable to recover it. 00:38:31.125 [2024-10-01 15:56:10.485891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.125 [2024-10-01 15:56:10.485930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.125 qpair failed and we were unable to recover it. 00:38:31.125 [2024-10-01 15:56:10.486282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.125 [2024-10-01 15:56:10.486310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.125 qpair failed and we were unable to recover it. 
00:38:31.125 [2024-10-01 15:56:10.486656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.125 [2024-10-01 15:56:10.486684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.125 qpair failed and we were unable to recover it. 00:38:31.125 [2024-10-01 15:56:10.486994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.125 [2024-10-01 15:56:10.487022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.125 qpair failed and we were unable to recover it. 00:38:31.125 [2024-10-01 15:56:10.487379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.125 [2024-10-01 15:56:10.487407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.125 qpair failed and we were unable to recover it. 00:38:31.125 [2024-10-01 15:56:10.487625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.125 [2024-10-01 15:56:10.487656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.125 qpair failed and we were unable to recover it. 00:38:31.125 [2024-10-01 15:56:10.488226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.125 [2024-10-01 15:56:10.488263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.125 qpair failed and we were unable to recover it. 
00:38:31.125 [2024-10-01 15:56:10.488611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.125 [2024-10-01 15:56:10.488646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.125 qpair failed and we were unable to recover it. 00:38:31.125 [2024-10-01 15:56:10.488998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.125 [2024-10-01 15:56:10.489027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.125 qpair failed and we were unable to recover it. 00:38:31.125 [2024-10-01 15:56:10.489377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.125 [2024-10-01 15:56:10.489406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.125 qpair failed and we were unable to recover it. 00:38:31.125 [2024-10-01 15:56:10.489722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.125 [2024-10-01 15:56:10.489750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.125 qpair failed and we were unable to recover it. 00:38:31.125 [2024-10-01 15:56:10.490078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.125 [2024-10-01 15:56:10.490107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.125 qpair failed and we were unable to recover it. 
00:38:31.125 [2024-10-01 15:56:10.490345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.125 [2024-10-01 15:56:10.490371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.125 qpair failed and we were unable to recover it. 00:38:31.125 [2024-10-01 15:56:10.490603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.125 [2024-10-01 15:56:10.490631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.125 qpair failed and we were unable to recover it. 00:38:31.125 [2024-10-01 15:56:10.491003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.125 [2024-10-01 15:56:10.491032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.125 qpair failed and we were unable to recover it. 00:38:31.125 [2024-10-01 15:56:10.491328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.125 [2024-10-01 15:56:10.491356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.125 qpair failed and we were unable to recover it. 00:38:31.125 [2024-10-01 15:56:10.491690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.125 [2024-10-01 15:56:10.491718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.125 qpair failed and we were unable to recover it. 
00:38:31.125 [2024-10-01 15:56:10.492036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.125 [2024-10-01 15:56:10.492067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.125 qpair failed and we were unable to recover it. 00:38:31.125 [2024-10-01 15:56:10.492428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.125 [2024-10-01 15:56:10.492455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.125 qpair failed and we were unable to recover it. 00:38:31.125 [2024-10-01 15:56:10.492799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.125 [2024-10-01 15:56:10.492826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.125 qpair failed and we were unable to recover it. 00:38:31.125 [2024-10-01 15:56:10.493188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.125 [2024-10-01 15:56:10.493217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.125 qpair failed and we were unable to recover it. 00:38:31.125 [2024-10-01 15:56:10.493600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.125 [2024-10-01 15:56:10.493628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.125 qpair failed and we were unable to recover it. 
00:38:31.125 [2024-10-01 15:56:10.493828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.125 [2024-10-01 15:56:10.493861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.125 qpair failed and we were unable to recover it. 00:38:31.125 [2024-10-01 15:56:10.494265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.125 [2024-10-01 15:56:10.494295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.125 qpair failed and we were unable to recover it. 00:38:31.125 [2024-10-01 15:56:10.494508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.125 [2024-10-01 15:56:10.494535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.125 qpair failed and we were unable to recover it. 00:38:31.125 [2024-10-01 15:56:10.494902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.125 [2024-10-01 15:56:10.494931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.125 qpair failed and we were unable to recover it. 00:38:31.125 [2024-10-01 15:56:10.495201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.125 [2024-10-01 15:56:10.495229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.125 qpair failed and we were unable to recover it. 
00:38:31.125 [2024-10-01 15:56:10.495462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.125 [2024-10-01 15:56:10.495490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.125 qpair failed and we were unable to recover it. 00:38:31.125 [2024-10-01 15:56:10.495832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.125 [2024-10-01 15:56:10.495860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.125 qpair failed and we were unable to recover it. 00:38:31.125 [2024-10-01 15:56:10.496269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.125 [2024-10-01 15:56:10.496299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.125 qpair failed and we were unable to recover it. 00:38:31.125 [2024-10-01 15:56:10.496626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.125 [2024-10-01 15:56:10.496655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.125 qpair failed and we were unable to recover it. 00:38:31.125 [2024-10-01 15:56:10.496904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.125 [2024-10-01 15:56:10.496934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.125 qpair failed and we were unable to recover it. 
00:38:31.125 [2024-10-01 15:56:10.497272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.125 [2024-10-01 15:56:10.497301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.125 qpair failed and we were unable to recover it. 00:38:31.125 [2024-10-01 15:56:10.497641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.125 [2024-10-01 15:56:10.497668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.125 qpair failed and we were unable to recover it. 00:38:31.125 [2024-10-01 15:56:10.498020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.125 [2024-10-01 15:56:10.498049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.125 qpair failed and we were unable to recover it. 00:38:31.125 [2024-10-01 15:56:10.498348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.125 [2024-10-01 15:56:10.498376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.125 qpair failed and we were unable to recover it. 00:38:31.125 [2024-10-01 15:56:10.498576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.125 [2024-10-01 15:56:10.498604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.125 qpair failed and we were unable to recover it. 
00:38:31.125 [2024-10-01 15:56:10.498972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.125 [2024-10-01 15:56:10.499001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.125 qpair failed and we were unable to recover it. 00:38:31.125 [2024-10-01 15:56:10.499354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.125 [2024-10-01 15:56:10.499382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.125 qpair failed and we were unable to recover it. 00:38:31.125 [2024-10-01 15:56:10.499721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.125 [2024-10-01 15:56:10.499749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.125 qpair failed and we were unable to recover it. 00:38:31.125 [2024-10-01 15:56:10.499974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.125 [2024-10-01 15:56:10.500005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.125 qpair failed and we were unable to recover it. 00:38:31.125 [2024-10-01 15:56:10.500151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.125 [2024-10-01 15:56:10.500178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.125 qpair failed and we were unable to recover it. 
00:38:31.125 [2024-10-01 15:56:10.500535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.125 [2024-10-01 15:56:10.500564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.125 qpair failed and we were unable to recover it. 00:38:31.125 [2024-10-01 15:56:10.500852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.125 [2024-10-01 15:56:10.500882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.125 qpair failed and we were unable to recover it. 00:38:31.125 [2024-10-01 15:56:10.501236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.125 [2024-10-01 15:56:10.501266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.125 qpair failed and we were unable to recover it. 00:38:31.125 [2024-10-01 15:56:10.501501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.125 [2024-10-01 15:56:10.501531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 00:38:31.126 [2024-10-01 15:56:10.501761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.501790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 
00:38:31.126 [2024-10-01 15:56:10.502045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.502075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 00:38:31.126 [2024-10-01 15:56:10.502417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.502445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 00:38:31.126 [2024-10-01 15:56:10.502666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.502701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 00:38:31.126 [2024-10-01 15:56:10.502930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.502959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 00:38:31.126 [2024-10-01 15:56:10.503166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.503194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 
00:38:31.126 [2024-10-01 15:56:10.503567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.503595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 00:38:31.126 [2024-10-01 15:56:10.503943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.503971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 00:38:31.126 [2024-10-01 15:56:10.504310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.504338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 00:38:31.126 [2024-10-01 15:56:10.504686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.504714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 00:38:31.126 [2024-10-01 15:56:10.504927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.504955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 
00:38:31.126 [2024-10-01 15:56:10.505192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.505227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 00:38:31.126 [2024-10-01 15:56:10.505569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.505597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 00:38:31.126 [2024-10-01 15:56:10.505953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.505983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 00:38:31.126 [2024-10-01 15:56:10.506213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.506240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 00:38:31.126 [2024-10-01 15:56:10.506580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.506608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 
00:38:31.126 [2024-10-01 15:56:10.506945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.506974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 00:38:31.126 [2024-10-01 15:56:10.507322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.507352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 00:38:31.126 [2024-10-01 15:56:10.507669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.507698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 00:38:31.126 [2024-10-01 15:56:10.508053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.508083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 00:38:31.126 [2024-10-01 15:56:10.508408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.508437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 
00:38:31.126 [2024-10-01 15:56:10.508773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.508801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 00:38:31.126 [2024-10-01 15:56:10.509157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.509187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 00:38:31.126 [2024-10-01 15:56:10.509531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.509560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 00:38:31.126 [2024-10-01 15:56:10.509859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.509888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 00:38:31.126 [2024-10-01 15:56:10.510235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.510264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 
00:38:31.126 [2024-10-01 15:56:10.510587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.510615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 00:38:31.126 [2024-10-01 15:56:10.510969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.510999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 00:38:31.126 [2024-10-01 15:56:10.511302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.511331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 00:38:31.126 [2024-10-01 15:56:10.511675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.511703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 00:38:31.126 [2024-10-01 15:56:10.512085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.512114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 
00:38:31.126 [2024-10-01 15:56:10.512475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.512504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 00:38:31.126 [2024-10-01 15:56:10.512845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.512873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 00:38:31.126 [2024-10-01 15:56:10.513243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.513273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 00:38:31.126 [2024-10-01 15:56:10.513475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.513502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 00:38:31.126 [2024-10-01 15:56:10.513858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.513886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 
00:38:31.126 [2024-10-01 15:56:10.514239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.514267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 00:38:31.126 [2024-10-01 15:56:10.514628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.514656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 00:38:31.126 [2024-10-01 15:56:10.515006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.515035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 00:38:31.126 [2024-10-01 15:56:10.515413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.515442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 00:38:31.126 [2024-10-01 15:56:10.515800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.515828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 
00:38:31.126 [2024-10-01 15:56:10.516107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.516135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 00:38:31.126 [2024-10-01 15:56:10.516463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.516491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 00:38:31.126 [2024-10-01 15:56:10.516848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.516876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 00:38:31.126 [2024-10-01 15:56:10.517127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.517156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 00:38:31.126 [2024-10-01 15:56:10.517449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.517477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 
00:38:31.126 [2024-10-01 15:56:10.517699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.517730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 00:38:31.126 [2024-10-01 15:56:10.518071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.518101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 00:38:31.126 [2024-10-01 15:56:10.518420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.518448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 00:38:31.126 [2024-10-01 15:56:10.518661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.518690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 00:38:31.126 [2024-10-01 15:56:10.519060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.519090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 
00:38:31.126 [2024-10-01 15:56:10.519408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.519436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 00:38:31.126 [2024-10-01 15:56:10.519808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.519836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 00:38:31.126 [2024-10-01 15:56:10.520180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.520208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 00:38:31.126 [2024-10-01 15:56:10.520582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.520610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 00:38:31.126 [2024-10-01 15:56:10.520962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.520992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 
00:38:31.126 [2024-10-01 15:56:10.521252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.521279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 00:38:31.126 [2024-10-01 15:56:10.521529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.521558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 00:38:31.126 [2024-10-01 15:56:10.521937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.521967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 00:38:31.126 [2024-10-01 15:56:10.522322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.522350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 00:38:31.126 [2024-10-01 15:56:10.522698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.522725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 
00:38:31.126 [2024-10-01 15:56:10.522976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.523006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 00:38:31.126 [2024-10-01 15:56:10.523346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.523374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 00:38:31.126 [2024-10-01 15:56:10.523688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.126 [2024-10-01 15:56:10.523716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.126 qpair failed and we were unable to recover it. 00:38:31.126 [2024-10-01 15:56:10.524078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.524107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 00:38:31.127 [2024-10-01 15:56:10.524443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.524470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 
00:38:31.127 [2024-10-01 15:56:10.524832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.524861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 00:38:31.127 [2024-10-01 15:56:10.525231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.525261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 00:38:31.127 [2024-10-01 15:56:10.525611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.525639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 00:38:31.127 [2024-10-01 15:56:10.526034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.526062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 00:38:31.127 [2024-10-01 15:56:10.526396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.526424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 
00:38:31.127 [2024-10-01 15:56:10.526764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.526797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 00:38:31.127 [2024-10-01 15:56:10.527048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.527077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 00:38:31.127 [2024-10-01 15:56:10.527415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.527443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 00:38:31.127 [2024-10-01 15:56:10.527650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.527677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 00:38:31.127 [2024-10-01 15:56:10.528038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.528067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 
00:38:31.127 [2024-10-01 15:56:10.528419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.528446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 00:38:31.127 [2024-10-01 15:56:10.528870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.528905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 00:38:31.127 [2024-10-01 15:56:10.529271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.529299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 00:38:31.127 [2024-10-01 15:56:10.529555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.529583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 00:38:31.127 [2024-10-01 15:56:10.529923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.529951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 
00:38:31.127 [2024-10-01 15:56:10.530302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.530330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 00:38:31.127 [2024-10-01 15:56:10.530563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.530591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 00:38:31.127 [2024-10-01 15:56:10.530814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.530843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 00:38:31.127 [2024-10-01 15:56:10.531197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.531227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 00:38:31.127 [2024-10-01 15:56:10.531570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.531598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 
00:38:31.127 [2024-10-01 15:56:10.531944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.531973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 00:38:31.127 [2024-10-01 15:56:10.532324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.532352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 00:38:31.127 [2024-10-01 15:56:10.532671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.532699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 00:38:31.127 [2024-10-01 15:56:10.533026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.533055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 00:38:31.127 [2024-10-01 15:56:10.533428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.533456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 
00:38:31.127 [2024-10-01 15:56:10.533799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.533827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 00:38:31.127 [2024-10-01 15:56:10.534160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.534189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 00:38:31.127 [2024-10-01 15:56:10.534526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.534554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 00:38:31.127 [2024-10-01 15:56:10.534911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.534941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 00:38:31.127 [2024-10-01 15:56:10.535288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.535316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 
00:38:31.127 [2024-10-01 15:56:10.535657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.535685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 00:38:31.127 [2024-10-01 15:56:10.536036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.536065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 00:38:31.127 [2024-10-01 15:56:10.536427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.536460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 00:38:31.127 [2024-10-01 15:56:10.536843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.536871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 00:38:31.127 [2024-10-01 15:56:10.537192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.537222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 
00:38:31.127 [2024-10-01 15:56:10.537590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.537619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 00:38:31.127 [2024-10-01 15:56:10.537928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.537959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 00:38:31.127 [2024-10-01 15:56:10.538312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.538340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 00:38:31.127 [2024-10-01 15:56:10.538690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.538718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 00:38:31.127 [2024-10-01 15:56:10.538939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.538967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 
00:38:31.127 [2024-10-01 15:56:10.539080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.539106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 00:38:31.127 [2024-10-01 15:56:10.539327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.539354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 00:38:31.127 [2024-10-01 15:56:10.539619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.539647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 00:38:31.127 [2024-10-01 15:56:10.540025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.540054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 00:38:31.127 [2024-10-01 15:56:10.540371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.540399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 
00:38:31.127 [2024-10-01 15:56:10.540751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.540779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 00:38:31.127 [2024-10-01 15:56:10.541121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.541150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 00:38:31.127 [2024-10-01 15:56:10.541370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.541400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 00:38:31.127 [2024-10-01 15:56:10.541747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.541774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 00:38:31.127 [2024-10-01 15:56:10.542006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.542035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 
00:38:31.127 [2024-10-01 15:56:10.542375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.542403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 00:38:31.127 [2024-10-01 15:56:10.542708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.542736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 00:38:31.127 [2024-10-01 15:56:10.543094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.543123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 00:38:31.127 [2024-10-01 15:56:10.543446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.543475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 00:38:31.127 [2024-10-01 15:56:10.543766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.543793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 
00:38:31.127 [2024-10-01 15:56:10.544130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.544159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 00:38:31.127 [2024-10-01 15:56:10.544480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.544510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 00:38:31.127 [2024-10-01 15:56:10.544851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.127 [2024-10-01 15:56:10.544879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.127 qpair failed and we were unable to recover it. 00:38:31.127 [2024-10-01 15:56:10.545251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.128 [2024-10-01 15:56:10.545280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.128 qpair failed and we were unable to recover it. 00:38:31.128 [2024-10-01 15:56:10.545628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.128 [2024-10-01 15:56:10.545656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.128 qpair failed and we were unable to recover it. 
00:38:31.128 [2024-10-01 15:56:10.546003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.128 [2024-10-01 15:56:10.546032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.128 qpair failed and we were unable to recover it. 00:38:31.128 [2024-10-01 15:56:10.546401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.128 [2024-10-01 15:56:10.546429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.128 qpair failed and we were unable to recover it. 00:38:31.128 [2024-10-01 15:56:10.546781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.128 [2024-10-01 15:56:10.546810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.128 qpair failed and we were unable to recover it. 00:38:31.128 [2024-10-01 15:56:10.547171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.128 [2024-10-01 15:56:10.547200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.128 qpair failed and we were unable to recover it. 00:38:31.128 [2024-10-01 15:56:10.547532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.128 [2024-10-01 15:56:10.547560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.128 qpair failed and we were unable to recover it. 
00:38:31.128 [2024-10-01 15:56:10.547913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.128 [2024-10-01 15:56:10.547942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.128 qpair failed and we were unable to recover it. 00:38:31.128 [2024-10-01 15:56:10.548281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.128 [2024-10-01 15:56:10.548309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.128 qpair failed and we were unable to recover it. 00:38:31.128 [2024-10-01 15:56:10.548640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.128 [2024-10-01 15:56:10.548668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.128 qpair failed and we were unable to recover it. 00:38:31.128 [2024-10-01 15:56:10.549014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.128 [2024-10-01 15:56:10.549043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.128 qpair failed and we were unable to recover it. 00:38:31.128 [2024-10-01 15:56:10.549394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.128 [2024-10-01 15:56:10.549421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.128 qpair failed and we were unable to recover it. 
00:38:31.128 [2024-10-01 15:56:10.549796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.128 [2024-10-01 15:56:10.549825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.128 qpair failed and we were unable to recover it. 00:38:31.128 [2024-10-01 15:56:10.550201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.128 [2024-10-01 15:56:10.550229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.128 qpair failed and we were unable to recover it. 00:38:31.128 [2024-10-01 15:56:10.550552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.128 [2024-10-01 15:56:10.550579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.128 qpair failed and we were unable to recover it. 00:38:31.128 [2024-10-01 15:56:10.550875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.128 [2024-10-01 15:56:10.550912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.128 qpair failed and we were unable to recover it. 00:38:31.128 [2024-10-01 15:56:10.551243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.128 [2024-10-01 15:56:10.551271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.128 qpair failed and we were unable to recover it. 
00:38:31.128 [2024-10-01 15:56:10.551614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.128 [2024-10-01 15:56:10.551641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.128 qpair failed and we were unable to recover it. 00:38:31.128 [2024-10-01 15:56:10.551986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.128 [2024-10-01 15:56:10.552016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.128 qpair failed and we were unable to recover it. 00:38:31.128 [2024-10-01 15:56:10.552356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.128 [2024-10-01 15:56:10.552384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.128 qpair failed and we were unable to recover it. 00:38:31.128 [2024-10-01 15:56:10.552690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.128 [2024-10-01 15:56:10.552718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.128 qpair failed and we were unable to recover it. 00:38:31.128 [2024-10-01 15:56:10.552942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.128 [2024-10-01 15:56:10.552970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.128 qpair failed and we were unable to recover it. 
00:38:31.128 [2024-10-01 15:56:10.553377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.128 [2024-10-01 15:56:10.553404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.128 qpair failed and we were unable to recover it.
00:38:31.128 [2024-10-01 15:56:10.553633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.128 [2024-10-01 15:56:10.553660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.128 qpair failed and we were unable to recover it.
00:38:31.128 [2024-10-01 15:56:10.554012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.128 [2024-10-01 15:56:10.554041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.128 qpair failed and we were unable to recover it.
00:38:31.128 [2024-10-01 15:56:10.554382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.128 [2024-10-01 15:56:10.554409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.128 qpair failed and we were unable to recover it.
00:38:31.128 [2024-10-01 15:56:10.554731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.128 [2024-10-01 15:56:10.554761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.128 qpair failed and we were unable to recover it.
00:38:31.400 [2024-10-01 15:56:10.556931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.400 [2024-10-01 15:56:10.556990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.400 qpair failed and we were unable to recover it.
00:38:31.400 [2024-10-01 15:56:10.557362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.400 [2024-10-01 15:56:10.557392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.400 qpair failed and we were unable to recover it.
00:38:31.400 [2024-10-01 15:56:10.557741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.400 [2024-10-01 15:56:10.557770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.400 qpair failed and we were unable to recover it.
00:38:31.400 [2024-10-01 15:56:10.558132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.400 [2024-10-01 15:56:10.558163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.400 qpair failed and we were unable to recover it.
00:38:31.400 [2024-10-01 15:56:10.558510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.400 [2024-10-01 15:56:10.558538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.400 qpair failed and we were unable to recover it.
00:38:31.400 [2024-10-01 15:56:10.558787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.400 [2024-10-01 15:56:10.558815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.400 qpair failed and we were unable to recover it.
00:38:31.400 [2024-10-01 15:56:10.559202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.400 [2024-10-01 15:56:10.559231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.400 qpair failed and we were unable to recover it.
00:38:31.400 [2024-10-01 15:56:10.559570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.400 [2024-10-01 15:56:10.559598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.400 qpair failed and we were unable to recover it.
00:38:31.400 [2024-10-01 15:56:10.559826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.400 [2024-10-01 15:56:10.559854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.400 qpair failed and we were unable to recover it.
00:38:31.400 [2024-10-01 15:56:10.560195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.400 [2024-10-01 15:56:10.560224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.400 qpair failed and we were unable to recover it.
00:38:31.400 [2024-10-01 15:56:10.560566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.560595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.560824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.560852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.561197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.561228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.561576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.561604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.561943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.561973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.562391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.562425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.562750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.562779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.563131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.563160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.563404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.563431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.563566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.563596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.563930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.563959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.564347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.564375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.564724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.564753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.565047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.565076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.565416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.565444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.565772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.565799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.566154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.566183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.566557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.566585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.566838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.566865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.567236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.567265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.567572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.567601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.567929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.567958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.568330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.568358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.568705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.568734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.569070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.569099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.569463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.569491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.569836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.569865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.570191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.570220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.570577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.570605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.570957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.570986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.571300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.571328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.571498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.571526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.571736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.571769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.572124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.572153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.572473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.572501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.572828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.572855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.573073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.573103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.573441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.573469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.573653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.573684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.574014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.574044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.574284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.574312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.574674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.574702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.575073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.575101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.575439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.575468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.575685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.575713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.576061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.576093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.576400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.576428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.576693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.576720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.577072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.577101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.577454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.577482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.577797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.577824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.578055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.578083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.578452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.578479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.578802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.578828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.579145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.579174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.579504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.579532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.579867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.579904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.580217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.580247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.580574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.580602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.580974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.581010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.581343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.581372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.581688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.401 [2024-10-01 15:56:10.581716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.401 qpair failed and we were unable to recover it.
00:38:31.401 [2024-10-01 15:56:10.582072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.402 [2024-10-01 15:56:10.582101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.402 qpair failed and we were unable to recover it.
00:38:31.402 [2024-10-01 15:56:10.582444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.402 [2024-10-01 15:56:10.582471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.402 qpair failed and we were unable to recover it.
00:38:31.402 [2024-10-01 15:56:10.582843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.402 [2024-10-01 15:56:10.582871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.402 qpair failed and we were unable to recover it.
00:38:31.402 [2024-10-01 15:56:10.583233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.402 [2024-10-01 15:56:10.583263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.402 qpair failed and we were unable to recover it.
00:38:31.402 [2024-10-01 15:56:10.583609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.402 [2024-10-01 15:56:10.583636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.402 qpair failed and we were unable to recover it.
00:38:31.402 [2024-10-01 15:56:10.583970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.402 [2024-10-01 15:56:10.584000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.402 qpair failed and we were unable to recover it.
00:38:31.402 [2024-10-01 15:56:10.584312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.402 [2024-10-01 15:56:10.584340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.402 qpair failed and we were unable to recover it.
00:38:31.402 [2024-10-01 15:56:10.584686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.402 [2024-10-01 15:56:10.584714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.402 qpair failed and we were unable to recover it.
00:38:31.402 [2024-10-01 15:56:10.584866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.402 [2024-10-01 15:56:10.584924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.402 qpair failed and we were unable to recover it.
00:38:31.402 [2024-10-01 15:56:10.585275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.402 [2024-10-01 15:56:10.585304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.402 qpair failed and we were unable to recover it.
00:38:31.402 [2024-10-01 15:56:10.585628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.402 [2024-10-01 15:56:10.585657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.402 qpair failed and we were unable to recover it.
00:38:31.402 [2024-10-01 15:56:10.586011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.402 [2024-10-01 15:56:10.586042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.402 qpair failed and we were unable to recover it.
00:38:31.402 [2024-10-01 15:56:10.586288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.402 [2024-10-01 15:56:10.586316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.402 qpair failed and we were unable to recover it.
00:38:31.402 [2024-10-01 15:56:10.586617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.402 [2024-10-01 15:56:10.586644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.402 qpair failed and we were unable to recover it.
00:38:31.402 [2024-10-01 15:56:10.586888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.402 [2024-10-01 15:56:10.586926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.402 qpair failed and we were unable to recover it.
00:38:31.402 [2024-10-01 15:56:10.587248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.402 [2024-10-01 15:56:10.587276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.402 qpair failed and we were unable to recover it.
00:38:31.402 [2024-10-01 15:56:10.587422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.402 [2024-10-01 15:56:10.587448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.402 qpair failed and we were unable to recover it.
00:38:31.402 [2024-10-01 15:56:10.587809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.402 [2024-10-01 15:56:10.587837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.402 qpair failed and we were unable to recover it.
00:38:31.402 [2024-10-01 15:56:10.588162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 15:56:10.588191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 15:56:10.588542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 15:56:10.588571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 15:56:10.588927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 15:56:10.588955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 15:56:10.589276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 15:56:10.589304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 15:56:10.589601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 15:56:10.589629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 
00:38:31.402 [2024-10-01 15:56:10.589992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 15:56:10.590023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 15:56:10.590370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 15:56:10.590398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 15:56:10.590748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 15:56:10.590777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 15:56:10.591131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 15:56:10.591159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 15:56:10.591532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 15:56:10.591561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 
00:38:31.402 [2024-10-01 15:56:10.591889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 15:56:10.591928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 15:56:10.592278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 15:56:10.592305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 15:56:10.592583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 15:56:10.592609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 15:56:10.592944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 15:56:10.592974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 15:56:10.593277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 15:56:10.593306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 
00:38:31.402 [2024-10-01 15:56:10.593641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 15:56:10.593669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 15:56:10.593998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 15:56:10.594027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 15:56:10.594362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 15:56:10.594389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 15:56:10.594725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 15:56:10.594753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 15:56:10.595118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 15:56:10.595147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 
00:38:31.402 [2024-10-01 15:56:10.595388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 15:56:10.595416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 15:56:10.595794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 15:56:10.595821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 15:56:10.596212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 15:56:10.596242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 15:56:10.596559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 15:56:10.596586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 15:56:10.596805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 15:56:10.596832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 
00:38:31.402 [2024-10-01 15:56:10.597184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 15:56:10.597214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 15:56:10.597561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 15:56:10.597589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 15:56:10.597931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 15:56:10.597960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 15:56:10.598283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 15:56:10.598312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 15:56:10.598536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 15:56:10.598566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 
00:38:31.402 [2024-10-01 15:56:10.598939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 15:56:10.598970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 15:56:10.599298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 15:56:10.599325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 15:56:10.599651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 15:56:10.599680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 15:56:10.599979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 15:56:10.600009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 15:56:10.600388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 15:56:10.600417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 
00:38:31.402 [2024-10-01 15:56:10.600654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 15:56:10.600682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 15:56:10.601019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 15:56:10.601050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 15:56:10.601361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 15:56:10.601390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 15:56:10.601732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 15:56:10.601760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 15:56:10.602012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 15:56:10.602041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 
00:38:31.402 [2024-10-01 15:56:10.602388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 15:56:10.602417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 15:56:10.602792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 15:56:10.602822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 15:56:10.603139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 15:56:10.603169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 15:56:10.603415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 15:56:10.603443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 00:38:31.403 [2024-10-01 15:56:10.603761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 15:56:10.603789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 
00:38:31.403 [2024-10-01 15:56:10.604116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 15:56:10.604145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 00:38:31.403 [2024-10-01 15:56:10.604496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 15:56:10.604526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 00:38:31.403 [2024-10-01 15:56:10.604785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 15:56:10.604818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 00:38:31.403 [2024-10-01 15:56:10.605035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 15:56:10.605065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 00:38:31.403 [2024-10-01 15:56:10.605414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 15:56:10.605443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 
00:38:31.403 [2024-10-01 15:56:10.605773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 15:56:10.605801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 00:38:31.403 [2024-10-01 15:56:10.606120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 15:56:10.606150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 00:38:31.403 [2024-10-01 15:56:10.606463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 15:56:10.606494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 00:38:31.403 [2024-10-01 15:56:10.606818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 15:56:10.606847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 00:38:31.403 [2024-10-01 15:56:10.607231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 15:56:10.607261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 
00:38:31.403 [2024-10-01 15:56:10.607617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 15:56:10.607647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 00:38:31.403 [2024-10-01 15:56:10.607960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 15:56:10.607991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 00:38:31.403 [2024-10-01 15:56:10.608367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 15:56:10.608395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 00:38:31.403 [2024-10-01 15:56:10.608734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 15:56:10.608763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 00:38:31.403 [2024-10-01 15:56:10.609100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 15:56:10.609129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 
00:38:31.403 [2024-10-01 15:56:10.609469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 15:56:10.609498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 00:38:31.403 [2024-10-01 15:56:10.609871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 15:56:10.609920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 00:38:31.403 [2024-10-01 15:56:10.610262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 15:56:10.610292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 00:38:31.403 [2024-10-01 15:56:10.610533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 15:56:10.610562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 00:38:31.403 [2024-10-01 15:56:10.610891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 15:56:10.610930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 
00:38:31.403 [2024-10-01 15:56:10.611263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 15:56:10.611291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 00:38:31.403 [2024-10-01 15:56:10.611639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 15:56:10.611669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 00:38:31.403 [2024-10-01 15:56:10.611985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 15:56:10.612015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 00:38:31.403 [2024-10-01 15:56:10.612337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 15:56:10.612368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 00:38:31.403 [2024-10-01 15:56:10.612719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 15:56:10.612748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 
00:38:31.403 [2024-10-01 15:56:10.613131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 15:56:10.613161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 00:38:31.403 [2024-10-01 15:56:10.613471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 15:56:10.613499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 00:38:31.403 [2024-10-01 15:56:10.613834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 15:56:10.613863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 00:38:31.403 [2024-10-01 15:56:10.614189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 15:56:10.614219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 00:38:31.403 [2024-10-01 15:56:10.614561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 15:56:10.614596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 
00:38:31.403 [2024-10-01 15:56:10.614888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 15:56:10.614927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 00:38:31.403 [2024-10-01 15:56:10.615278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 15:56:10.615307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 00:38:31.403 [2024-10-01 15:56:10.615725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 15:56:10.615753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 00:38:31.403 [2024-10-01 15:56:10.616100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 15:56:10.616129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 00:38:31.403 [2024-10-01 15:56:10.616540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 15:56:10.616568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 
00:38:31.403 [2024-10-01 15:56:10.616914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 15:56:10.616944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 00:38:31.403 [2024-10-01 15:56:10.617327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 15:56:10.617355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 00:38:31.403 [2024-10-01 15:56:10.617716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 15:56:10.617745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 00:38:31.403 [2024-10-01 15:56:10.618085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 15:56:10.618114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 00:38:31.403 [2024-10-01 15:56:10.618487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 15:56:10.618515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 
00:38:31.403 [2024-10-01 15:56:10.618855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.403 [2024-10-01 15:56:10.618882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.403 qpair failed and we were unable to recover it.
00:38:31.403 [2024-10-01 15:56:10.619243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.403 [2024-10-01 15:56:10.619270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.403 qpair failed and we were unable to recover it.
00:38:31.403 [2024-10-01 15:56:10.619574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.403 [2024-10-01 15:56:10.619602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.403 qpair failed and we were unable to recover it.
00:38:31.403 [2024-10-01 15:56:10.619844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.403 [2024-10-01 15:56:10.619871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.403 qpair failed and we were unable to recover it.
00:38:31.403 [2024-10-01 15:56:10.620245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.403 [2024-10-01 15:56:10.620275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.403 qpair failed and we were unable to recover it.
00:38:31.403 [2024-10-01 15:56:10.620572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.403 [2024-10-01 15:56:10.620600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.403 qpair failed and we were unable to recover it.
00:38:31.403 [2024-10-01 15:56:10.620939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.403 [2024-10-01 15:56:10.620969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.403 qpair failed and we were unable to recover it.
00:38:31.403 [2024-10-01 15:56:10.621327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.403 [2024-10-01 15:56:10.621355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.403 qpair failed and we were unable to recover it.
00:38:31.403 [2024-10-01 15:56:10.621694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.403 [2024-10-01 15:56:10.621723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.403 qpair failed and we were unable to recover it.
00:38:31.403 [2024-10-01 15:56:10.622068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.403 [2024-10-01 15:56:10.622098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.403 qpair failed and we were unable to recover it.
00:38:31.403 [2024-10-01 15:56:10.622431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.403 [2024-10-01 15:56:10.622459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.403 qpair failed and we were unable to recover it.
00:38:31.403 [2024-10-01 15:56:10.622810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.403 [2024-10-01 15:56:10.622838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.403 qpair failed and we were unable to recover it.
00:38:31.403 [2024-10-01 15:56:10.623202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.403 [2024-10-01 15:56:10.623231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.623585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.623613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.624024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.624053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.624399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.624426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.624764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.624792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.625130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.625160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.625498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.625525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.625874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.625914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.626246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.626274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.626599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.626627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.626970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.627000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.627340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.627367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.627709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.627736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.628083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.628114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.628458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.628485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.628827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.628855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.629193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.629222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.629452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.629479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.629799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.629828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.630157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.630187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.630483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.630511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.630854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.630882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.631109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.631142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.631458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.631486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.631727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.631755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.632072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.632102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.632452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.632480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.632822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.632850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.633196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.633225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.633447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.633475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.633814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.633842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.634194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.634223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.634456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.634485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.634693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.634725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.635065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.635095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.635397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.635426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.635616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.635644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.635915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.635944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.636308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.636337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.636609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.636636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.637013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.637042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.637261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.637293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.637677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.637706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.638053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.638082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.638429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.638458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.638871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.638916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.639234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.639263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.639616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.639644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.639928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.639957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.640316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.640344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.640658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.640687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.641032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.641062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.641386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.641414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.641789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.641817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.642056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.642085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.642343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.642372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.642617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.642646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.642967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.642996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.643327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.643355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.643654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.643682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.644014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.644043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.644401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.644429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.404 [2024-10-01 15:56:10.644762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.404 [2024-10-01 15:56:10.644790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.404 qpair failed and we were unable to recover it.
00:38:31.405 [2024-10-01 15:56:10.645055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.405 [2024-10-01 15:56:10.645084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.405 qpair failed and we were unable to recover it.
00:38:31.405 [2024-10-01 15:56:10.645417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.405 [2024-10-01 15:56:10.645445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.405 qpair failed and we were unable to recover it.
00:38:31.405 [2024-10-01 15:56:10.645782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.405 [2024-10-01 15:56:10.645810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.405 qpair failed and we were unable to recover it.
00:38:31.405 [2024-10-01 15:56:10.646061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.405 [2024-10-01 15:56:10.646090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.405 qpair failed and we were unable to recover it.
00:38:31.405 [2024-10-01 15:56:10.646400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.405 [2024-10-01 15:56:10.646429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.405 qpair failed and we were unable to recover it.
00:38:31.405 [2024-10-01 15:56:10.646649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.405 [2024-10-01 15:56:10.646677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.405 qpair failed and we were unable to recover it.
00:38:31.405 [2024-10-01 15:56:10.647001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.405 [2024-10-01 15:56:10.647030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.405 qpair failed and we were unable to recover it.
00:38:31.405 [2024-10-01 15:56:10.647335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.405 [2024-10-01 15:56:10.647364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.405 qpair failed and we were unable to recover it.
00:38:31.405 [2024-10-01 15:56:10.647703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.405 [2024-10-01 15:56:10.647731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.405 qpair failed and we were unable to recover it.
00:38:31.405 [2024-10-01 15:56:10.648081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.405 [2024-10-01 15:56:10.648116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.405 qpair failed and we were unable to recover it.
00:38:31.405 [2024-10-01 15:56:10.648469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.405 [2024-10-01 15:56:10.648496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.405 qpair failed and we were unable to recover it.
00:38:31.405 [2024-10-01 15:56:10.648870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.405 [2024-10-01 15:56:10.648937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.405 qpair failed and we were unable to recover it.
00:38:31.405 [2024-10-01 15:56:10.649356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.405 [2024-10-01 15:56:10.649385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.405 qpair failed and we were unable to recover it.
00:38:31.405 [2024-10-01 15:56:10.649619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.405 [2024-10-01 15:56:10.649649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.405 qpair failed and we were unable to recover it.
00:38:31.405 [2024-10-01 15:56:10.649986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.405 [2024-10-01 15:56:10.650015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.405 qpair failed and we were unable to recover it.
00:38:31.405 [2024-10-01 15:56:10.650242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.405 [2024-10-01 15:56:10.650271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.405 qpair failed and we were unable to recover it.
00:38:31.405 [2024-10-01 15:56:10.650611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.405 [2024-10-01 15:56:10.650639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.405 qpair failed and we were unable to recover it.
00:38:31.405 [2024-10-01 15:56:10.650984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.405 [2024-10-01 15:56:10.651013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.405 qpair failed and we were unable to recover it.
00:38:31.405 [2024-10-01 15:56:10.651251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.405 [2024-10-01 15:56:10.651280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.405 qpair failed and we were unable to recover it.
00:38:31.405 [2024-10-01 15:56:10.651631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.405 [2024-10-01 15:56:10.651660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.405 qpair failed and we were unable to recover it.
00:38:31.405 [2024-10-01 15:56:10.652008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.405 [2024-10-01 15:56:10.652036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.405 qpair failed and we were unable to recover it.
00:38:31.405 [2024-10-01 15:56:10.652388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.405 [2024-10-01 15:56:10.652415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.405 qpair failed and we were unable to recover it.
00:38:31.405 [2024-10-01 15:56:10.652755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.405 [2024-10-01 15:56:10.652784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.405 qpair failed and we were unable to recover it.
00:38:31.405 [2024-10-01 15:56:10.653077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.405 [2024-10-01 15:56:10.653107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.405 qpair failed and we were unable to recover it.
00:38:31.405 [2024-10-01 15:56:10.653437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.405 [2024-10-01 15:56:10.653466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.405 qpair failed and we were unable to recover it.
00:38:31.405 [2024-10-01 15:56:10.653803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.405 [2024-10-01 15:56:10.653830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.405 qpair failed and we were unable to recover it.
00:38:31.405 [2024-10-01 15:56:10.654145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.405 [2024-10-01 15:56:10.654174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.405 qpair failed and we were unable to recover it.
00:38:31.405 [2024-10-01 15:56:10.654572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.405 [2024-10-01 15:56:10.654600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.405 qpair failed and we were unable to recover it.
00:38:31.405 [2024-10-01 15:56:10.654929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.405 [2024-10-01 15:56:10.654958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.405 qpair failed and we were unable to recover it.
00:38:31.405 [2024-10-01 15:56:10.655212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.405 [2024-10-01 15:56:10.655239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.405 qpair failed and we were unable to recover it.
00:38:31.405 [2024-10-01 15:56:10.655582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.405 [2024-10-01 15:56:10.655611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.405 qpair failed and we were unable to recover it.
00:38:31.405 [2024-10-01 15:56:10.655959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.405 [2024-10-01 15:56:10.655988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.405 qpair failed and we were unable to recover it.
00:38:31.405 [2024-10-01 15:56:10.656363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.405 [2024-10-01 15:56:10.656390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.405 qpair failed and we were unable to recover it.
00:38:31.405 [2024-10-01 15:56:10.656724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.405 [2024-10-01 15:56:10.656753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.405 qpair failed and we were unable to recover it.
00:38:31.405 [2024-10-01 15:56:10.657104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.405 [2024-10-01 15:56:10.657133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.405 qpair failed and we were unable to recover it.
00:38:31.405 [2024-10-01 15:56:10.657456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.405 [2024-10-01 15:56:10.657483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.405 qpair failed and we were unable to recover it.
00:38:31.405 [2024-10-01 15:56:10.657824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.405 [2024-10-01 15:56:10.657857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.405 qpair failed and we were unable to recover it.
00:38:31.405 [2024-10-01 15:56:10.658210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.405 [2024-10-01 15:56:10.658241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.405 qpair failed and we were unable to recover it.
00:38:31.405 [2024-10-01 15:56:10.658581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.405 [2024-10-01 15:56:10.658610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.405 qpair failed and we were unable to recover it.
00:38:31.405 [2024-10-01 15:56:10.658951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.405 [2024-10-01 15:56:10.658983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.405 qpair failed and we were unable to recover it.
00:38:31.405 [2024-10-01 15:56:10.659322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.405 [2024-10-01 15:56:10.659350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.405 qpair failed and we were unable to recover it. 00:38:31.405 [2024-10-01 15:56:10.659718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.405 [2024-10-01 15:56:10.659745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.405 qpair failed and we were unable to recover it. 00:38:31.405 [2024-10-01 15:56:10.660018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.405 [2024-10-01 15:56:10.660046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.405 qpair failed and we were unable to recover it. 00:38:31.405 [2024-10-01 15:56:10.660356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.405 [2024-10-01 15:56:10.660384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.405 qpair failed and we were unable to recover it. 00:38:31.405 [2024-10-01 15:56:10.660754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.405 [2024-10-01 15:56:10.660783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.405 qpair failed and we were unable to recover it. 
00:38:31.405 [2024-10-01 15:56:10.661218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.405 [2024-10-01 15:56:10.661247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.405 qpair failed and we were unable to recover it. 00:38:31.405 [2024-10-01 15:56:10.661574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.405 [2024-10-01 15:56:10.661602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.405 qpair failed and we were unable to recover it. 00:38:31.405 [2024-10-01 15:56:10.661932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.405 [2024-10-01 15:56:10.661961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.405 qpair failed and we were unable to recover it. 00:38:31.405 [2024-10-01 15:56:10.662188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.405 [2024-10-01 15:56:10.662221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.405 qpair failed and we were unable to recover it. 00:38:31.405 [2024-10-01 15:56:10.662471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.405 [2024-10-01 15:56:10.662502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.405 qpair failed and we were unable to recover it. 
00:38:31.405 [2024-10-01 15:56:10.662830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.405 [2024-10-01 15:56:10.662860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.405 qpair failed and we were unable to recover it. 00:38:31.405 [2024-10-01 15:56:10.663217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.405 [2024-10-01 15:56:10.663247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.405 qpair failed and we were unable to recover it. 00:38:31.405 [2024-10-01 15:56:10.663472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.405 [2024-10-01 15:56:10.663499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.405 qpair failed and we were unable to recover it. 00:38:31.405 [2024-10-01 15:56:10.663859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.405 [2024-10-01 15:56:10.663887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.405 qpair failed and we were unable to recover it. 00:38:31.405 [2024-10-01 15:56:10.664255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.405 [2024-10-01 15:56:10.664283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.405 qpair failed and we were unable to recover it. 
00:38:31.405 [2024-10-01 15:56:10.664612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.405 [2024-10-01 15:56:10.664641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.405 qpair failed and we were unable to recover it. 00:38:31.405 [2024-10-01 15:56:10.664884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.405 [2024-10-01 15:56:10.664923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.405 qpair failed and we were unable to recover it. 00:38:31.405 [2024-10-01 15:56:10.665247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.405 [2024-10-01 15:56:10.665275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.405 qpair failed and we were unable to recover it. 00:38:31.405 [2024-10-01 15:56:10.665577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.405 [2024-10-01 15:56:10.665604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.405 qpair failed and we were unable to recover it. 00:38:31.405 [2024-10-01 15:56:10.665941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.405 [2024-10-01 15:56:10.665986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.405 qpair failed and we were unable to recover it. 
00:38:31.405 [2024-10-01 15:56:10.666324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.405 [2024-10-01 15:56:10.666357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.405 qpair failed and we were unable to recover it. 00:38:31.405 [2024-10-01 15:56:10.666714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.405 [2024-10-01 15:56:10.666743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.405 qpair failed and we were unable to recover it. 00:38:31.405 [2024-10-01 15:56:10.667079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.405 [2024-10-01 15:56:10.667113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.405 qpair failed and we were unable to recover it. 00:38:31.405 [2024-10-01 15:56:10.667458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.405 [2024-10-01 15:56:10.667488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.405 qpair failed and we were unable to recover it. 00:38:31.405 [2024-10-01 15:56:10.667828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.405 [2024-10-01 15:56:10.667857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.405 qpair failed and we were unable to recover it. 
00:38:31.405 [2024-10-01 15:56:10.668188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.405 [2024-10-01 15:56:10.668217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.405 qpair failed and we were unable to recover it. 00:38:31.405 [2024-10-01 15:56:10.668555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.406 [2024-10-01 15:56:10.668583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.406 qpair failed and we were unable to recover it. 00:38:31.406 [2024-10-01 15:56:10.668823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.406 [2024-10-01 15:56:10.668850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.406 qpair failed and we were unable to recover it. 00:38:31.406 [2024-10-01 15:56:10.669200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.406 [2024-10-01 15:56:10.669229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.406 qpair failed and we were unable to recover it. 00:38:31.406 [2024-10-01 15:56:10.669655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.406 [2024-10-01 15:56:10.669683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.406 qpair failed and we were unable to recover it. 
00:38:31.406 [2024-10-01 15:56:10.670006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.406 [2024-10-01 15:56:10.670036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.406 qpair failed and we were unable to recover it. 00:38:31.406 [2024-10-01 15:56:10.670385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.406 [2024-10-01 15:56:10.670414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.406 qpair failed and we were unable to recover it. 00:38:31.406 [2024-10-01 15:56:10.670745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.406 [2024-10-01 15:56:10.670773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.406 qpair failed and we were unable to recover it. 00:38:31.406 [2024-10-01 15:56:10.671163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.406 [2024-10-01 15:56:10.671191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.406 qpair failed and we were unable to recover it. 00:38:31.406 [2024-10-01 15:56:10.671532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.406 [2024-10-01 15:56:10.671561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.406 qpair failed and we were unable to recover it. 
00:38:31.406 [2024-10-01 15:56:10.671928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.406 [2024-10-01 15:56:10.671960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.406 qpair failed and we were unable to recover it. 00:38:31.406 [2024-10-01 15:56:10.672303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.406 [2024-10-01 15:56:10.672331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.406 qpair failed and we were unable to recover it. 00:38:31.406 [2024-10-01 15:56:10.672653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.406 [2024-10-01 15:56:10.672680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.406 qpair failed and we were unable to recover it. 00:38:31.406 [2024-10-01 15:56:10.672998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.406 [2024-10-01 15:56:10.673027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.406 qpair failed and we were unable to recover it. 00:38:31.406 [2024-10-01 15:56:10.673279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.406 [2024-10-01 15:56:10.673306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.406 qpair failed and we were unable to recover it. 
00:38:31.406 [2024-10-01 15:56:10.673658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.406 [2024-10-01 15:56:10.673686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.406 qpair failed and we were unable to recover it. 00:38:31.406 [2024-10-01 15:56:10.673973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.406 [2024-10-01 15:56:10.674001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.406 qpair failed and we were unable to recover it. 00:38:31.406 [2024-10-01 15:56:10.674345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.406 [2024-10-01 15:56:10.674372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.406 qpair failed and we were unable to recover it. 00:38:31.406 [2024-10-01 15:56:10.674698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.406 [2024-10-01 15:56:10.674728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.406 qpair failed and we were unable to recover it. 00:38:31.406 [2024-10-01 15:56:10.674963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.406 [2024-10-01 15:56:10.674996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.406 qpair failed and we were unable to recover it. 
00:38:31.406 [2024-10-01 15:56:10.675312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.406 [2024-10-01 15:56:10.675339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.406 qpair failed and we were unable to recover it. 00:38:31.406 [2024-10-01 15:56:10.675680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.406 [2024-10-01 15:56:10.675708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.406 qpair failed and we were unable to recover it. 00:38:31.406 [2024-10-01 15:56:10.676033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.406 [2024-10-01 15:56:10.676063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.406 qpair failed and we were unable to recover it. 00:38:31.406 [2024-10-01 15:56:10.676394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.406 [2024-10-01 15:56:10.676422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.406 qpair failed and we were unable to recover it. 00:38:31.406 [2024-10-01 15:56:10.676763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.406 [2024-10-01 15:56:10.676791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.406 qpair failed and we were unable to recover it. 
00:38:31.406 [2024-10-01 15:56:10.677136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.406 [2024-10-01 15:56:10.677166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.406 qpair failed and we were unable to recover it. 00:38:31.406 [2024-10-01 15:56:10.677419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.406 [2024-10-01 15:56:10.677447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.406 qpair failed and we were unable to recover it. 00:38:31.406 [2024-10-01 15:56:10.677752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.406 [2024-10-01 15:56:10.677780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.406 qpair failed and we were unable to recover it. 00:38:31.406 [2024-10-01 15:56:10.678132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.406 [2024-10-01 15:56:10.678163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.406 qpair failed and we were unable to recover it. 00:38:31.406 [2024-10-01 15:56:10.678507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.406 [2024-10-01 15:56:10.678535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.406 qpair failed and we were unable to recover it. 
00:38:31.406 [2024-10-01 15:56:10.678885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.406 [2024-10-01 15:56:10.678924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.406 qpair failed and we were unable to recover it. 00:38:31.406 [2024-10-01 15:56:10.679247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.406 [2024-10-01 15:56:10.679275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.406 qpair failed and we were unable to recover it. 00:38:31.406 [2024-10-01 15:56:10.679614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.406 [2024-10-01 15:56:10.679642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.406 qpair failed and we were unable to recover it. 00:38:31.406 [2024-10-01 15:56:10.679953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.406 [2024-10-01 15:56:10.679982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.406 qpair failed and we were unable to recover it. 00:38:31.406 [2024-10-01 15:56:10.680257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.406 [2024-10-01 15:56:10.680284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.406 qpair failed and we were unable to recover it. 
00:38:31.406 [2024-10-01 15:56:10.680637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.406 [2024-10-01 15:56:10.680666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.406 qpair failed and we were unable to recover it. 00:38:31.406 [2024-10-01 15:56:10.681001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.406 [2024-10-01 15:56:10.681031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.406 qpair failed and we were unable to recover it. 00:38:31.406 [2024-10-01 15:56:10.681303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.406 [2024-10-01 15:56:10.681331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.406 qpair failed and we were unable to recover it. 00:38:31.406 [2024-10-01 15:56:10.681677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.406 [2024-10-01 15:56:10.681704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.406 qpair failed and we were unable to recover it. 00:38:31.406 [2024-10-01 15:56:10.682071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.406 [2024-10-01 15:56:10.682105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.406 qpair failed and we were unable to recover it. 
00:38:31.406 [2024-10-01 15:56:10.682454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.406 [2024-10-01 15:56:10.682481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.406 qpair failed and we were unable to recover it. 00:38:31.406 [2024-10-01 15:56:10.682906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.406 [2024-10-01 15:56:10.682937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.406 qpair failed and we were unable to recover it. 00:38:31.406 [2024-10-01 15:56:10.683304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.406 [2024-10-01 15:56:10.683332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.406 qpair failed and we were unable to recover it. 00:38:31.406 [2024-10-01 15:56:10.683691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.406 [2024-10-01 15:56:10.683719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.406 qpair failed and we were unable to recover it. 00:38:31.406 [2024-10-01 15:56:10.684072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.406 [2024-10-01 15:56:10.684099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.406 qpair failed and we were unable to recover it. 
00:38:31.406 [2024-10-01 15:56:10.684439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.406 [2024-10-01 15:56:10.684468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.406 qpair failed and we were unable to recover it. 00:38:31.406 [2024-10-01 15:56:10.684846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.406 [2024-10-01 15:56:10.684874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.406 qpair failed and we were unable to recover it. 00:38:31.406 [2024-10-01 15:56:10.685215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.406 [2024-10-01 15:56:10.685244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.406 qpair failed and we were unable to recover it. 00:38:31.406 [2024-10-01 15:56:10.685619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.406 [2024-10-01 15:56:10.685648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.406 qpair failed and we were unable to recover it. 00:38:31.406 [2024-10-01 15:56:10.685991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.406 [2024-10-01 15:56:10.686020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.406 qpair failed and we were unable to recover it. 
00:38:31.406 [2024-10-01 15:56:10.686337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.406 [2024-10-01 15:56:10.686365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.406 qpair failed and we were unable to recover it. 00:38:31.406 [2024-10-01 15:56:10.686716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.406 [2024-10-01 15:56:10.686744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.406 qpair failed and we were unable to recover it. 00:38:31.406 [2024-10-01 15:56:10.686980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.406 [2024-10-01 15:56:10.687009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.406 qpair failed and we were unable to recover it. 00:38:31.406 [2024-10-01 15:56:10.687368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.406 [2024-10-01 15:56:10.687397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.406 qpair failed and we were unable to recover it. 00:38:31.406 [2024-10-01 15:56:10.687736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.406 [2024-10-01 15:56:10.687767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.406 qpair failed and we were unable to recover it. 
00:38:31.408 [2024-10-01 15:56:10.720074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.408 [2024-10-01 15:56:10.720105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.408 qpair failed and we were unable to recover it.
00:38:31.408 [2024-10-01 15:56:10.720621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.408 [2024-10-01 15:56:10.720715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.408 qpair failed and we were unable to recover it.
00:38:31.408 [2024-10-01 15:56:10.721245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.408 [2024-10-01 15:56:10.721339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.408 qpair failed and we were unable to recover it.
00:38:31.408 [2024-10-01 15:56:10.721685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.408 [2024-10-01 15:56:10.721725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.408 qpair failed and we were unable to recover it.
00:38:31.408 [2024-10-01 15:56:10.722208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.408 [2024-10-01 15:56:10.722301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.408 qpair failed and we were unable to recover it.
00:38:31.408 [2024-10-01 15:56:10.728361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.408 [2024-10-01 15:56:10.728388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.408 qpair failed and we were unable to recover it. 00:38:31.408 [2024-10-01 15:56:10.728752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.408 [2024-10-01 15:56:10.728781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.408 qpair failed and we were unable to recover it. 00:38:31.408 [2024-10-01 15:56:10.729017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.408 [2024-10-01 15:56:10.729047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.408 qpair failed and we were unable to recover it. 00:38:31.408 [2024-10-01 15:56:10.729390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.408 [2024-10-01 15:56:10.729422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.408 qpair failed and we were unable to recover it. 00:38:31.408 [2024-10-01 15:56:10.729737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.408 [2024-10-01 15:56:10.729765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.408 qpair failed and we were unable to recover it. 
00:38:31.408 [2024-10-01 15:56:10.730083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.408 [2024-10-01 15:56:10.730115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.408 qpair failed and we were unable to recover it. 00:38:31.408 [2024-10-01 15:56:10.730475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.408 [2024-10-01 15:56:10.730505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.408 qpair failed and we were unable to recover it. 00:38:31.408 [2024-10-01 15:56:10.730830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.408 [2024-10-01 15:56:10.730859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.408 qpair failed and we were unable to recover it. 00:38:31.408 [2024-10-01 15:56:10.731178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.408 [2024-10-01 15:56:10.731209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.408 qpair failed and we were unable to recover it. 00:38:31.408 [2024-10-01 15:56:10.731589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.408 [2024-10-01 15:56:10.731624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.408 qpair failed and we were unable to recover it. 
00:38:31.408 [2024-10-01 15:56:10.731951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.408 [2024-10-01 15:56:10.731980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.408 qpair failed and we were unable to recover it. 00:38:31.408 [2024-10-01 15:56:10.732340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.408 [2024-10-01 15:56:10.732368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.408 qpair failed and we were unable to recover it. 00:38:31.408 [2024-10-01 15:56:10.732721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.408 [2024-10-01 15:56:10.732751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.408 qpair failed and we were unable to recover it. 00:38:31.408 [2024-10-01 15:56:10.733115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.408 [2024-10-01 15:56:10.733145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.408 qpair failed and we were unable to recover it. 00:38:31.408 [2024-10-01 15:56:10.733504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.408 [2024-10-01 15:56:10.733533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.408 qpair failed and we were unable to recover it. 
00:38:31.408 [2024-10-01 15:56:10.733869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.408 [2024-10-01 15:56:10.733906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.408 qpair failed and we were unable to recover it. 00:38:31.408 [2024-10-01 15:56:10.734166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.408 [2024-10-01 15:56:10.734193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.408 qpair failed and we were unable to recover it. 00:38:31.408 [2024-10-01 15:56:10.734415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.408 [2024-10-01 15:56:10.734442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.408 qpair failed and we were unable to recover it. 00:38:31.408 [2024-10-01 15:56:10.734777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.408 [2024-10-01 15:56:10.734809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.408 qpair failed and we were unable to recover it. 00:38:31.408 [2024-10-01 15:56:10.735150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.408 [2024-10-01 15:56:10.735181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.408 qpair failed and we were unable to recover it. 
00:38:31.409 [2024-10-01 15:56:10.735539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.735569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 00:38:31.409 [2024-10-01 15:56:10.735957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.735986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 00:38:31.409 [2024-10-01 15:56:10.736230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.736259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 00:38:31.409 [2024-10-01 15:56:10.736648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.736678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 00:38:31.409 [2024-10-01 15:56:10.737024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.737054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 
00:38:31.409 [2024-10-01 15:56:10.737268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.737296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 00:38:31.409 [2024-10-01 15:56:10.737677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.737706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 00:38:31.409 [2024-10-01 15:56:10.738049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.738083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 00:38:31.409 [2024-10-01 15:56:10.738423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.738451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 00:38:31.409 [2024-10-01 15:56:10.738805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.738833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 
00:38:31.409 [2024-10-01 15:56:10.739253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.739283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 00:38:31.409 [2024-10-01 15:56:10.739512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.739543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 00:38:31.409 [2024-10-01 15:56:10.739917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.739949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 00:38:31.409 [2024-10-01 15:56:10.740310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.740339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 00:38:31.409 [2024-10-01 15:56:10.740671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.740700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 
00:38:31.409 [2024-10-01 15:56:10.741062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.741092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 00:38:31.409 [2024-10-01 15:56:10.741325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.741357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 00:38:31.409 [2024-10-01 15:56:10.741731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.741762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 00:38:31.409 [2024-10-01 15:56:10.742114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.742144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 00:38:31.409 [2024-10-01 15:56:10.742511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.742540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 
00:38:31.409 [2024-10-01 15:56:10.742811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.742839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 00:38:31.409 [2024-10-01 15:56:10.743106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.743135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 00:38:31.409 [2024-10-01 15:56:10.743508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.743536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 00:38:31.409 [2024-10-01 15:56:10.743919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.743950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 00:38:31.409 [2024-10-01 15:56:10.744315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.744344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 
00:38:31.409 [2024-10-01 15:56:10.744735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.744763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 00:38:31.409 [2024-10-01 15:56:10.745114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.745142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 00:38:31.409 [2024-10-01 15:56:10.745486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.745515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 00:38:31.409 [2024-10-01 15:56:10.745874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.745913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 00:38:31.409 [2024-10-01 15:56:10.746137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.746173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 
00:38:31.409 [2024-10-01 15:56:10.746579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.746607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 00:38:31.409 [2024-10-01 15:56:10.746961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.746992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 00:38:31.409 [2024-10-01 15:56:10.747344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.747373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 00:38:31.409 [2024-10-01 15:56:10.747728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.747756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 00:38:31.409 [2024-10-01 15:56:10.748116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.748147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 
00:38:31.409 [2024-10-01 15:56:10.748506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.748536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 00:38:31.409 [2024-10-01 15:56:10.748889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.748949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 00:38:31.409 [2024-10-01 15:56:10.749288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.749316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 00:38:31.409 [2024-10-01 15:56:10.749566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.749595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 00:38:31.409 [2024-10-01 15:56:10.749947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.749976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 
00:38:31.409 [2024-10-01 15:56:10.750355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.750384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 00:38:31.409 [2024-10-01 15:56:10.750629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.750657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 00:38:31.409 [2024-10-01 15:56:10.750986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.751017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 00:38:31.409 [2024-10-01 15:56:10.751361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.751389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 00:38:31.409 [2024-10-01 15:56:10.751743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.751770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 
00:38:31.409 [2024-10-01 15:56:10.752026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.752055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 00:38:31.409 [2024-10-01 15:56:10.752431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.752459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 00:38:31.409 [2024-10-01 15:56:10.752853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.752882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 00:38:31.409 [2024-10-01 15:56:10.753164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.753199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 00:38:31.409 [2024-10-01 15:56:10.753556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.753585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 
00:38:31.409 [2024-10-01 15:56:10.753816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.753847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 00:38:31.409 [2024-10-01 15:56:10.754211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.754242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 00:38:31.409 [2024-10-01 15:56:10.754608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.754637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 00:38:31.409 [2024-10-01 15:56:10.754974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.755003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 00:38:31.409 [2024-10-01 15:56:10.755350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.755379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 
00:38:31.409 [2024-10-01 15:56:10.755759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.755788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 00:38:31.409 [2024-10-01 15:56:10.755964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.755995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 00:38:31.409 [2024-10-01 15:56:10.756340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.756369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 00:38:31.409 [2024-10-01 15:56:10.756751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.756780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 00:38:31.409 [2024-10-01 15:56:10.756991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.409 [2024-10-01 15:56:10.757020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.409 qpair failed and we were unable to recover it. 
00:38:31.411 [2024-10-01 15:56:10.796659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.411 [2024-10-01 15:56:10.796687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.411 qpair failed and we were unable to recover it. 00:38:31.411 [2024-10-01 15:56:10.797043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.411 [2024-10-01 15:56:10.797072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.411 qpair failed and we were unable to recover it. 00:38:31.411 [2024-10-01 15:56:10.797463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.411 [2024-10-01 15:56:10.797490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.411 qpair failed and we were unable to recover it. 00:38:31.411 [2024-10-01 15:56:10.797837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.411 [2024-10-01 15:56:10.797865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.411 qpair failed and we were unable to recover it. 00:38:31.411 [2024-10-01 15:56:10.798120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.411 [2024-10-01 15:56:10.798149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.411 qpair failed and we were unable to recover it. 
00:38:31.411 [2024-10-01 15:56:10.798471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.411 [2024-10-01 15:56:10.798499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.411 qpair failed and we were unable to recover it. 00:38:31.411 [2024-10-01 15:56:10.798860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.411 [2024-10-01 15:56:10.798905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.411 qpair failed and we were unable to recover it. 00:38:31.411 [2024-10-01 15:56:10.799113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.411 [2024-10-01 15:56:10.799143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.411 qpair failed and we were unable to recover it. 00:38:31.411 [2024-10-01 15:56:10.799511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.411 [2024-10-01 15:56:10.799539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.411 qpair failed and we were unable to recover it. 00:38:31.411 [2024-10-01 15:56:10.799880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.411 [2024-10-01 15:56:10.799920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.411 qpair failed and we were unable to recover it. 
00:38:31.411 [2024-10-01 15:56:10.800274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.411 [2024-10-01 15:56:10.800303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.411 qpair failed and we were unable to recover it. 00:38:31.411 [2024-10-01 15:56:10.800663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.800691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 15:56:10.801051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.801080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 15:56:10.801436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.801464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 15:56:10.801805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.801834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 
00:38:31.412 [2024-10-01 15:56:10.802179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.802208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 15:56:10.802566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.802593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 15:56:10.802958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.802987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 15:56:10.803227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.803254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 15:56:10.803591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.803619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 
00:38:31.412 [2024-10-01 15:56:10.803970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.803999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 15:56:10.804380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.804408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 15:56:10.804616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.804645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 15:56:10.804999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.805030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 15:56:10.805368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.805398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 
00:38:31.412 [2024-10-01 15:56:10.805806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.805835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 15:56:10.806189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.806218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 15:56:10.806568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.806596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 15:56:10.806837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.806867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 15:56:10.807242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.807270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 
00:38:31.412 [2024-10-01 15:56:10.807612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.807641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 15:56:10.808009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.808038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 15:56:10.808457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.808484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 15:56:10.808837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.808865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 15:56:10.809229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.809259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 
00:38:31.412 [2024-10-01 15:56:10.809592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.809621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 15:56:10.809976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.810007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 15:56:10.810354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.810382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 15:56:10.810737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.810764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 15:56:10.811133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.811163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 
00:38:31.412 [2024-10-01 15:56:10.811526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.811560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 15:56:10.811924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.811954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 15:56:10.812313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.812342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 15:56:10.812708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.812738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 15:56:10.813102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.813130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 
00:38:31.412 [2024-10-01 15:56:10.813485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.813513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 15:56:10.813867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.813915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 15:56:10.814257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.814285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 15:56:10.814664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.814692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 15:56:10.815047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.815077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 
00:38:31.412 [2024-10-01 15:56:10.815437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.815465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 15:56:10.815811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.815839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 15:56:10.816199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.816228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 15:56:10.816645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.816674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 15:56:10.816913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.816944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 
00:38:31.412 [2024-10-01 15:56:10.817273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.817301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 15:56:10.817663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.817691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 15:56:10.818055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.818085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 15:56:10.818462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.818489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 15:56:10.818743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.818771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 
00:38:31.412 [2024-10-01 15:56:10.819167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.819196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 15:56:10.819552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.819580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 15:56:10.819955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.819983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 15:56:10.820215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.820245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 15:56:10.820610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.820637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 
00:38:31.412 [2024-10-01 15:56:10.820998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.821026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 15:56:10.821301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.821331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 15:56:10.821703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.821732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 15:56:10.822107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.822137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 15:56:10.822473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.822501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 
00:38:31.412 [2024-10-01 15:56:10.822726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.822754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 15:56:10.823113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.823144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 15:56:10.823458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.823487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 15:56:10.823814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.823843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 15:56:10.824282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.824312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 
00:38:31.412 [2024-10-01 15:56:10.824660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.824689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 15:56:10.825017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.825046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 15:56:10.825402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.825430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 15:56:10.825839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 15:56:10.825867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 15:56:10.826238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 15:56:10.826269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 
00:38:31.413 [2024-10-01 15:56:10.826593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 15:56:10.826621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 00:38:31.413 [2024-10-01 15:56:10.826948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 15:56:10.826977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 00:38:31.413 [2024-10-01 15:56:10.827331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 15:56:10.827358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 00:38:31.413 [2024-10-01 15:56:10.827715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 15:56:10.827742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 00:38:31.413 [2024-10-01 15:56:10.828090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 15:56:10.828119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 
00:38:31.413 [2024-10-01 15:56:10.828420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 15:56:10.828448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 00:38:31.413 [2024-10-01 15:56:10.828801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 15:56:10.828836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 00:38:31.413 [2024-10-01 15:56:10.829095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 15:56:10.829127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 00:38:31.413 [2024-10-01 15:56:10.829481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 15:56:10.829509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 00:38:31.413 [2024-10-01 15:56:10.829937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 15:56:10.829967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 
00:38:31.413 [2024-10-01 15:56:10.830320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 15:56:10.830347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 00:38:31.413 [2024-10-01 15:56:10.830741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 15:56:10.830768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 00:38:31.413 [2024-10-01 15:56:10.831125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 15:56:10.831154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 00:38:31.413 [2024-10-01 15:56:10.831542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 15:56:10.831569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 00:38:31.413 [2024-10-01 15:56:10.831982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 15:56:10.832011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 
00:38:31.413 [2024-10-01 15:56:10.832241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 15:56:10.832268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 00:38:31.413 [2024-10-01 15:56:10.832624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 15:56:10.832652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 00:38:31.413 [2024-10-01 15:56:10.833015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 15:56:10.833043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 00:38:31.413 [2024-10-01 15:56:10.833385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 15:56:10.833413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 00:38:31.413 [2024-10-01 15:56:10.833765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 15:56:10.833793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 
00:38:31.413 [2024-10-01 15:56:10.834164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 15:56:10.834193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 00:38:31.413 [2024-10-01 15:56:10.834553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 15:56:10.834581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 00:38:31.413 [2024-10-01 15:56:10.834946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 15:56:10.834976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 00:38:31.413 [2024-10-01 15:56:10.835336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 15:56:10.835366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 00:38:31.413 [2024-10-01 15:56:10.835714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 15:56:10.835743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 
00:38:31.413 [2024-10-01 15:56:10.836122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 15:56:10.836152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 00:38:31.413 [2024-10-01 15:56:10.836395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 15:56:10.836425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 00:38:31.413 [2024-10-01 15:56:10.836754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 15:56:10.836781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 00:38:31.413 [2024-10-01 15:56:10.837184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 15:56:10.837212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 00:38:31.413 [2024-10-01 15:56:10.837531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 15:56:10.837560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 
00:38:31.413 [2024-10-01 15:56:10.837917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 15:56:10.837947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 00:38:31.413 [2024-10-01 15:56:10.840468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 15:56:10.840534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 00:38:31.413 [2024-10-01 15:56:10.840929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 15:56:10.840967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 00:38:31.413 [2024-10-01 15:56:10.841345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 15:56:10.841375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 00:38:31.413 [2024-10-01 15:56:10.841726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 15:56:10.841754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 
00:38:31.413 [2024-10-01 15:56:10.842083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 15:56:10.842113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 00:38:31.413 [2024-10-01 15:56:10.842467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 15:56:10.842495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 00:38:31.413 [2024-10-01 15:56:10.842860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 15:56:10.842888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 00:38:31.413 [2024-10-01 15:56:10.843248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 15:56:10.843280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 00:38:31.413 [2024-10-01 15:56:10.843627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 15:56:10.843655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 
00:38:31.685 [2024-10-01 15:56:10.844015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.844048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 00:38:31.685 [2024-10-01 15:56:10.844396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.844424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 00:38:31.685 [2024-10-01 15:56:10.844761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.844788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 00:38:31.685 [2024-10-01 15:56:10.845181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.845211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 00:38:31.685 [2024-10-01 15:56:10.845579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.845608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 
00:38:31.685 [2024-10-01 15:56:10.845863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.845908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 00:38:31.685 [2024-10-01 15:56:10.846264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.846299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 00:38:31.685 [2024-10-01 15:56:10.846654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.846683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 00:38:31.685 [2024-10-01 15:56:10.847059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.847088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 00:38:31.685 [2024-10-01 15:56:10.847445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.847474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 
00:38:31.685 [2024-10-01 15:56:10.847835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.847864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 00:38:31.685 [2024-10-01 15:56:10.848200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.848229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 00:38:31.685 [2024-10-01 15:56:10.848586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.848614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 00:38:31.685 [2024-10-01 15:56:10.848857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.848886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 00:38:31.685 [2024-10-01 15:56:10.849278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.849308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 
00:38:31.685 [2024-10-01 15:56:10.849566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.849595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 00:38:31.685 [2024-10-01 15:56:10.849828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.849861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 00:38:31.685 [2024-10-01 15:56:10.850249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.850280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 00:38:31.685 [2024-10-01 15:56:10.850671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.850699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 00:38:31.685 [2024-10-01 15:56:10.851053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.851083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 
00:38:31.685 [2024-10-01 15:56:10.851442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.851470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 00:38:31.685 [2024-10-01 15:56:10.851688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.851719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 00:38:31.685 [2024-10-01 15:56:10.852080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.852124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 00:38:31.685 [2024-10-01 15:56:10.852516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.852545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 00:38:31.685 [2024-10-01 15:56:10.852916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.852948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 
00:38:31.685 [2024-10-01 15:56:10.853324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.853352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 00:38:31.685 [2024-10-01 15:56:10.853711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.853739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 00:38:31.685 [2024-10-01 15:56:10.854082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.854114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 00:38:31.685 [2024-10-01 15:56:10.854469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.854497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 00:38:31.685 [2024-10-01 15:56:10.854855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.854883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 
00:38:31.685 [2024-10-01 15:56:10.855247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.855277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 00:38:31.685 [2024-10-01 15:56:10.855642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.855670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 00:38:31.685 [2024-10-01 15:56:10.856007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.856036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 00:38:31.685 [2024-10-01 15:56:10.856378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.856406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 00:38:31.685 [2024-10-01 15:56:10.856779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.856806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 
00:38:31.685 [2024-10-01 15:56:10.857151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.857180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 00:38:31.685 [2024-10-01 15:56:10.857569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.857596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 00:38:31.685 [2024-10-01 15:56:10.857959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.857988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 00:38:31.685 [2024-10-01 15:56:10.858350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.858378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 00:38:31.685 [2024-10-01 15:56:10.858732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.858760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 
00:38:31.685 [2024-10-01 15:56:10.859114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.859143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 00:38:31.685 [2024-10-01 15:56:10.859546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.859573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 00:38:31.685 [2024-10-01 15:56:10.859950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.859986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 00:38:31.685 [2024-10-01 15:56:10.860333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.860361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 00:38:31.685 [2024-10-01 15:56:10.860613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.860643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 
00:38:31.685 [2024-10-01 15:56:10.860929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.860960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 00:38:31.685 [2024-10-01 15:56:10.861324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.861359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 00:38:31.685 [2024-10-01 15:56:10.861719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.861750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 00:38:31.685 [2024-10-01 15:56:10.861985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.862017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 00:38:31.685 [2024-10-01 15:56:10.862241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.862272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 
00:38:31.685 [2024-10-01 15:56:10.862658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.862688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 00:38:31.685 [2024-10-01 15:56:10.863030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.863059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 00:38:31.685 [2024-10-01 15:56:10.863424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.863451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 00:38:31.685 [2024-10-01 15:56:10.863808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.863838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 00:38:31.685 [2024-10-01 15:56:10.864241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.864271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 
00:38:31.685 [2024-10-01 15:56:10.864627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.864655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 00:38:31.685 [2024-10-01 15:56:10.864918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.864947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 00:38:31.685 [2024-10-01 15:56:10.865169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.865200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 00:38:31.685 [2024-10-01 15:56:10.865551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.865579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 00:38:31.685 [2024-10-01 15:56:10.865937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.865968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 
00:38:31.685 [2024-10-01 15:56:10.866322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.866350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 00:38:31.685 [2024-10-01 15:56:10.866723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.685 [2024-10-01 15:56:10.866751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.685 qpair failed and we were unable to recover it. 00:38:31.685 [2024-10-01 15:56:10.867121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.867151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 00:38:31.686 [2024-10-01 15:56:10.867499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.867528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 00:38:31.686 [2024-10-01 15:56:10.867889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.867931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 
00:38:31.686 [2024-10-01 15:56:10.868293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.868320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 00:38:31.686 [2024-10-01 15:56:10.868686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.868715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 00:38:31.686 [2024-10-01 15:56:10.869086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.869114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 00:38:31.686 [2024-10-01 15:56:10.869484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.869513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 00:38:31.686 [2024-10-01 15:56:10.869873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.869914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 
00:38:31.686 [2024-10-01 15:56:10.870270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.870298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 00:38:31.686 [2024-10-01 15:56:10.870643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.870671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 00:38:31.686 [2024-10-01 15:56:10.871013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.871041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 00:38:31.686 [2024-10-01 15:56:10.871375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.871403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 00:38:31.686 [2024-10-01 15:56:10.871749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.871778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 
00:38:31.686 [2024-10-01 15:56:10.872153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.872182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 00:38:31.686 [2024-10-01 15:56:10.872538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.872566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 00:38:31.686 [2024-10-01 15:56:10.872925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.872953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 00:38:31.686 [2024-10-01 15:56:10.873294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.873323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 00:38:31.686 [2024-10-01 15:56:10.873675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.873702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 
00:38:31.686 [2024-10-01 15:56:10.874074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.874104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 00:38:31.686 [2024-10-01 15:56:10.874467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.874495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 00:38:31.686 [2024-10-01 15:56:10.874848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.874874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 00:38:31.686 [2024-10-01 15:56:10.875241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.875270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 00:38:31.686 [2024-10-01 15:56:10.875633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.875661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 
00:38:31.686 [2024-10-01 15:56:10.876029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.876060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 00:38:31.686 [2024-10-01 15:56:10.876423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.876458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 00:38:31.686 [2024-10-01 15:56:10.876821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.876849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 00:38:31.686 [2024-10-01 15:56:10.877217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.877247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 00:38:31.686 [2024-10-01 15:56:10.877608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.877637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 
00:38:31.686 [2024-10-01 15:56:10.878001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.878030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 00:38:31.686 [2024-10-01 15:56:10.878397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.878424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 00:38:31.686 [2024-10-01 15:56:10.878789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.878818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 00:38:31.686 [2024-10-01 15:56:10.879193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.879224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 00:38:31.686 [2024-10-01 15:56:10.879606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.879634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 
00:38:31.686 [2024-10-01 15:56:10.880005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.880034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 00:38:31.686 [2024-10-01 15:56:10.880326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.880353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 00:38:31.686 [2024-10-01 15:56:10.880720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.880748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 00:38:31.686 [2024-10-01 15:56:10.881123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.881154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 00:38:31.686 [2024-10-01 15:56:10.881509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.881538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 
00:38:31.686 [2024-10-01 15:56:10.881965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.881995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 00:38:31.686 [2024-10-01 15:56:10.882361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.882390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 00:38:31.686 [2024-10-01 15:56:10.882617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.882648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 00:38:31.686 [2024-10-01 15:56:10.883003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.883034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 00:38:31.686 [2024-10-01 15:56:10.883395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.883424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 
00:38:31.686 [2024-10-01 15:56:10.883794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.883823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 00:38:31.686 [2024-10-01 15:56:10.884188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.884218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 00:38:31.686 [2024-10-01 15:56:10.884583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.884610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 00:38:31.686 [2024-10-01 15:56:10.884981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.885011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 00:38:31.686 [2024-10-01 15:56:10.885365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.885394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 
00:38:31.686 [2024-10-01 15:56:10.885753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.885781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 00:38:31.686 [2024-10-01 15:56:10.886102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.886131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 00:38:31.686 [2024-10-01 15:56:10.886496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.886524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 00:38:31.686 [2024-10-01 15:56:10.886909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.886939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 00:38:31.686 [2024-10-01 15:56:10.887289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.887318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 
00:38:31.686 [2024-10-01 15:56:10.887683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.887710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 00:38:31.686 [2024-10-01 15:56:10.888069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.888098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 00:38:31.686 [2024-10-01 15:56:10.888458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.888488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 00:38:31.686 [2024-10-01 15:56:10.888843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.888871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 00:38:31.686 [2024-10-01 15:56:10.889240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.889269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 
00:38:31.686 [2024-10-01 15:56:10.889656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.889685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 00:38:31.686 [2024-10-01 15:56:10.890047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.890076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 00:38:31.686 [2024-10-01 15:56:10.890453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.890481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 00:38:31.686 [2024-10-01 15:56:10.890874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.890927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 00:38:31.686 [2024-10-01 15:56:10.891273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.686 [2024-10-01 15:56:10.891302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.686 qpair failed and we were unable to recover it. 
00:38:31.686 [2024-10-01 15:56:10.891642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.687 [2024-10-01 15:56:10.891669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.687 qpair failed and we were unable to recover it. 00:38:31.687 [2024-10-01 15:56:10.892002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.687 [2024-10-01 15:56:10.892038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.687 qpair failed and we were unable to recover it. 00:38:31.687 [2024-10-01 15:56:10.892370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.687 [2024-10-01 15:56:10.892398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.687 qpair failed and we were unable to recover it. 00:38:31.687 [2024-10-01 15:56:10.892758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.687 [2024-10-01 15:56:10.892787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.687 qpair failed and we were unable to recover it. 00:38:31.687 [2024-10-01 15:56:10.893043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.687 [2024-10-01 15:56:10.893076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.687 qpair failed and we were unable to recover it. 
00:38:31.687 [2024-10-01 15:56:10.893402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.687 [2024-10-01 15:56:10.893431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.687 qpair failed and we were unable to recover it. 00:38:31.687 [2024-10-01 15:56:10.893766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.687 [2024-10-01 15:56:10.893795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.687 qpair failed and we were unable to recover it. 00:38:31.687 [2024-10-01 15:56:10.894175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.687 [2024-10-01 15:56:10.894204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.687 qpair failed and we were unable to recover it. 00:38:31.687 [2024-10-01 15:56:10.894574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.687 [2024-10-01 15:56:10.894601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.687 qpair failed and we were unable to recover it. 00:38:31.687 [2024-10-01 15:56:10.894966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.687 [2024-10-01 15:56:10.894995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.687 qpair failed and we were unable to recover it. 
00:38:31.687 [2024-10-01 15:56:10.895389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.687 [2024-10-01 15:56:10.895418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.687 qpair failed and we were unable to recover it. 00:38:31.687 [2024-10-01 15:56:10.895786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.687 [2024-10-01 15:56:10.895815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.687 qpair failed and we were unable to recover it. 00:38:31.687 [2024-10-01 15:56:10.896064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.687 [2024-10-01 15:56:10.896092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.687 qpair failed and we were unable to recover it. 00:38:31.687 [2024-10-01 15:56:10.896456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.687 [2024-10-01 15:56:10.896486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.687 qpair failed and we were unable to recover it. 00:38:31.687 [2024-10-01 15:56:10.896859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.687 [2024-10-01 15:56:10.896888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.687 qpair failed and we were unable to recover it. 
00:38:31.687 [2024-10-01 15:56:10.897263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.687 [2024-10-01 15:56:10.897293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.687 qpair failed and we were unable to recover it. 00:38:31.687 [2024-10-01 15:56:10.897662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.687 [2024-10-01 15:56:10.897693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.687 qpair failed and we were unable to recover it. 00:38:31.687 [2024-10-01 15:56:10.898050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.687 [2024-10-01 15:56:10.898080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.687 qpair failed and we were unable to recover it. 00:38:31.687 [2024-10-01 15:56:10.898299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.687 [2024-10-01 15:56:10.898329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.687 qpair failed and we were unable to recover it. 00:38:31.687 [2024-10-01 15:56:10.898684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.687 [2024-10-01 15:56:10.898713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.687 qpair failed and we were unable to recover it. 
00:38:31.687 [2024-10-01 15:56:10.899093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.687 [2024-10-01 15:56:10.899124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.687 qpair failed and we were unable to recover it.
00:38:31.687-00:38:31.689 [the same three-line error sequence repeats for every reconnect attempt from 15:56:10.899374 through 15:56:10.942519: connect() to 10.0.0.2, port 4420 fails with errno = 111, the sock connection error is reported for tqpair=0x7f2398000b90, and each time the qpair fails and cannot be recovered]
00:38:31.689 [2024-10-01 15:56:10.942866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.689 [2024-10-01 15:56:10.942906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.689 qpair failed and we were unable to recover it. 00:38:31.689 [2024-10-01 15:56:10.943222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.689 [2024-10-01 15:56:10.943250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.689 qpair failed and we were unable to recover it. 00:38:31.689 [2024-10-01 15:56:10.943593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.689 [2024-10-01 15:56:10.943622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.689 qpair failed and we were unable to recover it. 00:38:31.689 [2024-10-01 15:56:10.944005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.689 [2024-10-01 15:56:10.944034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.689 qpair failed and we were unable to recover it. 00:38:31.689 [2024-10-01 15:56:10.944401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.689 [2024-10-01 15:56:10.944430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.689 qpair failed and we were unable to recover it. 
00:38:31.689 [2024-10-01 15:56:10.944770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.689 [2024-10-01 15:56:10.944798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.689 qpair failed and we were unable to recover it. 00:38:31.689 [2024-10-01 15:56:10.945148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.689 [2024-10-01 15:56:10.945177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.689 qpair failed and we were unable to recover it. 00:38:31.689 [2024-10-01 15:56:10.945546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.689 [2024-10-01 15:56:10.945573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.689 qpair failed and we were unable to recover it. 00:38:31.689 [2024-10-01 15:56:10.945832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.689 [2024-10-01 15:56:10.945859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.689 qpair failed and we were unable to recover it. 00:38:31.689 [2024-10-01 15:56:10.946144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.689 [2024-10-01 15:56:10.946174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.689 qpair failed and we were unable to recover it. 
00:38:31.689 [2024-10-01 15:56:10.946542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.689 [2024-10-01 15:56:10.946570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.689 qpair failed and we were unable to recover it. 00:38:31.689 [2024-10-01 15:56:10.946950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.689 [2024-10-01 15:56:10.946981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.689 qpair failed and we were unable to recover it. 00:38:31.689 [2024-10-01 15:56:10.947339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.689 [2024-10-01 15:56:10.947368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.689 qpair failed and we were unable to recover it. 00:38:31.689 [2024-10-01 15:56:10.947717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.689 [2024-10-01 15:56:10.947745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.689 qpair failed and we were unable to recover it. 00:38:31.689 [2024-10-01 15:56:10.948109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.689 [2024-10-01 15:56:10.948138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.689 qpair failed and we were unable to recover it. 
00:38:31.689 [2024-10-01 15:56:10.948524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.689 [2024-10-01 15:56:10.948552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.689 qpair failed and we were unable to recover it. 00:38:31.689 [2024-10-01 15:56:10.948996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.689 [2024-10-01 15:56:10.949025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.689 qpair failed and we were unable to recover it. 00:38:31.689 [2024-10-01 15:56:10.949393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.689 [2024-10-01 15:56:10.949423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.689 qpair failed and we were unable to recover it. 00:38:31.689 [2024-10-01 15:56:10.949794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.689 [2024-10-01 15:56:10.949822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.689 qpair failed and we were unable to recover it. 00:38:31.689 [2024-10-01 15:56:10.950174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.689 [2024-10-01 15:56:10.950204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.689 qpair failed and we were unable to recover it. 
00:38:31.689 [2024-10-01 15:56:10.950576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.689 [2024-10-01 15:56:10.950604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.689 qpair failed and we were unable to recover it. 00:38:31.689 [2024-10-01 15:56:10.951005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.689 [2024-10-01 15:56:10.951034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.689 qpair failed and we were unable to recover it. 00:38:31.689 [2024-10-01 15:56:10.951417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.689 [2024-10-01 15:56:10.951448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.689 qpair failed and we were unable to recover it. 00:38:31.689 [2024-10-01 15:56:10.951816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.689 [2024-10-01 15:56:10.951845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.689 qpair failed and we were unable to recover it. 00:38:31.689 [2024-10-01 15:56:10.952235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.689 [2024-10-01 15:56:10.952265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.689 qpair failed and we were unable to recover it. 
00:38:31.689 [2024-10-01 15:56:10.952493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.689 [2024-10-01 15:56:10.952523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.689 qpair failed and we were unable to recover it. 00:38:31.689 [2024-10-01 15:56:10.952916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.689 [2024-10-01 15:56:10.952951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.689 qpair failed and we were unable to recover it. 00:38:31.689 [2024-10-01 15:56:10.953310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.689 [2024-10-01 15:56:10.953338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.689 qpair failed and we were unable to recover it. 00:38:31.689 [2024-10-01 15:56:10.953742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.689 [2024-10-01 15:56:10.953770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.689 qpair failed and we were unable to recover it. 00:38:31.689 [2024-10-01 15:56:10.954040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.689 [2024-10-01 15:56:10.954069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.689 qpair failed and we were unable to recover it. 
00:38:31.689 [2024-10-01 15:56:10.954395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.689 [2024-10-01 15:56:10.954423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.689 qpair failed and we were unable to recover it. 00:38:31.689 [2024-10-01 15:56:10.954789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.689 [2024-10-01 15:56:10.954817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.689 qpair failed and we were unable to recover it. 00:38:31.689 [2024-10-01 15:56:10.955199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.689 [2024-10-01 15:56:10.955228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.689 qpair failed and we were unable to recover it. 00:38:31.689 [2024-10-01 15:56:10.955609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.689 [2024-10-01 15:56:10.955637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.689 qpair failed and we were unable to recover it. 00:38:31.689 [2024-10-01 15:56:10.956001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.689 [2024-10-01 15:56:10.956036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.689 qpair failed and we were unable to recover it. 
00:38:31.689 [2024-10-01 15:56:10.956427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.689 [2024-10-01 15:56:10.956457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.689 qpair failed and we were unable to recover it. 00:38:31.689 [2024-10-01 15:56:10.956826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.690 [2024-10-01 15:56:10.956855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.690 qpair failed and we were unable to recover it. 00:38:31.690 [2024-10-01 15:56:10.957228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.690 [2024-10-01 15:56:10.957258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.690 qpair failed and we were unable to recover it. 00:38:31.690 [2024-10-01 15:56:10.957495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.690 [2024-10-01 15:56:10.957522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.690 qpair failed and we were unable to recover it. 00:38:31.690 [2024-10-01 15:56:10.957917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.690 [2024-10-01 15:56:10.957947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.690 qpair failed and we were unable to recover it. 
00:38:31.690 [2024-10-01 15:56:10.958320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.690 [2024-10-01 15:56:10.958350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.690 qpair failed and we were unable to recover it. 00:38:31.690 [2024-10-01 15:56:10.958704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.690 [2024-10-01 15:56:10.958733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.690 qpair failed and we were unable to recover it. 00:38:31.690 [2024-10-01 15:56:10.959061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.690 [2024-10-01 15:56:10.959091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.690 qpair failed and we were unable to recover it. 00:38:31.690 [2024-10-01 15:56:10.959467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.690 [2024-10-01 15:56:10.959495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.690 qpair failed and we were unable to recover it. 00:38:31.690 [2024-10-01 15:56:10.959937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.690 [2024-10-01 15:56:10.959967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.690 qpair failed and we were unable to recover it. 
00:38:31.690 [2024-10-01 15:56:10.960342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.690 [2024-10-01 15:56:10.960371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.690 qpair failed and we were unable to recover it. 00:38:31.690 [2024-10-01 15:56:10.960630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.690 [2024-10-01 15:56:10.960663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.690 qpair failed and we were unable to recover it. 00:38:31.690 [2024-10-01 15:56:10.961015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.690 [2024-10-01 15:56:10.961047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.690 qpair failed and we were unable to recover it. 00:38:31.690 [2024-10-01 15:56:10.961301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.690 [2024-10-01 15:56:10.961334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.690 qpair failed and we were unable to recover it. 00:38:31.690 [2024-10-01 15:56:10.961735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.690 [2024-10-01 15:56:10.961766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.690 qpair failed and we were unable to recover it. 
00:38:31.690 [2024-10-01 15:56:10.962161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.690 [2024-10-01 15:56:10.962194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.690 qpair failed and we were unable to recover it. 00:38:31.690 [2024-10-01 15:56:10.962566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.690 [2024-10-01 15:56:10.962595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.690 qpair failed and we were unable to recover it. 00:38:31.690 [2024-10-01 15:56:10.962815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.690 [2024-10-01 15:56:10.962846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.690 qpair failed and we were unable to recover it. 00:38:31.690 [2024-10-01 15:56:10.963226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.690 [2024-10-01 15:56:10.963263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.690 qpair failed and we were unable to recover it. 00:38:31.690 [2024-10-01 15:56:10.963607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.690 [2024-10-01 15:56:10.963646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.690 qpair failed and we were unable to recover it. 
00:38:31.690 [2024-10-01 15:56:10.963876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.690 [2024-10-01 15:56:10.963921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.690 qpair failed and we were unable to recover it. 00:38:31.690 [2024-10-01 15:56:10.964296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.690 [2024-10-01 15:56:10.964326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.690 qpair failed and we were unable to recover it. 00:38:31.690 [2024-10-01 15:56:10.964679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.690 [2024-10-01 15:56:10.964706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.690 qpair failed and we were unable to recover it. 00:38:31.690 [2024-10-01 15:56:10.964965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.690 [2024-10-01 15:56:10.964998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.690 qpair failed and we were unable to recover it. 00:38:31.690 [2024-10-01 15:56:10.965365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.690 [2024-10-01 15:56:10.965396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.690 qpair failed and we were unable to recover it. 
00:38:31.690 [2024-10-01 15:56:10.965751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.690 [2024-10-01 15:56:10.965779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.690 qpair failed and we were unable to recover it. 00:38:31.690 [2024-10-01 15:56:10.966078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.690 [2024-10-01 15:56:10.966108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.690 qpair failed and we were unable to recover it. 00:38:31.690 [2024-10-01 15:56:10.966477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.690 [2024-10-01 15:56:10.966506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.690 qpair failed and we were unable to recover it. 00:38:31.690 [2024-10-01 15:56:10.966774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.690 [2024-10-01 15:56:10.966803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.690 qpair failed and we were unable to recover it. 00:38:31.690 [2024-10-01 15:56:10.967145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.690 [2024-10-01 15:56:10.967173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.690 qpair failed and we were unable to recover it. 
00:38:31.690 [2024-10-01 15:56:10.967557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.690 [2024-10-01 15:56:10.967587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.690 qpair failed and we were unable to recover it. 00:38:31.690 [2024-10-01 15:56:10.967943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.690 [2024-10-01 15:56:10.967974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.690 qpair failed and we were unable to recover it. 00:38:31.690 [2024-10-01 15:56:10.968371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.690 [2024-10-01 15:56:10.968404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.690 qpair failed and we were unable to recover it. 00:38:31.690 [2024-10-01 15:56:10.968761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.690 [2024-10-01 15:56:10.968788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.690 qpair failed and we were unable to recover it. 00:38:31.690 [2024-10-01 15:56:10.969195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.690 [2024-10-01 15:56:10.969226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.690 qpair failed and we were unable to recover it. 
00:38:31.690 [2024-10-01 15:56:10.969620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.690 [2024-10-01 15:56:10.969649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.690 qpair failed and we were unable to recover it. 00:38:31.690 [2024-10-01 15:56:10.970014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.690 [2024-10-01 15:56:10.970046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.690 qpair failed and we were unable to recover it. 00:38:31.690 [2024-10-01 15:56:10.970386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.690 [2024-10-01 15:56:10.970413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.690 qpair failed and we were unable to recover it. 00:38:31.690 [2024-10-01 15:56:10.970776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.690 [2024-10-01 15:56:10.970806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.690 qpair failed and we were unable to recover it. 00:38:31.690 [2024-10-01 15:56:10.971148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.690 [2024-10-01 15:56:10.971176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.690 qpair failed and we were unable to recover it. 
00:38:31.690 [2024-10-01 15:56:10.971547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.690 [2024-10-01 15:56:10.971576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.690 qpair failed and we were unable to recover it.
[... the same three-line triplet (posix_sock_create connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats ~115 more times, timestamps 15:56:10.971 through 15:56:11.015 ...]
00:38:31.692 [2024-10-01 15:56:11.014949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.692 [2024-10-01 15:56:11.014979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.692 qpair failed and we were unable to recover it.
00:38:31.692 [2024-10-01 15:56:11.015357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.692 [2024-10-01 15:56:11.015387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.692 qpair failed and we were unable to recover it. 00:38:31.692 [2024-10-01 15:56:11.015760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.692 [2024-10-01 15:56:11.015789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.692 qpair failed and we were unable to recover it. 00:38:31.692 [2024-10-01 15:56:11.016056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.692 [2024-10-01 15:56:11.016087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.692 qpair failed and we were unable to recover it. 00:38:31.692 [2024-10-01 15:56:11.016333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.692 [2024-10-01 15:56:11.016364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.692 qpair failed and we were unable to recover it. 00:38:31.692 [2024-10-01 15:56:11.016725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.692 [2024-10-01 15:56:11.016757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.692 qpair failed and we were unable to recover it. 
00:38:31.692 [2024-10-01 15:56:11.017024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.692 [2024-10-01 15:56:11.017056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.692 qpair failed and we were unable to recover it. 00:38:31.692 [2024-10-01 15:56:11.017413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.692 [2024-10-01 15:56:11.017441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.692 qpair failed and we were unable to recover it. 00:38:31.692 [2024-10-01 15:56:11.017780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.692 [2024-10-01 15:56:11.017809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.692 qpair failed and we were unable to recover it. 00:38:31.692 [2024-10-01 15:56:11.017965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.692 [2024-10-01 15:56:11.017996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.692 qpair failed and we were unable to recover it. 00:38:31.692 [2024-10-01 15:56:11.018384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.692 [2024-10-01 15:56:11.018415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.692 qpair failed and we were unable to recover it. 
00:38:31.692 [2024-10-01 15:56:11.018782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.692 [2024-10-01 15:56:11.018811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.692 qpair failed and we were unable to recover it. 00:38:31.692 [2024-10-01 15:56:11.019176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.692 [2024-10-01 15:56:11.019205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.692 qpair failed and we were unable to recover it. 00:38:31.692 [2024-10-01 15:56:11.019570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.692 [2024-10-01 15:56:11.019599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.692 qpair failed and we were unable to recover it. 00:38:31.692 [2024-10-01 15:56:11.019970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.692 [2024-10-01 15:56:11.020002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.692 qpair failed and we were unable to recover it. 00:38:31.692 [2024-10-01 15:56:11.020368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.692 [2024-10-01 15:56:11.020396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.692 qpair failed and we were unable to recover it. 
00:38:31.692 [2024-10-01 15:56:11.020790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.692 [2024-10-01 15:56:11.020819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.692 qpair failed and we were unable to recover it. 00:38:31.692 [2024-10-01 15:56:11.021179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.693 [2024-10-01 15:56:11.021209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.693 qpair failed and we were unable to recover it. 00:38:31.693 [2024-10-01 15:56:11.021454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.693 [2024-10-01 15:56:11.021482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.693 qpair failed and we were unable to recover it. 00:38:31.693 [2024-10-01 15:56:11.021832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.693 [2024-10-01 15:56:11.021859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.693 qpair failed and we were unable to recover it. 00:38:31.693 [2024-10-01 15:56:11.022211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.693 [2024-10-01 15:56:11.022240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.693 qpair failed and we were unable to recover it. 
00:38:31.693 [2024-10-01 15:56:11.022606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.693 [2024-10-01 15:56:11.022634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.693 qpair failed and we were unable to recover it. 00:38:31.693 [2024-10-01 15:56:11.023035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.693 [2024-10-01 15:56:11.023063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.693 qpair failed and we were unable to recover it. 00:38:31.693 [2024-10-01 15:56:11.023430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.693 [2024-10-01 15:56:11.023458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.693 qpair failed and we were unable to recover it. 00:38:31.693 [2024-10-01 15:56:11.023827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.693 [2024-10-01 15:56:11.023861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.693 qpair failed and we were unable to recover it. 00:38:31.693 [2024-10-01 15:56:11.024241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.693 [2024-10-01 15:56:11.024271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.693 qpair failed and we were unable to recover it. 
00:38:31.693 [2024-10-01 15:56:11.024623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.693 [2024-10-01 15:56:11.024651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.693 qpair failed and we were unable to recover it. 00:38:31.693 [2024-10-01 15:56:11.025009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.693 [2024-10-01 15:56:11.025038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.693 qpair failed and we were unable to recover it. 00:38:31.693 [2024-10-01 15:56:11.025311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.693 [2024-10-01 15:56:11.025338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.693 qpair failed and we were unable to recover it. 00:38:31.693 [2024-10-01 15:56:11.025702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.693 [2024-10-01 15:56:11.025729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.693 qpair failed and we were unable to recover it. 00:38:31.693 [2024-10-01 15:56:11.026071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.693 [2024-10-01 15:56:11.026101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.693 qpair failed and we were unable to recover it. 
00:38:31.693 [2024-10-01 15:56:11.026491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.693 [2024-10-01 15:56:11.026519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.693 qpair failed and we were unable to recover it. 00:38:31.693 [2024-10-01 15:56:11.026833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.693 [2024-10-01 15:56:11.026862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.693 qpair failed and we were unable to recover it. 00:38:31.693 [2024-10-01 15:56:11.027275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.693 [2024-10-01 15:56:11.027304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.693 qpair failed and we were unable to recover it. 00:38:31.693 [2024-10-01 15:56:11.027536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.693 [2024-10-01 15:56:11.027563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.693 qpair failed and we were unable to recover it. 00:38:31.693 [2024-10-01 15:56:11.027934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.693 [2024-10-01 15:56:11.027963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.693 qpair failed and we were unable to recover it. 
00:38:31.693 [2024-10-01 15:56:11.028343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.693 [2024-10-01 15:56:11.028371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.693 qpair failed and we were unable to recover it. 00:38:31.693 [2024-10-01 15:56:11.028759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.693 [2024-10-01 15:56:11.028788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.693 qpair failed and we were unable to recover it. 00:38:31.693 [2024-10-01 15:56:11.029148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.693 [2024-10-01 15:56:11.029178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.693 qpair failed and we were unable to recover it. 00:38:31.693 [2024-10-01 15:56:11.029556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.693 [2024-10-01 15:56:11.029585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.693 qpair failed and we were unable to recover it. 00:38:31.693 [2024-10-01 15:56:11.029810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.693 [2024-10-01 15:56:11.029842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.693 qpair failed and we were unable to recover it. 
00:38:31.693 [2024-10-01 15:56:11.030101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.693 [2024-10-01 15:56:11.030133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.693 qpair failed and we were unable to recover it. 00:38:31.693 [2024-10-01 15:56:11.030482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.693 [2024-10-01 15:56:11.030511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.693 qpair failed and we were unable to recover it. 00:38:31.693 [2024-10-01 15:56:11.030912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.693 [2024-10-01 15:56:11.030942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.693 qpair failed and we were unable to recover it. 00:38:31.693 [2024-10-01 15:56:11.031299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.693 [2024-10-01 15:56:11.031326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.693 qpair failed and we were unable to recover it. 00:38:31.693 [2024-10-01 15:56:11.031709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.693 [2024-10-01 15:56:11.031736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.693 qpair failed and we were unable to recover it. 
00:38:31.693 [2024-10-01 15:56:11.031975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.693 [2024-10-01 15:56:11.032003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.693 qpair failed and we were unable to recover it. 00:38:31.693 [2024-10-01 15:56:11.032280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.693 [2024-10-01 15:56:11.032310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.693 qpair failed and we were unable to recover it. 00:38:31.693 [2024-10-01 15:56:11.032666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.693 [2024-10-01 15:56:11.032693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.693 qpair failed and we were unable to recover it. 00:38:31.693 [2024-10-01 15:56:11.033058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.693 [2024-10-01 15:56:11.033088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.693 qpair failed and we were unable to recover it. 00:38:31.693 [2024-10-01 15:56:11.033311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.693 [2024-10-01 15:56:11.033341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.693 qpair failed and we were unable to recover it. 
00:38:31.693 [2024-10-01 15:56:11.033724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.693 [2024-10-01 15:56:11.033752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.693 qpair failed and we were unable to recover it. 00:38:31.693 [2024-10-01 15:56:11.034106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.693 [2024-10-01 15:56:11.034136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.693 qpair failed and we were unable to recover it. 00:38:31.693 [2024-10-01 15:56:11.034398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.693 [2024-10-01 15:56:11.034426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.693 qpair failed and we were unable to recover it. 00:38:31.693 [2024-10-01 15:56:11.034791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.693 [2024-10-01 15:56:11.034820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.693 qpair failed and we were unable to recover it. 00:38:31.693 [2024-10-01 15:56:11.035213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.693 [2024-10-01 15:56:11.035242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.693 qpair failed and we were unable to recover it. 
00:38:31.693 [2024-10-01 15:56:11.035597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.693 [2024-10-01 15:56:11.035624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.693 qpair failed and we were unable to recover it. 00:38:31.693 [2024-10-01 15:56:11.035861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.693 [2024-10-01 15:56:11.035888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.693 qpair failed and we were unable to recover it. 00:38:31.693 [2024-10-01 15:56:11.036273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.693 [2024-10-01 15:56:11.036301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.693 qpair failed and we were unable to recover it. 00:38:31.693 [2024-10-01 15:56:11.036675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.693 [2024-10-01 15:56:11.036702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.693 qpair failed and we were unable to recover it. 00:38:31.693 [2024-10-01 15:56:11.036970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.693 [2024-10-01 15:56:11.036999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.693 qpair failed and we were unable to recover it. 
00:38:31.693 [2024-10-01 15:56:11.037362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.693 [2024-10-01 15:56:11.037390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.693 qpair failed and we were unable to recover it. 00:38:31.693 [2024-10-01 15:56:11.037758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.693 [2024-10-01 15:56:11.037786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.693 qpair failed and we were unable to recover it. 00:38:31.693 [2024-10-01 15:56:11.038164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.693 [2024-10-01 15:56:11.038193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.693 qpair failed and we were unable to recover it. 00:38:31.693 [2024-10-01 15:56:11.038569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.693 [2024-10-01 15:56:11.038603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.693 qpair failed and we were unable to recover it. 00:38:31.693 [2024-10-01 15:56:11.038956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.693 [2024-10-01 15:56:11.038984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.693 qpair failed and we were unable to recover it. 
00:38:31.693 [2024-10-01 15:56:11.039354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.693 [2024-10-01 15:56:11.039383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.693 qpair failed and we were unable to recover it. 00:38:31.693 [2024-10-01 15:56:11.039769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.693 [2024-10-01 15:56:11.039796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.693 qpair failed and we were unable to recover it. 00:38:31.693 [2024-10-01 15:56:11.040172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.693 [2024-10-01 15:56:11.040200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.693 qpair failed and we were unable to recover it. 00:38:31.693 [2024-10-01 15:56:11.040563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.693 [2024-10-01 15:56:11.040590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.693 qpair failed and we were unable to recover it. 00:38:31.694 [2024-10-01 15:56:11.040944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.694 [2024-10-01 15:56:11.040973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.694 qpair failed and we were unable to recover it. 
00:38:31.694 [2024-10-01 15:56:11.041342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.694 [2024-10-01 15:56:11.041370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.694 qpair failed and we were unable to recover it. 00:38:31.694 [2024-10-01 15:56:11.041487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.694 [2024-10-01 15:56:11.041516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.694 qpair failed and we were unable to recover it. 00:38:31.694 [2024-10-01 15:56:11.041772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.694 [2024-10-01 15:56:11.041800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.694 qpair failed and we were unable to recover it. 00:38:31.694 [2024-10-01 15:56:11.042158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.694 [2024-10-01 15:56:11.042187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.694 qpair failed and we were unable to recover it. 00:38:31.694 [2024-10-01 15:56:11.042540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.694 [2024-10-01 15:56:11.042570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.694 qpair failed and we were unable to recover it. 
00:38:31.694 [2024-10-01 15:56:11.042956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.694 [2024-10-01 15:56:11.042984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.694 qpair failed and we were unable to recover it.
00:38:31.694 [2024-10-01 15:56:11.043296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.694 [2024-10-01 15:56:11.043325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.694 qpair failed and we were unable to recover it.
00:38:31.694 [2024-10-01 15:56:11.043700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.694 [2024-10-01 15:56:11.043728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.694 qpair failed and we were unable to recover it.
00:38:31.694 [2024-10-01 15:56:11.043995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.694 [2024-10-01 15:56:11.044023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.694 qpair failed and we were unable to recover it.
00:38:31.694 [2024-10-01 15:56:11.044264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.694 [2024-10-01 15:56:11.044294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.694 qpair failed and we were unable to recover it.
00:38:31.694 [2024-10-01 15:56:11.044650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.694 [2024-10-01 15:56:11.044679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.694 qpair failed and we were unable to recover it.
00:38:31.694 [2024-10-01 15:56:11.045023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.694 [2024-10-01 15:56:11.045053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.694 qpair failed and we were unable to recover it.
00:38:31.694 [2024-10-01 15:56:11.045410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.694 [2024-10-01 15:56:11.045438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.694 qpair failed and we were unable to recover it.
00:38:31.694 [2024-10-01 15:56:11.045825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.694 [2024-10-01 15:56:11.045853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.694 qpair failed and we were unable to recover it.
00:38:31.694 [2024-10-01 15:56:11.046268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.694 [2024-10-01 15:56:11.046298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.694 qpair failed and we were unable to recover it.
00:38:31.694 [2024-10-01 15:56:11.046675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.694 [2024-10-01 15:56:11.046704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.694 qpair failed and we were unable to recover it.
00:38:31.694 [2024-10-01 15:56:11.047040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.694 [2024-10-01 15:56:11.047069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.694 qpair failed and we were unable to recover it.
00:38:31.694 [2024-10-01 15:56:11.047343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.694 [2024-10-01 15:56:11.047371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.694 qpair failed and we were unable to recover it.
00:38:31.694 [2024-10-01 15:56:11.047728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.694 [2024-10-01 15:56:11.047756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.694 qpair failed and we were unable to recover it.
00:38:31.694 [2024-10-01 15:56:11.048107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.694 [2024-10-01 15:56:11.048138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.694 qpair failed and we were unable to recover it.
00:38:31.694 [2024-10-01 15:56:11.048568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.694 [2024-10-01 15:56:11.048596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.694 qpair failed and we were unable to recover it.
00:38:31.694 [2024-10-01 15:56:11.048941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.694 [2024-10-01 15:56:11.048970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.694 qpair failed and we were unable to recover it.
00:38:31.694 [2024-10-01 15:56:11.049320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.694 [2024-10-01 15:56:11.049347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.694 qpair failed and we were unable to recover it.
00:38:31.694 [2024-10-01 15:56:11.049695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.694 [2024-10-01 15:56:11.049723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.694 qpair failed and we were unable to recover it.
00:38:31.694 [2024-10-01 15:56:11.050094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.694 [2024-10-01 15:56:11.050123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.694 qpair failed and we were unable to recover it.
00:38:31.694 [2024-10-01 15:56:11.050493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.694 [2024-10-01 15:56:11.050520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.694 qpair failed and we were unable to recover it.
00:38:31.694 [2024-10-01 15:56:11.050887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.694 [2024-10-01 15:56:11.050926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.694 qpair failed and we were unable to recover it.
00:38:31.694 [2024-10-01 15:56:11.051264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.694 [2024-10-01 15:56:11.051291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.694 qpair failed and we were unable to recover it.
00:38:31.694 [2024-10-01 15:56:11.051637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.694 [2024-10-01 15:56:11.051665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.694 qpair failed and we were unable to recover it.
00:38:31.694 [2024-10-01 15:56:11.052036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.694 [2024-10-01 15:56:11.052065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.694 qpair failed and we were unable to recover it.
00:38:31.694 [2024-10-01 15:56:11.052437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.694 [2024-10-01 15:56:11.052465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.694 qpair failed and we were unable to recover it.
00:38:31.694 [2024-10-01 15:56:11.052843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.694 [2024-10-01 15:56:11.052871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.694 qpair failed and we were unable to recover it.
00:38:31.694 [2024-10-01 15:56:11.053282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.694 [2024-10-01 15:56:11.053312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.694 qpair failed and we were unable to recover it.
00:38:31.694 [2024-10-01 15:56:11.053677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.694 [2024-10-01 15:56:11.053710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.694 qpair failed and we were unable to recover it.
00:38:31.694 [2024-10-01 15:56:11.053963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.694 [2024-10-01 15:56:11.053996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.694 qpair failed and we were unable to recover it.
00:38:31.694 [2024-10-01 15:56:11.054360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.694 [2024-10-01 15:56:11.054389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.694 qpair failed and we were unable to recover it.
00:38:31.694 [2024-10-01 15:56:11.054738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.694 [2024-10-01 15:56:11.054765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.694 qpair failed and we were unable to recover it.
00:38:31.694 [2024-10-01 15:56:11.055117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.694 [2024-10-01 15:56:11.055146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.694 qpair failed and we were unable to recover it.
00:38:31.694 [2024-10-01 15:56:11.055532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.694 [2024-10-01 15:56:11.055560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.694 qpair failed and we were unable to recover it.
00:38:31.694 [2024-10-01 15:56:11.055929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.694 [2024-10-01 15:56:11.055958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.694 qpair failed and we were unable to recover it.
00:38:31.694 [2024-10-01 15:56:11.056309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.694 [2024-10-01 15:56:11.056336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.694 qpair failed and we were unable to recover it.
00:38:31.694 [2024-10-01 15:56:11.056718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.694 [2024-10-01 15:56:11.056745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.694 qpair failed and we were unable to recover it.
00:38:31.694 [2024-10-01 15:56:11.057111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.694 [2024-10-01 15:56:11.057143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.694 qpair failed and we were unable to recover it.
00:38:31.694 [2024-10-01 15:56:11.057504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.694 [2024-10-01 15:56:11.057532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.694 qpair failed and we were unable to recover it.
00:38:31.694 [2024-10-01 15:56:11.057892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.694 [2024-10-01 15:56:11.057933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.694 qpair failed and we were unable to recover it.
00:38:31.694 [2024-10-01 15:56:11.058263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.694 [2024-10-01 15:56:11.058290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.694 qpair failed and we were unable to recover it.
00:38:31.694 [2024-10-01 15:56:11.058667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.694 [2024-10-01 15:56:11.058695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.694 qpair failed and we were unable to recover it.
00:38:31.694 [2024-10-01 15:56:11.059084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.694 [2024-10-01 15:56:11.059118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.694 qpair failed and we were unable to recover it.
00:38:31.694 [2024-10-01 15:56:11.059362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.694 [2024-10-01 15:56:11.059390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.694 qpair failed and we were unable to recover it.
00:38:31.694 [2024-10-01 15:56:11.059737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.694 [2024-10-01 15:56:11.059774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.694 qpair failed and we were unable to recover it.
00:38:31.694 [2024-10-01 15:56:11.060015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.694 [2024-10-01 15:56:11.060043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.694 qpair failed and we were unable to recover it.
00:38:31.694 [2024-10-01 15:56:11.060400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.694 [2024-10-01 15:56:11.060428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.694 qpair failed and we were unable to recover it.
00:38:31.694 [2024-10-01 15:56:11.060790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.694 [2024-10-01 15:56:11.060818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.694 qpair failed and we were unable to recover it.
00:38:31.694 [2024-10-01 15:56:11.061180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.694 [2024-10-01 15:56:11.061208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.694 qpair failed and we were unable to recover it.
00:38:31.694 [2024-10-01 15:56:11.061470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.694 [2024-10-01 15:56:11.061497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.694 qpair failed and we were unable to recover it.
00:38:31.694 [2024-10-01 15:56:11.061862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.694 [2024-10-01 15:56:11.061891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.694 qpair failed and we were unable to recover it.
00:38:31.694 [2024-10-01 15:56:11.062289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.694 [2024-10-01 15:56:11.062318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.694 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.062571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.062599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.062956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.062985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.063352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.063385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.063725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.063753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.063971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.064000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.064324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.064351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.064694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.064722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.065057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.065086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.065451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.065479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.065843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.065871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.066281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.066309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.066668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.066696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.066962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.066991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.067380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.067407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.067773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.067800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.068180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.068209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.068601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.068634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.069027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.069058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.069321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.069349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.069711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.069738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.070165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.070194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.070552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.070579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.070939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.070968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.071329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.071357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.071718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.071746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.072117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.072145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.072512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.072539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.072927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.072956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.073307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.073336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.073572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.073603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.074034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.074064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.074418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.074446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.074851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.074879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.075238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.075266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.075645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.075673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.076032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.076061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.076275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.076305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.076660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.076688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.076959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.076988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.077339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.077368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.077725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.077752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.078103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.078133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.078501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.078528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.078911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.078940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.079296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.079324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.079692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.079719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.080104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.080134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.080528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.080556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.080935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.080964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.081291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.081319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.081686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.081713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.081977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.082005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.082365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.082394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.082776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.082805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.083186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.083215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.083465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.083495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.083849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.083884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.084252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.084281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.084649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.084678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.084948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.084982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.085348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.085376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.085777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.085805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.086134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.695 [2024-10-01 15:56:11.086163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.695 qpair failed and we were unable to recover it.
00:38:31.695 [2024-10-01 15:56:11.086533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.695 [2024-10-01 15:56:11.086561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.695 qpair failed and we were unable to recover it. 00:38:31.695 [2024-10-01 15:56:11.086934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.696 [2024-10-01 15:56:11.086963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.696 qpair failed and we were unable to recover it. 00:38:31.696 [2024-10-01 15:56:11.087325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.696 [2024-10-01 15:56:11.087353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.696 qpair failed and we were unable to recover it. 00:38:31.696 [2024-10-01 15:56:11.087570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.696 [2024-10-01 15:56:11.087598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.696 qpair failed and we were unable to recover it. 00:38:31.696 [2024-10-01 15:56:11.087933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.696 [2024-10-01 15:56:11.087962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.696 qpair failed and we were unable to recover it. 
00:38:31.696 [2024-10-01 15:56:11.088313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.696 [2024-10-01 15:56:11.088341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.696 qpair failed and we were unable to recover it. 00:38:31.696 [2024-10-01 15:56:11.088577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.696 [2024-10-01 15:56:11.088608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.696 qpair failed and we were unable to recover it. 00:38:31.696 [2024-10-01 15:56:11.088974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.696 [2024-10-01 15:56:11.089005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.696 qpair failed and we were unable to recover it. 00:38:31.696 [2024-10-01 15:56:11.089278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.696 [2024-10-01 15:56:11.089307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.696 qpair failed and we were unable to recover it. 00:38:31.696 [2024-10-01 15:56:11.089654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.696 [2024-10-01 15:56:11.089683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.696 qpair failed and we were unable to recover it. 
00:38:31.696 [2024-10-01 15:56:11.090035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.696 [2024-10-01 15:56:11.090064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.696 qpair failed and we were unable to recover it. 00:38:31.696 [2024-10-01 15:56:11.090346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.696 [2024-10-01 15:56:11.090374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.696 qpair failed and we were unable to recover it. 00:38:31.696 [2024-10-01 15:56:11.090782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.696 [2024-10-01 15:56:11.090810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.696 qpair failed and we were unable to recover it. 00:38:31.696 [2024-10-01 15:56:11.090984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.696 [2024-10-01 15:56:11.091012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.696 qpair failed and we were unable to recover it. 00:38:31.696 [2024-10-01 15:56:11.091441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.696 [2024-10-01 15:56:11.091469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.696 qpair failed and we were unable to recover it. 
00:38:31.696 [2024-10-01 15:56:11.091845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.696 [2024-10-01 15:56:11.091874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.696 qpair failed and we were unable to recover it. 00:38:31.696 [2024-10-01 15:56:11.092238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.696 [2024-10-01 15:56:11.092266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.696 qpair failed and we were unable to recover it. 00:38:31.696 [2024-10-01 15:56:11.092659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.696 [2024-10-01 15:56:11.092688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.696 qpair failed and we were unable to recover it. 00:38:31.696 [2024-10-01 15:56:11.093054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.696 [2024-10-01 15:56:11.093083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.696 qpair failed and we were unable to recover it. 00:38:31.696 [2024-10-01 15:56:11.093335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.696 [2024-10-01 15:56:11.093365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.696 qpair failed and we were unable to recover it. 
00:38:31.696 [2024-10-01 15:56:11.093760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.696 [2024-10-01 15:56:11.093789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.696 qpair failed and we were unable to recover it. 00:38:31.696 [2024-10-01 15:56:11.094146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.696 [2024-10-01 15:56:11.094176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.696 qpair failed and we were unable to recover it. 00:38:31.696 [2024-10-01 15:56:11.094544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.696 [2024-10-01 15:56:11.094571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.696 qpair failed and we were unable to recover it. 00:38:31.696 [2024-10-01 15:56:11.094922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.696 [2024-10-01 15:56:11.094950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.696 qpair failed and we were unable to recover it. 00:38:31.696 [2024-10-01 15:56:11.095288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.696 [2024-10-01 15:56:11.095316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.696 qpair failed and we were unable to recover it. 
00:38:31.696 [2024-10-01 15:56:11.095681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.696 [2024-10-01 15:56:11.095708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.696 qpair failed and we were unable to recover it. 00:38:31.696 [2024-10-01 15:56:11.096083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.696 [2024-10-01 15:56:11.096112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.696 qpair failed and we were unable to recover it. 00:38:31.696 [2024-10-01 15:56:11.096483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.696 [2024-10-01 15:56:11.096511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.696 qpair failed and we were unable to recover it. 00:38:31.696 [2024-10-01 15:56:11.096881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.696 [2024-10-01 15:56:11.096921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.696 qpair failed and we were unable to recover it. 00:38:31.696 [2024-10-01 15:56:11.097304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.696 [2024-10-01 15:56:11.097333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.696 qpair failed and we were unable to recover it. 
00:38:31.696 [2024-10-01 15:56:11.097702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.696 [2024-10-01 15:56:11.097730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.696 qpair failed and we were unable to recover it. 00:38:31.696 [2024-10-01 15:56:11.098096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.696 [2024-10-01 15:56:11.098126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.696 qpair failed and we were unable to recover it. 00:38:31.696 [2024-10-01 15:56:11.098502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.696 [2024-10-01 15:56:11.098529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.696 qpair failed and we were unable to recover it. 00:38:31.696 [2024-10-01 15:56:11.098883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.696 [2024-10-01 15:56:11.098928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.696 qpair failed and we were unable to recover it. 00:38:31.696 [2024-10-01 15:56:11.099339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.696 [2024-10-01 15:56:11.099368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.696 qpair failed and we were unable to recover it. 
00:38:31.696 [2024-10-01 15:56:11.099738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.696 [2024-10-01 15:56:11.099767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.696 qpair failed and we were unable to recover it. 00:38:31.696 [2024-10-01 15:56:11.100121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.696 [2024-10-01 15:56:11.100150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.696 qpair failed and we were unable to recover it. 00:38:31.696 [2024-10-01 15:56:11.100517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.696 [2024-10-01 15:56:11.100545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.696 qpair failed and we were unable to recover it. 00:38:31.696 [2024-10-01 15:56:11.100923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.696 [2024-10-01 15:56:11.100951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.696 qpair failed and we were unable to recover it. 00:38:31.696 [2024-10-01 15:56:11.101332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.696 [2024-10-01 15:56:11.101359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.696 qpair failed and we were unable to recover it. 
00:38:31.696 [2024-10-01 15:56:11.101735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.696 [2024-10-01 15:56:11.101763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.696 qpair failed and we were unable to recover it. 00:38:31.696 [2024-10-01 15:56:11.102114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.696 [2024-10-01 15:56:11.102144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.696 qpair failed and we were unable to recover it. 00:38:31.696 [2024-10-01 15:56:11.102495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.696 [2024-10-01 15:56:11.102522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.696 qpair failed and we were unable to recover it. 00:38:31.696 [2024-10-01 15:56:11.102892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.696 [2024-10-01 15:56:11.102930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.696 qpair failed and we were unable to recover it. 00:38:31.696 [2024-10-01 15:56:11.103286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.696 [2024-10-01 15:56:11.103313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.696 qpair failed and we were unable to recover it. 
00:38:31.696 [2024-10-01 15:56:11.103664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.696 [2024-10-01 15:56:11.103691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.696 qpair failed and we were unable to recover it. 00:38:31.696 [2024-10-01 15:56:11.104076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.696 [2024-10-01 15:56:11.104106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.696 qpair failed and we were unable to recover it. 00:38:31.696 [2024-10-01 15:56:11.104500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.696 [2024-10-01 15:56:11.104528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.696 qpair failed and we were unable to recover it. 00:38:31.696 [2024-10-01 15:56:11.104780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.696 [2024-10-01 15:56:11.104807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.696 qpair failed and we were unable to recover it. 00:38:31.696 [2024-10-01 15:56:11.105154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.696 [2024-10-01 15:56:11.105183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.696 qpair failed and we were unable to recover it. 
00:38:31.696 [2024-10-01 15:56:11.105409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.696 [2024-10-01 15:56:11.105438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.696 qpair failed and we were unable to recover it. 00:38:31.696 [2024-10-01 15:56:11.105800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.696 [2024-10-01 15:56:11.105828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.696 qpair failed and we were unable to recover it. 00:38:31.696 [2024-10-01 15:56:11.106186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.696 [2024-10-01 15:56:11.106216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.696 qpair failed and we were unable to recover it. 00:38:31.696 [2024-10-01 15:56:11.106608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.696 [2024-10-01 15:56:11.106635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.696 qpair failed and we were unable to recover it. 00:38:31.696 [2024-10-01 15:56:11.106988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.696 [2024-10-01 15:56:11.107017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.696 qpair failed and we were unable to recover it. 
00:38:31.696 [2024-10-01 15:56:11.107248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.696 [2024-10-01 15:56:11.107278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.696 qpair failed and we were unable to recover it. 00:38:31.696 [2024-10-01 15:56:11.107677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.696 [2024-10-01 15:56:11.107705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.696 qpair failed and we were unable to recover it. 00:38:31.696 [2024-10-01 15:56:11.108049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.696 [2024-10-01 15:56:11.108078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.696 qpair failed and we were unable to recover it. 00:38:31.696 [2024-10-01 15:56:11.108431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.697 [2024-10-01 15:56:11.108460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.697 qpair failed and we were unable to recover it. 00:38:31.697 [2024-10-01 15:56:11.108819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.697 [2024-10-01 15:56:11.108846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.697 qpair failed and we were unable to recover it. 
00:38:31.697 [2024-10-01 15:56:11.109225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.697 [2024-10-01 15:56:11.109255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.697 qpair failed and we were unable to recover it. 00:38:31.697 [2024-10-01 15:56:11.109621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.697 [2024-10-01 15:56:11.109650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.697 qpair failed and we were unable to recover it. 00:38:31.697 [2024-10-01 15:56:11.109971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.697 [2024-10-01 15:56:11.110000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.697 qpair failed and we were unable to recover it. 00:38:31.697 [2024-10-01 15:56:11.110372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.697 [2024-10-01 15:56:11.110400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.697 qpair failed and we were unable to recover it. 00:38:31.697 [2024-10-01 15:56:11.110629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.697 [2024-10-01 15:56:11.110660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.697 qpair failed and we were unable to recover it. 
00:38:31.697 [2024-10-01 15:56:11.111034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.697 [2024-10-01 15:56:11.111063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.697 qpair failed and we were unable to recover it. 00:38:31.697 [2024-10-01 15:56:11.111411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.697 [2024-10-01 15:56:11.111439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.697 qpair failed and we were unable to recover it. 00:38:31.697 [2024-10-01 15:56:11.111787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.697 [2024-10-01 15:56:11.111815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.697 qpair failed and we were unable to recover it. 00:38:31.697 [2024-10-01 15:56:11.112066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.697 [2024-10-01 15:56:11.112094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.697 qpair failed and we were unable to recover it. 00:38:31.697 [2024-10-01 15:56:11.112448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.697 [2024-10-01 15:56:11.112479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.697 qpair failed and we were unable to recover it. 
00:38:31.697 [2024-10-01 15:56:11.112853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.697 [2024-10-01 15:56:11.112881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.697 qpair failed and we were unable to recover it. 00:38:31.697 [2024-10-01 15:56:11.113138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.697 [2024-10-01 15:56:11.113168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.697 qpair failed and we were unable to recover it. 00:38:31.697 [2024-10-01 15:56:11.113542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.697 [2024-10-01 15:56:11.113572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.697 qpair failed and we were unable to recover it. 00:38:31.697 [2024-10-01 15:56:11.113925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.697 [2024-10-01 15:56:11.113960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.697 qpair failed and we were unable to recover it. 00:38:31.697 [2024-10-01 15:56:11.114340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.697 [2024-10-01 15:56:11.114370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.697 qpair failed and we were unable to recover it. 
00:38:31.697 [2024-10-01 15:56:11.114618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.697 [2024-10-01 15:56:11.114646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.697 qpair failed and we were unable to recover it. 00:38:31.697 [2024-10-01 15:56:11.115015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.697 [2024-10-01 15:56:11.115044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.697 qpair failed and we were unable to recover it. 00:38:31.697 [2024-10-01 15:56:11.115400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.697 [2024-10-01 15:56:11.115427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.697 qpair failed and we were unable to recover it. 00:38:31.697 [2024-10-01 15:56:11.115806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.697 [2024-10-01 15:56:11.115834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.697 qpair failed and we were unable to recover it. 00:38:31.697 [2024-10-01 15:56:11.116238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.697 [2024-10-01 15:56:11.116267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.697 qpair failed and we were unable to recover it. 
00:38:31.697 [2024-10-01 15:56:11.116669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.697 [2024-10-01 15:56:11.116698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.697 qpair failed and we were unable to recover it. 00:38:31.697 [2024-10-01 15:56:11.117055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.697 [2024-10-01 15:56:11.117084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.697 qpair failed and we were unable to recover it. 00:38:31.697 [2024-10-01 15:56:11.117497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.697 [2024-10-01 15:56:11.117525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.697 qpair failed and we were unable to recover it. 00:38:31.697 [2024-10-01 15:56:11.117884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.697 [2024-10-01 15:56:11.117925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.697 qpair failed and we were unable to recover it. 00:38:31.697 [2024-10-01 15:56:11.118284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.697 [2024-10-01 15:56:11.118311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.697 qpair failed and we were unable to recover it. 
00:38:31.697 [2024-10-01 15:56:11.118704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.697 [2024-10-01 15:56:11.118733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.697 qpair failed and we were unable to recover it. 00:38:31.697 [2024-10-01 15:56:11.119111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.697 [2024-10-01 15:56:11.119142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.697 qpair failed and we were unable to recover it. 00:38:31.697 [2024-10-01 15:56:11.119511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.697 [2024-10-01 15:56:11.119539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.697 qpair failed and we were unable to recover it. 00:38:31.697 [2024-10-01 15:56:11.119799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.697 [2024-10-01 15:56:11.119827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.697 qpair failed and we were unable to recover it. 00:38:31.697 [2024-10-01 15:56:11.120058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.697 [2024-10-01 15:56:11.120087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.697 qpair failed and we were unable to recover it. 
00:38:31.697 [2024-10-01 15:56:11.120308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.697 [2024-10-01 15:56:11.120338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.697 qpair failed and we were unable to recover it. 00:38:31.697 [2024-10-01 15:56:11.120682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.697 [2024-10-01 15:56:11.120717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.697 qpair failed and we were unable to recover it. 00:38:31.697 [2024-10-01 15:56:11.121105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.697 [2024-10-01 15:56:11.121134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.697 qpair failed and we were unable to recover it. 00:38:31.697 [2024-10-01 15:56:11.121548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.697 [2024-10-01 15:56:11.121577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.697 qpair failed and we were unable to recover it. 00:38:31.697 [2024-10-01 15:56:11.121932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.697 [2024-10-01 15:56:11.121961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.697 qpair failed and we were unable to recover it. 
00:38:31.697 [2024-10-01 15:56:11.122333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.697 [2024-10-01 15:56:11.122362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.697 qpair failed and we were unable to recover it. 00:38:31.697 [2024-10-01 15:56:11.122697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.697 [2024-10-01 15:56:11.122724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.697 qpair failed and we were unable to recover it. 00:38:31.697 [2024-10-01 15:56:11.122968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.697 [2024-10-01 15:56:11.123000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.697 qpair failed and we were unable to recover it. 00:38:31.697 [2024-10-01 15:56:11.123390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.697 [2024-10-01 15:56:11.123418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.697 qpair failed and we were unable to recover it. 00:38:31.697 [2024-10-01 15:56:11.123673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.697 [2024-10-01 15:56:11.123700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.697 qpair failed and we were unable to recover it. 
00:38:31.697 [2024-10-01 15:56:11.124074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.697 [2024-10-01 15:56:11.124104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.697 qpair failed and we were unable to recover it. 00:38:31.697 [2024-10-01 15:56:11.124464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.697 [2024-10-01 15:56:11.124492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.697 qpair failed and we were unable to recover it. 00:38:31.697 [2024-10-01 15:56:11.124863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.697 [2024-10-01 15:56:11.124890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.697 qpair failed and we were unable to recover it. 00:38:31.697 [2024-10-01 15:56:11.125300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.697 [2024-10-01 15:56:11.125328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.697 qpair failed and we were unable to recover it. 00:38:31.697 [2024-10-01 15:56:11.125647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.697 [2024-10-01 15:56:11.125676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.697 qpair failed and we were unable to recover it. 
00:38:31.697 [2024-10-01 15:56:11.126059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.697 [2024-10-01 15:56:11.126088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.697 qpair failed and we were unable to recover it. 00:38:31.697 [2024-10-01 15:56:11.126442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.697 [2024-10-01 15:56:11.126470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.697 qpair failed and we were unable to recover it. 00:38:31.697 [2024-10-01 15:56:11.126830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.697 [2024-10-01 15:56:11.126860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.697 qpair failed and we were unable to recover it. 00:38:31.697 [2024-10-01 15:56:11.127228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.697 [2024-10-01 15:56:11.127258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.697 qpair failed and we were unable to recover it. 00:38:31.697 [2024-10-01 15:56:11.127629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.697 [2024-10-01 15:56:11.127658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.697 qpair failed and we were unable to recover it. 
00:38:31.697 [2024-10-01 15:56:11.128029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.697 [2024-10-01 15:56:11.128058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.697 qpair failed and we were unable to recover it. 00:38:31.697 [2024-10-01 15:56:11.128429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.697 [2024-10-01 15:56:11.128457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.697 qpair failed and we were unable to recover it. 00:38:31.969 [2024-10-01 15:56:11.128828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.969 [2024-10-01 15:56:11.128861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.969 qpair failed and we were unable to recover it. 00:38:31.969 [2024-10-01 15:56:11.129226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.969 [2024-10-01 15:56:11.129262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.969 qpair failed and we were unable to recover it. 00:38:31.969 [2024-10-01 15:56:11.129620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.969 [2024-10-01 15:56:11.129649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.969 qpair failed and we were unable to recover it. 
00:38:31.969 [2024-10-01 15:56:11.130015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.969 [2024-10-01 15:56:11.130044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.969 qpair failed and we were unable to recover it. 00:38:31.969 [2024-10-01 15:56:11.130402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.969 [2024-10-01 15:56:11.130432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.969 qpair failed and we were unable to recover it. 00:38:31.969 [2024-10-01 15:56:11.130805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.969 [2024-10-01 15:56:11.130834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.969 qpair failed and we were unable to recover it. 00:38:31.969 [2024-10-01 15:56:11.131198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.969 [2024-10-01 15:56:11.131228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.969 qpair failed and we were unable to recover it. 00:38:31.969 [2024-10-01 15:56:11.131604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.969 [2024-10-01 15:56:11.131633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.969 qpair failed and we were unable to recover it. 
00:38:31.969 [2024-10-01 15:56:11.131996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.969 [2024-10-01 15:56:11.132027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.969 qpair failed and we were unable to recover it. 00:38:31.969 [2024-10-01 15:56:11.132371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.969 [2024-10-01 15:56:11.132399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.969 qpair failed and we were unable to recover it. 00:38:31.969 [2024-10-01 15:56:11.132799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.969 [2024-10-01 15:56:11.132827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.969 qpair failed and we were unable to recover it. 00:38:31.969 [2024-10-01 15:56:11.133186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.969 [2024-10-01 15:56:11.133216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.969 qpair failed and we were unable to recover it. 00:38:31.969 [2024-10-01 15:56:11.133578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.969 [2024-10-01 15:56:11.133605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.969 qpair failed and we were unable to recover it. 
00:38:31.969 [2024-10-01 15:56:11.133827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.969 [2024-10-01 15:56:11.133854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.969 qpair failed and we were unable to recover it. 00:38:31.969 [2024-10-01 15:56:11.134202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.969 [2024-10-01 15:56:11.134231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.969 qpair failed and we were unable to recover it. 00:38:31.969 [2024-10-01 15:56:11.134616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.969 [2024-10-01 15:56:11.134644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.969 qpair failed and we were unable to recover it. 00:38:31.969 [2024-10-01 15:56:11.135017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.969 [2024-10-01 15:56:11.135047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.969 qpair failed and we were unable to recover it. 00:38:31.969 [2024-10-01 15:56:11.135405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.969 [2024-10-01 15:56:11.135433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.969 qpair failed and we were unable to recover it. 
00:38:31.969 [2024-10-01 15:56:11.135800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.969 [2024-10-01 15:56:11.135828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.969 qpair failed and we were unable to recover it. 00:38:31.969 [2024-10-01 15:56:11.136208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.969 [2024-10-01 15:56:11.136236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.969 qpair failed and we were unable to recover it. 00:38:31.969 [2024-10-01 15:56:11.136595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.969 [2024-10-01 15:56:11.136623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.969 qpair failed and we were unable to recover it. 00:38:31.969 [2024-10-01 15:56:11.136868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.969 [2024-10-01 15:56:11.136908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.969 qpair failed and we were unable to recover it. 00:38:31.969 [2024-10-01 15:56:11.137290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.969 [2024-10-01 15:56:11.137319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.969 qpair failed and we were unable to recover it. 
00:38:31.969 [2024-10-01 15:56:11.137686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.969 [2024-10-01 15:56:11.137714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.969 qpair failed and we were unable to recover it. 00:38:31.969 [2024-10-01 15:56:11.138074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.969 [2024-10-01 15:56:11.138102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.970 qpair failed and we were unable to recover it. 00:38:31.970 [2024-10-01 15:56:11.138472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.970 [2024-10-01 15:56:11.138500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.970 qpair failed and we were unable to recover it. 00:38:31.970 [2024-10-01 15:56:11.138930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.970 [2024-10-01 15:56:11.138959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.970 qpair failed and we were unable to recover it. 00:38:31.970 [2024-10-01 15:56:11.139334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.970 [2024-10-01 15:56:11.139364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.970 qpair failed and we were unable to recover it. 
00:38:31.970 [2024-10-01 15:56:11.139754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.970 [2024-10-01 15:56:11.139782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.970 qpair failed and we were unable to recover it. 00:38:31.970 [2024-10-01 15:56:11.140007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.970 [2024-10-01 15:56:11.140039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.970 qpair failed and we were unable to recover it. 00:38:31.970 [2024-10-01 15:56:11.140468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.970 [2024-10-01 15:56:11.140496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.970 qpair failed and we were unable to recover it. 00:38:31.970 [2024-10-01 15:56:11.140724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.970 [2024-10-01 15:56:11.140753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.970 qpair failed and we were unable to recover it. 00:38:31.970 [2024-10-01 15:56:11.141118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.970 [2024-10-01 15:56:11.141147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.970 qpair failed and we were unable to recover it. 
00:38:31.970 [2024-10-01 15:56:11.141380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.970 [2024-10-01 15:56:11.141410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.970 qpair failed and we were unable to recover it. 00:38:31.970 [2024-10-01 15:56:11.141684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.970 [2024-10-01 15:56:11.141712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.970 qpair failed and we were unable to recover it. 00:38:31.970 [2024-10-01 15:56:11.142087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.970 [2024-10-01 15:56:11.142115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.970 qpair failed and we were unable to recover it. 00:38:31.970 [2024-10-01 15:56:11.142483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.970 [2024-10-01 15:56:11.142511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.970 qpair failed and we were unable to recover it. 00:38:31.970 [2024-10-01 15:56:11.142873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.970 [2024-10-01 15:56:11.142926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.970 qpair failed and we were unable to recover it. 
00:38:31.970 [2024-10-01 15:56:11.143290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.970 [2024-10-01 15:56:11.143317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.970 qpair failed and we were unable to recover it. 00:38:31.970 [2024-10-01 15:56:11.143678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.970 [2024-10-01 15:56:11.143707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.970 qpair failed and we were unable to recover it. 00:38:31.970 [2024-10-01 15:56:11.144052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.970 [2024-10-01 15:56:11.144083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.970 qpair failed and we were unable to recover it. 00:38:31.970 [2024-10-01 15:56:11.144510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.970 [2024-10-01 15:56:11.144544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.970 qpair failed and we were unable to recover it. 00:38:31.970 [2024-10-01 15:56:11.144891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.970 [2024-10-01 15:56:11.144931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.970 qpair failed and we were unable to recover it. 
00:38:31.970 [2024-10-01 15:56:11.145291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.970 [2024-10-01 15:56:11.145319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.970 qpair failed and we were unable to recover it. 00:38:31.970 [2024-10-01 15:56:11.145683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.970 [2024-10-01 15:56:11.145710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.970 qpair failed and we were unable to recover it. 00:38:31.970 [2024-10-01 15:56:11.146190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.970 [2024-10-01 15:56:11.146219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.970 qpair failed and we were unable to recover it. 00:38:31.970 [2024-10-01 15:56:11.146590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.970 [2024-10-01 15:56:11.146617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.970 qpair failed and we were unable to recover it. 00:38:31.970 [2024-10-01 15:56:11.146984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.970 [2024-10-01 15:56:11.147012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.970 qpair failed and we were unable to recover it. 
00:38:31.970 [2024-10-01 15:56:11.147429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.970 [2024-10-01 15:56:11.147456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.970 qpair failed and we were unable to recover it. 00:38:31.970 [2024-10-01 15:56:11.147820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.970 [2024-10-01 15:56:11.147848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.970 qpair failed and we were unable to recover it. 00:38:31.970 [2024-10-01 15:56:11.148238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.970 [2024-10-01 15:56:11.148268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.970 qpair failed and we were unable to recover it. 00:38:31.970 [2024-10-01 15:56:11.148615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.970 [2024-10-01 15:56:11.148642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.970 qpair failed and we were unable to recover it. 00:38:31.970 [2024-10-01 15:56:11.148873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.970 [2024-10-01 15:56:11.148915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.970 qpair failed and we were unable to recover it. 
00:38:31.970 [2024-10-01 15:56:11.149136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.970 [2024-10-01 15:56:11.149168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.970 qpair failed and we were unable to recover it. 00:38:31.970 [2024-10-01 15:56:11.149528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.970 [2024-10-01 15:56:11.149555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.970 qpair failed and we were unable to recover it. 00:38:31.970 [2024-10-01 15:56:11.149818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.970 [2024-10-01 15:56:11.149851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.970 qpair failed and we were unable to recover it. 00:38:31.970 [2024-10-01 15:56:11.150123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.970 [2024-10-01 15:56:11.150152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.970 qpair failed and we were unable to recover it. 00:38:31.970 [2024-10-01 15:56:11.150396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.970 [2024-10-01 15:56:11.150424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.970 qpair failed and we were unable to recover it. 
00:38:31.970 [2024-10-01 15:56:11.150770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.970 [2024-10-01 15:56:11.150799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.970 qpair failed and we were unable to recover it.
00:38:31.970 [2024-10-01 15:56:11.151160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.970 [2024-10-01 15:56:11.151190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.970 qpair failed and we were unable to recover it.
00:38:31.970 [2024-10-01 15:56:11.151435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.970 [2024-10-01 15:56:11.151466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.970 qpair failed and we were unable to recover it.
00:38:31.970 [2024-10-01 15:56:11.151845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.970 [2024-10-01 15:56:11.151874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.970 qpair failed and we were unable to recover it.
00:38:31.970 [2024-10-01 15:56:11.152235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.970 [2024-10-01 15:56:11.152265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.970 qpair failed and we were unable to recover it.
00:38:31.971 [2024-10-01 15:56:11.152593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.971 [2024-10-01 15:56:11.152622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.971 qpair failed and we were unable to recover it.
00:38:31.971 [2024-10-01 15:56:11.152978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.971 [2024-10-01 15:56:11.153007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.971 qpair failed and we were unable to recover it.
00:38:31.971 [2024-10-01 15:56:11.153386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.971 [2024-10-01 15:56:11.153414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.971 qpair failed and we were unable to recover it.
00:38:31.971 [2024-10-01 15:56:11.153872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.971 [2024-10-01 15:56:11.153924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.971 qpair failed and we were unable to recover it.
00:38:31.971 [2024-10-01 15:56:11.154276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.971 [2024-10-01 15:56:11.154306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.971 qpair failed and we were unable to recover it.
00:38:31.971 [2024-10-01 15:56:11.154648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.971 [2024-10-01 15:56:11.154676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.971 qpair failed and we were unable to recover it.
00:38:31.971 [2024-10-01 15:56:11.155054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.971 [2024-10-01 15:56:11.155085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.971 qpair failed and we were unable to recover it.
00:38:31.971 [2024-10-01 15:56:11.155469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.971 [2024-10-01 15:56:11.155498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.971 qpair failed and we were unable to recover it.
00:38:31.971 [2024-10-01 15:56:11.155753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.971 [2024-10-01 15:56:11.155780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.971 qpair failed and we were unable to recover it.
00:38:31.971 [2024-10-01 15:56:11.156045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.971 [2024-10-01 15:56:11.156074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.971 qpair failed and we were unable to recover it.
00:38:31.971 [2024-10-01 15:56:11.156358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.971 [2024-10-01 15:56:11.156386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.971 qpair failed and we were unable to recover it.
00:38:31.971 [2024-10-01 15:56:11.156757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.971 [2024-10-01 15:56:11.156786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.971 qpair failed and we were unable to recover it.
00:38:31.971 [2024-10-01 15:56:11.157123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.971 [2024-10-01 15:56:11.157152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.971 qpair failed and we were unable to recover it.
00:38:31.971 [2024-10-01 15:56:11.157392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.971 [2024-10-01 15:56:11.157419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.971 qpair failed and we were unable to recover it.
00:38:31.971 [2024-10-01 15:56:11.157637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.971 [2024-10-01 15:56:11.157666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.971 qpair failed and we were unable to recover it.
00:38:31.971 [2024-10-01 15:56:11.157913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.971 [2024-10-01 15:56:11.157943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.971 qpair failed and we were unable to recover it.
00:38:31.971 [2024-10-01 15:56:11.158321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.971 [2024-10-01 15:56:11.158349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.971 qpair failed and we were unable to recover it.
00:38:31.971 [2024-10-01 15:56:11.158712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.971 [2024-10-01 15:56:11.158740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.971 qpair failed and we were unable to recover it.
00:38:31.971 [2024-10-01 15:56:11.159127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.971 [2024-10-01 15:56:11.159156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.971 qpair failed and we were unable to recover it.
00:38:31.971 [2024-10-01 15:56:11.159529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.971 [2024-10-01 15:56:11.159557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.971 qpair failed and we were unable to recover it.
00:38:31.971 [2024-10-01 15:56:11.159936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.971 [2024-10-01 15:56:11.159965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.971 qpair failed and we were unable to recover it.
00:38:31.971 [2024-10-01 15:56:11.160332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.971 [2024-10-01 15:56:11.160359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.971 qpair failed and we were unable to recover it.
00:38:31.971 [2024-10-01 15:56:11.160615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.971 [2024-10-01 15:56:11.160642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.971 qpair failed and we were unable to recover it.
00:38:31.971 [2024-10-01 15:56:11.161049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.971 [2024-10-01 15:56:11.161078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.971 qpair failed and we were unable to recover it.
00:38:31.971 [2024-10-01 15:56:11.161432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.971 [2024-10-01 15:56:11.161459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.971 qpair failed and we were unable to recover it.
00:38:31.971 [2024-10-01 15:56:11.161814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.971 [2024-10-01 15:56:11.161841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.971 qpair failed and we were unable to recover it.
00:38:31.971 [2024-10-01 15:56:11.162230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.971 [2024-10-01 15:56:11.162276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.971 qpair failed and we were unable to recover it.
00:38:31.971 [2024-10-01 15:56:11.162730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.971 [2024-10-01 15:56:11.162760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.971 qpair failed and we were unable to recover it.
00:38:31.971 [2024-10-01 15:56:11.163115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.971 [2024-10-01 15:56:11.163145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.971 qpair failed and we were unable to recover it.
00:38:31.971 [2024-10-01 15:56:11.163508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.971 [2024-10-01 15:56:11.163537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.971 qpair failed and we were unable to recover it.
00:38:31.971 [2024-10-01 15:56:11.163884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.971 [2024-10-01 15:56:11.163922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.971 qpair failed and we were unable to recover it.
00:38:31.971 [2024-10-01 15:56:11.164267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.971 [2024-10-01 15:56:11.164294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.971 qpair failed and we were unable to recover it.
00:38:31.971 [2024-10-01 15:56:11.164546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.971 [2024-10-01 15:56:11.164576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.971 qpair failed and we were unable to recover it.
00:38:31.971 [2024-10-01 15:56:11.164944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.971 [2024-10-01 15:56:11.164974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.971 qpair failed and we were unable to recover it.
00:38:31.971 [2024-10-01 15:56:11.165344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.971 [2024-10-01 15:56:11.165377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.971 qpair failed and we were unable to recover it.
00:38:31.971 [2024-10-01 15:56:11.165767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.971 [2024-10-01 15:56:11.165795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.971 qpair failed and we were unable to recover it.
00:38:31.971 [2024-10-01 15:56:11.166213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.971 [2024-10-01 15:56:11.166244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.971 qpair failed and we were unable to recover it.
00:38:31.972 [2024-10-01 15:56:11.166633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.972 [2024-10-01 15:56:11.166661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.972 qpair failed and we were unable to recover it.
00:38:31.972 [2024-10-01 15:56:11.167024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.972 [2024-10-01 15:56:11.167054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.972 qpair failed and we were unable to recover it.
00:38:31.972 [2024-10-01 15:56:11.167298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.972 [2024-10-01 15:56:11.167325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.972 qpair failed and we were unable to recover it.
00:38:31.972 [2024-10-01 15:56:11.167588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.972 [2024-10-01 15:56:11.167620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.972 qpair failed and we were unable to recover it.
00:38:31.972 [2024-10-01 15:56:11.168015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.972 [2024-10-01 15:56:11.168044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.972 qpair failed and we were unable to recover it.
00:38:31.972 [2024-10-01 15:56:11.168404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.972 [2024-10-01 15:56:11.168432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.972 qpair failed and we were unable to recover it.
00:38:31.972 [2024-10-01 15:56:11.168826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.972 [2024-10-01 15:56:11.168855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.972 qpair failed and we were unable to recover it.
00:38:31.972 [2024-10-01 15:56:11.169214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.972 [2024-10-01 15:56:11.169243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.972 qpair failed and we were unable to recover it.
00:38:31.972 [2024-10-01 15:56:11.169611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.972 [2024-10-01 15:56:11.169645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.972 qpair failed and we were unable to recover it.
00:38:31.972 [2024-10-01 15:56:11.169878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.972 [2024-10-01 15:56:11.169916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.972 qpair failed and we were unable to recover it.
00:38:31.972 [2024-10-01 15:56:11.170269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.972 [2024-10-01 15:56:11.170296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.972 qpair failed and we were unable to recover it.
00:38:31.972 [2024-10-01 15:56:11.170666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.972 [2024-10-01 15:56:11.170693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.972 qpair failed and we were unable to recover it.
00:38:31.972 [2024-10-01 15:56:11.170939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.972 [2024-10-01 15:56:11.170970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.972 qpair failed and we were unable to recover it.
00:38:31.972 [2024-10-01 15:56:11.171340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.972 [2024-10-01 15:56:11.171368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.972 qpair failed and we were unable to recover it.
00:38:31.972 [2024-10-01 15:56:11.171723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.972 [2024-10-01 15:56:11.171751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.972 qpair failed and we were unable to recover it.
00:38:31.972 [2024-10-01 15:56:11.172133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.972 [2024-10-01 15:56:11.172162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.972 qpair failed and we were unable to recover it.
00:38:31.972 [2024-10-01 15:56:11.172526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.972 [2024-10-01 15:56:11.172554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.972 qpair failed and we were unable to recover it.
00:38:31.972 [2024-10-01 15:56:11.172930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.972 [2024-10-01 15:56:11.172959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.972 qpair failed and we were unable to recover it.
00:38:31.972 [2024-10-01 15:56:11.173324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.972 [2024-10-01 15:56:11.173351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.972 qpair failed and we were unable to recover it.
00:38:31.972 [2024-10-01 15:56:11.173714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.972 [2024-10-01 15:56:11.173742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.972 qpair failed and we were unable to recover it.
00:38:31.972 [2024-10-01 15:56:11.174114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.972 [2024-10-01 15:56:11.174143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.972 qpair failed and we were unable to recover it.
00:38:31.972 [2024-10-01 15:56:11.174388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.972 [2024-10-01 15:56:11.174418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.972 qpair failed and we were unable to recover it.
00:38:31.972 [2024-10-01 15:56:11.174799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.972 [2024-10-01 15:56:11.174827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.972 qpair failed and we were unable to recover it.
00:38:31.972 [2024-10-01 15:56:11.175201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.972 [2024-10-01 15:56:11.175231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.972 qpair failed and we were unable to recover it.
00:38:31.972 [2024-10-01 15:56:11.175597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.972 [2024-10-01 15:56:11.175625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.972 qpair failed and we were unable to recover it.
00:38:31.972 [2024-10-01 15:56:11.175985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.972 [2024-10-01 15:56:11.176014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.972 qpair failed and we were unable to recover it.
00:38:31.972 [2024-10-01 15:56:11.176382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.972 [2024-10-01 15:56:11.176410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.972 qpair failed and we were unable to recover it.
00:38:31.972 [2024-10-01 15:56:11.176779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.972 [2024-10-01 15:56:11.176806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.972 qpair failed and we were unable to recover it.
00:38:31.972 [2024-10-01 15:56:11.177177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.972 [2024-10-01 15:56:11.177206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.972 qpair failed and we were unable to recover it.
00:38:31.972 [2024-10-01 15:56:11.177569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.972 [2024-10-01 15:56:11.177598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.972 qpair failed and we were unable to recover it.
00:38:31.972 [2024-10-01 15:56:11.177997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.972 [2024-10-01 15:56:11.178026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.972 qpair failed and we were unable to recover it.
00:38:31.972 [2024-10-01 15:56:11.178368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.972 [2024-10-01 15:56:11.178396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.972 qpair failed and we were unable to recover it.
00:38:31.972 [2024-10-01 15:56:11.178770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.972 [2024-10-01 15:56:11.178797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.972 qpair failed and we were unable to recover it.
00:38:31.972 [2024-10-01 15:56:11.179152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.972 [2024-10-01 15:56:11.179182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.972 qpair failed and we were unable to recover it.
00:38:31.972 [2024-10-01 15:56:11.179534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.972 [2024-10-01 15:56:11.179562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.972 qpair failed and we were unable to recover it.
00:38:31.972 [2024-10-01 15:56:11.179933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.972 [2024-10-01 15:56:11.179962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.972 qpair failed and we were unable to recover it.
00:38:31.972 [2024-10-01 15:56:11.180334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.972 [2024-10-01 15:56:11.180362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.972 qpair failed and we were unable to recover it.
00:38:31.973 [2024-10-01 15:56:11.180728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.973 [2024-10-01 15:56:11.180756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.973 qpair failed and we were unable to recover it.
00:38:31.973 [2024-10-01 15:56:11.180983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.973 [2024-10-01 15:56:11.181010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.973 qpair failed and we were unable to recover it.
00:38:31.973 [2024-10-01 15:56:11.181375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.973 [2024-10-01 15:56:11.181403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.973 qpair failed and we were unable to recover it.
00:38:31.973 [2024-10-01 15:56:11.181772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.973 [2024-10-01 15:56:11.181800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.973 qpair failed and we were unable to recover it.
00:38:31.973 [2024-10-01 15:56:11.182171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.973 [2024-10-01 15:56:11.182199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.973 qpair failed and we were unable to recover it.
00:38:31.973 [2024-10-01 15:56:11.182427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.973 [2024-10-01 15:56:11.182458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.973 qpair failed and we were unable to recover it.
00:38:31.973 [2024-10-01 15:56:11.182811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.973 [2024-10-01 15:56:11.182839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.973 qpair failed and we were unable to recover it.
00:38:31.973 [2024-10-01 15:56:11.183199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.973 [2024-10-01 15:56:11.183227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.973 qpair failed and we were unable to recover it.
00:38:31.973 [2024-10-01 15:56:11.183590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.973 [2024-10-01 15:56:11.183617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.973 qpair failed and we were unable to recover it.
00:38:31.973 [2024-10-01 15:56:11.183985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.973 [2024-10-01 15:56:11.184014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.973 qpair failed and we were unable to recover it.
00:38:31.973 [2024-10-01 15:56:11.184364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.973 [2024-10-01 15:56:11.184391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.973 qpair failed and we were unable to recover it.
00:38:31.973 [2024-10-01 15:56:11.184719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.973 [2024-10-01 15:56:11.184754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.973 qpair failed and we were unable to recover it.
00:38:31.973 [2024-10-01 15:56:11.185113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.973 [2024-10-01 15:56:11.185142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.973 qpair failed and we were unable to recover it.
00:38:31.973 [2024-10-01 15:56:11.185498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.973 [2024-10-01 15:56:11.185527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.973 qpair failed and we were unable to recover it.
00:38:31.973 [2024-10-01 15:56:11.185876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.973 [2024-10-01 15:56:11.185914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.973 qpair failed and we were unable to recover it.
00:38:31.973 [2024-10-01 15:56:11.186258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.973 [2024-10-01 15:56:11.186286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.973 qpair failed and we were unable to recover it.
00:38:31.973 [2024-10-01 15:56:11.186651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.973 [2024-10-01 15:56:11.186679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.973 qpair failed and we were unable to recover it.
00:38:31.973 [2024-10-01 15:56:11.187046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.973 [2024-10-01 15:56:11.187076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.973 qpair failed and we were unable to recover it.
00:38:31.973 [2024-10-01 15:56:11.187450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.973 [2024-10-01 15:56:11.187478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.973 qpair failed and we were unable to recover it.
00:38:31.973 [2024-10-01 15:56:11.187848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.973 [2024-10-01 15:56:11.187876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.973 qpair failed and we were unable to recover it.
00:38:31.973 [2024-10-01 15:56:11.188251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.973 [2024-10-01 15:56:11.188280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.973 qpair failed and we were unable to recover it.
00:38:31.973 [2024-10-01 15:56:11.188684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.973 [2024-10-01 15:56:11.188712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.973 qpair failed and we were unable to recover it.
00:38:31.973 [2024-10-01 15:56:11.189013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.973 [2024-10-01 15:56:11.189042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.973 qpair failed and we were unable to recover it.
00:38:31.973 [2024-10-01 15:56:11.189431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.973 [2024-10-01 15:56:11.189458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.973 qpair failed and we were unable to recover it.
00:38:31.973 [2024-10-01 15:56:11.189824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.973 [2024-10-01 15:56:11.189852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.973 qpair failed and we were unable to recover it.
00:38:31.973 [2024-10-01 15:56:11.190224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.973 [2024-10-01 15:56:11.190253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.973 qpair failed and we were unable to recover it.
00:38:31.973 [2024-10-01 15:56:11.190606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.973 [2024-10-01 15:56:11.190633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.973 qpair failed and we were unable to recover it.
00:38:31.973 [2024-10-01 15:56:11.190997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.973 [2024-10-01 15:56:11.191027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.973 qpair failed and we were unable to recover it.
00:38:31.973 [2024-10-01 15:56:11.191284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.973 [2024-10-01 15:56:11.191312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.973 qpair failed and we were unable to recover it.
00:38:31.973 [2024-10-01 15:56:11.191666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.973 [2024-10-01 15:56:11.191693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.973 qpair failed and we were unable to recover it.
00:38:31.973 [2024-10-01 15:56:11.191963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.973 [2024-10-01 15:56:11.191991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.973 qpair failed and we were unable to recover it.
00:38:31.973 [2024-10-01 15:56:11.192343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.973 [2024-10-01 15:56:11.192370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.973 qpair failed and we were unable to recover it.
00:38:31.973 [2024-10-01 15:56:11.192739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.973 [2024-10-01 15:56:11.192768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.974 qpair failed and we were unable to recover it.
00:38:31.974 [2024-10-01 15:56:11.193124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.974 [2024-10-01 15:56:11.193153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.974 qpair failed and we were unable to recover it.
00:38:31.974 [2024-10-01 15:56:11.193522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.974 [2024-10-01 15:56:11.193550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.974 qpair failed and we were unable to recover it.
00:38:31.974 [2024-10-01 15:56:11.193918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.974 [2024-10-01 15:56:11.193946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.974 qpair failed and we were unable to recover it.
00:38:31.974 [2024-10-01 15:56:11.194198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.974 [2024-10-01 15:56:11.194229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.974 qpair failed and we were unable to recover it. 00:38:31.974 [2024-10-01 15:56:11.194602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.974 [2024-10-01 15:56:11.194630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.974 qpair failed and we were unable to recover it. 00:38:31.974 [2024-10-01 15:56:11.195001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.974 [2024-10-01 15:56:11.195030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.974 qpair failed and we were unable to recover it. 00:38:31.974 [2024-10-01 15:56:11.195394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.974 [2024-10-01 15:56:11.195422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.974 qpair failed and we were unable to recover it. 00:38:31.974 [2024-10-01 15:56:11.195786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.974 [2024-10-01 15:56:11.195814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.974 qpair failed and we were unable to recover it. 
00:38:31.974 [2024-10-01 15:56:11.196203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.974 [2024-10-01 15:56:11.196233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.974 qpair failed and we were unable to recover it. 00:38:31.974 [2024-10-01 15:56:11.196582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.974 [2024-10-01 15:56:11.196610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.974 qpair failed and we were unable to recover it. 00:38:31.974 [2024-10-01 15:56:11.196825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.974 [2024-10-01 15:56:11.196855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.974 qpair failed and we were unable to recover it. 00:38:31.974 [2024-10-01 15:56:11.197221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.974 [2024-10-01 15:56:11.197251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.974 qpair failed and we were unable to recover it. 00:38:31.974 [2024-10-01 15:56:11.197657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.974 [2024-10-01 15:56:11.197685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.974 qpair failed and we were unable to recover it. 
00:38:31.974 [2024-10-01 15:56:11.198043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.974 [2024-10-01 15:56:11.198072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.974 qpair failed and we were unable to recover it. 00:38:31.974 [2024-10-01 15:56:11.198342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.974 [2024-10-01 15:56:11.198369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.974 qpair failed and we were unable to recover it. 00:38:31.974 [2024-10-01 15:56:11.198598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.974 [2024-10-01 15:56:11.198628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.974 qpair failed and we were unable to recover it. 00:38:31.974 [2024-10-01 15:56:11.198870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.974 [2024-10-01 15:56:11.198925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.974 qpair failed and we were unable to recover it. 00:38:31.974 [2024-10-01 15:56:11.199284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.974 [2024-10-01 15:56:11.199311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.974 qpair failed and we were unable to recover it. 
00:38:31.974 [2024-10-01 15:56:11.199553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.974 [2024-10-01 15:56:11.199586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.974 qpair failed and we were unable to recover it. 00:38:31.974 [2024-10-01 15:56:11.199934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.974 [2024-10-01 15:56:11.199963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.974 qpair failed and we were unable to recover it. 00:38:31.974 [2024-10-01 15:56:11.200327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.974 [2024-10-01 15:56:11.200355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.974 qpair failed and we were unable to recover it. 00:38:31.974 [2024-10-01 15:56:11.200766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.974 [2024-10-01 15:56:11.200794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.974 qpair failed and we were unable to recover it. 00:38:31.974 [2024-10-01 15:56:11.201049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.974 [2024-10-01 15:56:11.201077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.974 qpair failed and we were unable to recover it. 
00:38:31.974 [2024-10-01 15:56:11.201460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.974 [2024-10-01 15:56:11.201487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.974 qpair failed and we were unable to recover it. 00:38:31.974 [2024-10-01 15:56:11.201822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.974 [2024-10-01 15:56:11.201850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.974 qpair failed and we were unable to recover it. 00:38:31.974 [2024-10-01 15:56:11.202212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.974 [2024-10-01 15:56:11.202241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.974 qpair failed and we were unable to recover it. 00:38:31.974 [2024-10-01 15:56:11.202448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.974 [2024-10-01 15:56:11.202478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.974 qpair failed and we were unable to recover it. 00:38:31.974 [2024-10-01 15:56:11.202840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.974 [2024-10-01 15:56:11.202868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.974 qpair failed and we were unable to recover it. 
00:38:31.974 [2024-10-01 15:56:11.203240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.974 [2024-10-01 15:56:11.203270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.974 qpair failed and we were unable to recover it. 00:38:31.974 [2024-10-01 15:56:11.203646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.974 [2024-10-01 15:56:11.203674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.974 qpair failed and we were unable to recover it. 00:38:31.974 [2024-10-01 15:56:11.203915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.974 [2024-10-01 15:56:11.203945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.974 qpair failed and we were unable to recover it. 00:38:31.974 [2024-10-01 15:56:11.204285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.974 [2024-10-01 15:56:11.204313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.974 qpair failed and we were unable to recover it. 00:38:31.974 [2024-10-01 15:56:11.204679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.974 [2024-10-01 15:56:11.204708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.974 qpair failed and we were unable to recover it. 
00:38:31.974 [2024-10-01 15:56:11.205078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.974 [2024-10-01 15:56:11.205107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.974 qpair failed and we were unable to recover it. 00:38:31.974 [2024-10-01 15:56:11.205479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.974 [2024-10-01 15:56:11.205506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.974 qpair failed and we were unable to recover it. 00:38:31.974 [2024-10-01 15:56:11.205764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.974 [2024-10-01 15:56:11.205791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.974 qpair failed and we were unable to recover it. 00:38:31.974 [2024-10-01 15:56:11.206150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.974 [2024-10-01 15:56:11.206179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.975 qpair failed and we were unable to recover it. 00:38:31.975 [2024-10-01 15:56:11.206554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.975 [2024-10-01 15:56:11.206582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.975 qpair failed and we were unable to recover it. 
00:38:31.975 [2024-10-01 15:56:11.206941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.975 [2024-10-01 15:56:11.206969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.975 qpair failed and we were unable to recover it. 00:38:31.975 [2024-10-01 15:56:11.207322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.975 [2024-10-01 15:56:11.207350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.975 qpair failed and we were unable to recover it. 00:38:31.975 [2024-10-01 15:56:11.207720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.975 [2024-10-01 15:56:11.207748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.975 qpair failed and we were unable to recover it. 00:38:31.975 [2024-10-01 15:56:11.207963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.975 [2024-10-01 15:56:11.207996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.975 qpair failed and we were unable to recover it. 00:38:31.975 [2024-10-01 15:56:11.208356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.975 [2024-10-01 15:56:11.208385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.975 qpair failed and we were unable to recover it. 
00:38:31.975 [2024-10-01 15:56:11.208719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.975 [2024-10-01 15:56:11.208746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.975 qpair failed and we were unable to recover it. 00:38:31.975 [2024-10-01 15:56:11.209128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.975 [2024-10-01 15:56:11.209158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.975 qpair failed and we were unable to recover it. 00:38:31.975 [2024-10-01 15:56:11.209532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.975 [2024-10-01 15:56:11.209561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.975 qpair failed and we were unable to recover it. 00:38:31.975 [2024-10-01 15:56:11.209917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.975 [2024-10-01 15:56:11.209947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.975 qpair failed and we were unable to recover it. 00:38:31.975 [2024-10-01 15:56:11.210330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.975 [2024-10-01 15:56:11.210357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.975 qpair failed and we were unable to recover it. 
00:38:31.975 [2024-10-01 15:56:11.210721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.975 [2024-10-01 15:56:11.210751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.975 qpair failed and we were unable to recover it. 00:38:31.975 [2024-10-01 15:56:11.211122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.975 [2024-10-01 15:56:11.211151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.975 qpair failed and we were unable to recover it. 00:38:31.975 [2024-10-01 15:56:11.211537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.975 [2024-10-01 15:56:11.211564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.975 qpair failed and we were unable to recover it. 00:38:31.975 [2024-10-01 15:56:11.211931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.975 [2024-10-01 15:56:11.211960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.975 qpair failed and we were unable to recover it. 00:38:31.975 [2024-10-01 15:56:11.212306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.975 [2024-10-01 15:56:11.212349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.975 qpair failed and we were unable to recover it. 
00:38:31.975 [2024-10-01 15:56:11.212630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.975 [2024-10-01 15:56:11.212659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.975 qpair failed and we were unable to recover it. 00:38:31.975 [2024-10-01 15:56:11.213003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.975 [2024-10-01 15:56:11.213033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.975 qpair failed and we were unable to recover it. 00:38:31.975 [2024-10-01 15:56:11.213287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.975 [2024-10-01 15:56:11.213314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.975 qpair failed and we were unable to recover it. 00:38:31.975 [2024-10-01 15:56:11.213695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.975 [2024-10-01 15:56:11.213724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.975 qpair failed and we were unable to recover it. 00:38:31.975 [2024-10-01 15:56:11.213993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.975 [2024-10-01 15:56:11.214023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.975 qpair failed and we were unable to recover it. 
00:38:31.975 [2024-10-01 15:56:11.214400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.975 [2024-10-01 15:56:11.214439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.975 qpair failed and we were unable to recover it. 00:38:31.975 [2024-10-01 15:56:11.214793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.975 [2024-10-01 15:56:11.214820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.975 qpair failed and we were unable to recover it. 00:38:31.975 [2024-10-01 15:56:11.215175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.975 [2024-10-01 15:56:11.215204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.975 qpair failed and we were unable to recover it. 00:38:31.975 [2024-10-01 15:56:11.215440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.975 [2024-10-01 15:56:11.215471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.975 qpair failed and we were unable to recover it. 00:38:31.975 [2024-10-01 15:56:11.215832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.975 [2024-10-01 15:56:11.215860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.975 qpair failed and we were unable to recover it. 
00:38:31.975 [2024-10-01 15:56:11.216228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.975 [2024-10-01 15:56:11.216259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.975 qpair failed and we were unable to recover it. 00:38:31.975 [2024-10-01 15:56:11.216627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.975 [2024-10-01 15:56:11.216655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.975 qpair failed and we were unable to recover it. 00:38:31.975 [2024-10-01 15:56:11.217046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.975 [2024-10-01 15:56:11.217077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.975 qpair failed and we were unable to recover it. 00:38:31.975 [2024-10-01 15:56:11.217428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.975 [2024-10-01 15:56:11.217456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.975 qpair failed and we were unable to recover it. 00:38:31.975 [2024-10-01 15:56:11.217831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.975 [2024-10-01 15:56:11.217861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.975 qpair failed and we were unable to recover it. 
00:38:31.975 [2024-10-01 15:56:11.218265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.975 [2024-10-01 15:56:11.218295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.975 qpair failed and we were unable to recover it. 00:38:31.975 [2024-10-01 15:56:11.218716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.975 [2024-10-01 15:56:11.218745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.975 qpair failed and we were unable to recover it. 00:38:31.975 [2024-10-01 15:56:11.219073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.975 [2024-10-01 15:56:11.219102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.975 qpair failed and we were unable to recover it. 00:38:31.975 [2024-10-01 15:56:11.219493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.975 [2024-10-01 15:56:11.219521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.975 qpair failed and we were unable to recover it. 00:38:31.975 [2024-10-01 15:56:11.219883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.975 [2024-10-01 15:56:11.219927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.975 qpair failed and we were unable to recover it. 
00:38:31.975 [2024-10-01 15:56:11.220293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.975 [2024-10-01 15:56:11.220320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.975 qpair failed and we were unable to recover it. 00:38:31.976 [2024-10-01 15:56:11.220643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.976 [2024-10-01 15:56:11.220671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.976 qpair failed and we were unable to recover it. 00:38:31.976 [2024-10-01 15:56:11.221034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.976 [2024-10-01 15:56:11.221066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.976 qpair failed and we were unable to recover it. 00:38:31.976 [2024-10-01 15:56:11.221433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.976 [2024-10-01 15:56:11.221461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.976 qpair failed and we were unable to recover it. 00:38:31.976 [2024-10-01 15:56:11.221852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.976 [2024-10-01 15:56:11.221880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.976 qpair failed and we were unable to recover it. 
00:38:31.976 [2024-10-01 15:56:11.222240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.976 [2024-10-01 15:56:11.222269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.976 qpair failed and we were unable to recover it. 00:38:31.976 [2024-10-01 15:56:11.222622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.976 [2024-10-01 15:56:11.222651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.976 qpair failed and we were unable to recover it. 00:38:31.976 [2024-10-01 15:56:11.223047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.976 [2024-10-01 15:56:11.223078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.976 qpair failed and we were unable to recover it. 00:38:31.976 [2024-10-01 15:56:11.223423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.976 [2024-10-01 15:56:11.223453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.976 qpair failed and we were unable to recover it. 00:38:31.976 [2024-10-01 15:56:11.223809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.976 [2024-10-01 15:56:11.223837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.976 qpair failed and we were unable to recover it. 
00:38:31.976 [2024-10-01 15:56:11.224295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.976 [2024-10-01 15:56:11.224324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.976 qpair failed and we were unable to recover it. 00:38:31.976 [2024-10-01 15:56:11.224681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.976 [2024-10-01 15:56:11.224712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.976 qpair failed and we were unable to recover it. 00:38:31.976 [2024-10-01 15:56:11.225057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.976 [2024-10-01 15:56:11.225087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.976 qpair failed and we were unable to recover it. 00:38:31.976 [2024-10-01 15:56:11.225437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.976 [2024-10-01 15:56:11.225465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.976 qpair failed and we were unable to recover it. 00:38:31.976 [2024-10-01 15:56:11.225850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.976 [2024-10-01 15:56:11.225879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420 00:38:31.976 qpair failed and we were unable to recover it. 
00:38:31.976 [2024-10-01 15:56:11.226254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.976 [2024-10-01 15:56:11.226282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.976 qpair failed and we were unable to recover it.
00:38:31.976 [2024-10-01 15:56:11.226674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.976 [2024-10-01 15:56:11.226703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.976 qpair failed and we were unable to recover it.
00:38:31.976 [2024-10-01 15:56:11.227073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.976 [2024-10-01 15:56:11.227102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.976 qpair failed and we were unable to recover it.
00:38:31.976 [2024-10-01 15:56:11.227481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.976 [2024-10-01 15:56:11.227509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.976 qpair failed and we were unable to recover it.
00:38:31.976 [2024-10-01 15:56:11.227776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.976 [2024-10-01 15:56:11.227805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.976 qpair failed and we were unable to recover it.
00:38:31.976 [2024-10-01 15:56:11.228156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.976 [2024-10-01 15:56:11.228185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.976 qpair failed and we were unable to recover it.
00:38:31.976 [2024-10-01 15:56:11.228417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.976 [2024-10-01 15:56:11.228447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.976 qpair failed and we were unable to recover it.
00:38:31.976 [2024-10-01 15:56:11.228809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.976 [2024-10-01 15:56:11.228836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.976 qpair failed and we were unable to recover it.
00:38:31.976 [2024-10-01 15:56:11.229252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.976 [2024-10-01 15:56:11.229283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.976 qpair failed and we were unable to recover it.
00:38:31.976 [2024-10-01 15:56:11.229625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.976 [2024-10-01 15:56:11.229653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.976 qpair failed and we were unable to recover it.
00:38:31.976 [2024-10-01 15:56:11.230022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.976 [2024-10-01 15:56:11.230058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.976 qpair failed and we were unable to recover it.
00:38:31.976 [2024-10-01 15:56:11.230392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.976 [2024-10-01 15:56:11.230422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.976 qpair failed and we were unable to recover it.
00:38:31.976 [2024-10-01 15:56:11.230791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.976 [2024-10-01 15:56:11.230820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.976 qpair failed and we were unable to recover it.
00:38:31.976 [2024-10-01 15:56:11.231195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.976 [2024-10-01 15:56:11.231224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.976 qpair failed and we were unable to recover it.
00:38:31.976 [2024-10-01 15:56:11.231488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.976 [2024-10-01 15:56:11.231518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.976 qpair failed and we were unable to recover it.
00:38:31.976 [2024-10-01 15:56:11.231747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.976 [2024-10-01 15:56:11.231775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.976 qpair failed and we were unable to recover it.
00:38:31.976 [2024-10-01 15:56:11.232108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.976 [2024-10-01 15:56:11.232137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.976 qpair failed and we were unable to recover it.
00:38:31.976 [2024-10-01 15:56:11.232501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.976 [2024-10-01 15:56:11.232528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.976 qpair failed and we were unable to recover it.
00:38:31.976 [2024-10-01 15:56:11.232887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.976 [2024-10-01 15:56:11.232926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.976 qpair failed and we were unable to recover it.
00:38:31.976 [2024-10-01 15:56:11.233325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.976 [2024-10-01 15:56:11.233354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.976 qpair failed and we were unable to recover it.
00:38:31.976 [2024-10-01 15:56:11.233597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.976 [2024-10-01 15:56:11.233626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.976 qpair failed and we were unable to recover it.
00:38:31.976 [2024-10-01 15:56:11.233968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.976 [2024-10-01 15:56:11.234000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.976 qpair failed and we were unable to recover it.
00:38:31.976 [2024-10-01 15:56:11.234357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.976 [2024-10-01 15:56:11.234386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.976 qpair failed and we were unable to recover it.
00:38:31.977 [2024-10-01 15:56:11.234619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.977 [2024-10-01 15:56:11.234650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.977 qpair failed and we were unable to recover it.
00:38:31.977 [2024-10-01 15:56:11.235012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.977 [2024-10-01 15:56:11.235042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.977 qpair failed and we were unable to recover it.
00:38:31.977 [2024-10-01 15:56:11.235433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.977 [2024-10-01 15:56:11.235462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.977 qpair failed and we were unable to recover it.
00:38:31.977 [2024-10-01 15:56:11.235773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.977 [2024-10-01 15:56:11.235800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.977 qpair failed and we were unable to recover it.
00:38:31.977 [2024-10-01 15:56:11.236162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.977 [2024-10-01 15:56:11.236192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.977 qpair failed and we were unable to recover it.
00:38:31.977 [2024-10-01 15:56:11.236588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.977 [2024-10-01 15:56:11.236618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.977 qpair failed and we were unable to recover it.
00:38:31.977 [2024-10-01 15:56:11.236861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.977 [2024-10-01 15:56:11.236903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.977 qpair failed and we were unable to recover it.
00:38:31.977 [2024-10-01 15:56:11.237238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.977 [2024-10-01 15:56:11.237269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.977 qpair failed and we were unable to recover it.
00:38:31.977 [2024-10-01 15:56:11.237654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.977 [2024-10-01 15:56:11.237684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.977 qpair failed and we were unable to recover it.
00:38:31.977 [2024-10-01 15:56:11.238050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.977 [2024-10-01 15:56:11.238079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.977 qpair failed and we were unable to recover it.
00:38:31.977 [2024-10-01 15:56:11.238446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.977 [2024-10-01 15:56:11.238474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.977 qpair failed and we were unable to recover it.
00:38:31.977 [2024-10-01 15:56:11.238839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.977 [2024-10-01 15:56:11.238867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.977 qpair failed and we were unable to recover it.
00:38:31.977 [2024-10-01 15:56:11.239132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.977 [2024-10-01 15:56:11.239165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.977 qpair failed and we were unable to recover it.
00:38:31.977 [2024-10-01 15:56:11.239434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.977 [2024-10-01 15:56:11.239463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.977 qpair failed and we were unable to recover it.
00:38:31.977 [2024-10-01 15:56:11.239811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.977 [2024-10-01 15:56:11.239839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.977 qpair failed and we were unable to recover it.
00:38:31.977 [2024-10-01 15:56:11.240207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.977 [2024-10-01 15:56:11.240238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.977 qpair failed and we were unable to recover it.
00:38:31.977 [2024-10-01 15:56:11.240489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.977 [2024-10-01 15:56:11.240521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.977 qpair failed and we were unable to recover it.
00:38:31.977 [2024-10-01 15:56:11.240656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.977 [2024-10-01 15:56:11.240687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.977 qpair failed and we were unable to recover it.
00:38:31.977 [2024-10-01 15:56:11.241070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.977 [2024-10-01 15:56:11.241100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.977 qpair failed and we were unable to recover it.
00:38:31.977 [2024-10-01 15:56:11.241355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.977 [2024-10-01 15:56:11.241382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.977 qpair failed and we were unable to recover it.
00:38:31.977 [2024-10-01 15:56:11.241773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.977 [2024-10-01 15:56:11.241801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.977 qpair failed and we were unable to recover it.
00:38:31.977 [2024-10-01 15:56:11.242119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.977 [2024-10-01 15:56:11.242150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.977 qpair failed and we were unable to recover it.
00:38:31.977 [2024-10-01 15:56:11.242518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.977 [2024-10-01 15:56:11.242546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.977 qpair failed and we were unable to recover it.
00:38:31.977 [2024-10-01 15:56:11.242965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.977 [2024-10-01 15:56:11.242995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.977 qpair failed and we were unable to recover it.
00:38:31.977 [2024-10-01 15:56:11.243227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.977 [2024-10-01 15:56:11.243259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.977 qpair failed and we were unable to recover it.
00:38:31.977 [2024-10-01 15:56:11.243630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.977 [2024-10-01 15:56:11.243660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.977 qpair failed and we were unable to recover it.
00:38:31.977 [2024-10-01 15:56:11.244021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.977 [2024-10-01 15:56:11.244051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.977 qpair failed and we were unable to recover it.
00:38:31.977 [2024-10-01 15:56:11.244423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.977 [2024-10-01 15:56:11.244458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.977 qpair failed and we were unable to recover it.
00:38:31.977 [2024-10-01 15:56:11.244695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.977 [2024-10-01 15:56:11.244723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.977 qpair failed and we were unable to recover it.
00:38:31.977 [2024-10-01 15:56:11.245084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.977 [2024-10-01 15:56:11.245114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.977 qpair failed and we were unable to recover it.
00:38:31.977 [2024-10-01 15:56:11.245470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.977 [2024-10-01 15:56:11.245498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.977 qpair failed and we were unable to recover it.
00:38:31.977 [2024-10-01 15:56:11.245877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.977 [2024-10-01 15:56:11.245919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.977 qpair failed and we were unable to recover it.
00:38:31.977 [2024-10-01 15:56:11.246183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.977 [2024-10-01 15:56:11.246215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.977 qpair failed and we were unable to recover it.
00:38:31.977 [2024-10-01 15:56:11.246555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.977 [2024-10-01 15:56:11.246591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.977 qpair failed and we were unable to recover it.
00:38:31.977 [2024-10-01 15:56:11.246955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.977 [2024-10-01 15:56:11.246986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.977 qpair failed and we were unable to recover it.
00:38:31.977 [2024-10-01 15:56:11.247338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.977 [2024-10-01 15:56:11.247368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.977 qpair failed and we were unable to recover it.
00:38:31.977 [2024-10-01 15:56:11.247731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.977 [2024-10-01 15:56:11.247761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.977 qpair failed and we were unable to recover it.
00:38:31.977 [2024-10-01 15:56:11.247998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.978 [2024-10-01 15:56:11.248027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.978 qpair failed and we were unable to recover it.
00:38:31.978 [2024-10-01 15:56:11.248396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.978 [2024-10-01 15:56:11.248426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.978 qpair failed and we were unable to recover it.
00:38:31.978 [2024-10-01 15:56:11.248851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.978 [2024-10-01 15:56:11.248879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.978 qpair failed and we were unable to recover it.
00:38:31.978 [2024-10-01 15:56:11.249248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.978 [2024-10-01 15:56:11.249278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.978 qpair failed and we were unable to recover it.
00:38:31.978 [2024-10-01 15:56:11.249681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.978 [2024-10-01 15:56:11.249710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.978 qpair failed and we were unable to recover it.
00:38:31.978 [2024-10-01 15:56:11.249952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.978 [2024-10-01 15:56:11.249982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.978 qpair failed and we were unable to recover it.
00:38:31.978 [2024-10-01 15:56:11.250416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.978 [2024-10-01 15:56:11.250444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.978 qpair failed and we were unable to recover it.
00:38:31.978 [2024-10-01 15:56:11.250781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.978 [2024-10-01 15:56:11.250810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.978 qpair failed and we were unable to recover it.
00:38:31.978 [2024-10-01 15:56:11.251154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.978 [2024-10-01 15:56:11.251184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.978 qpair failed and we were unable to recover it.
00:38:31.978 [2024-10-01 15:56:11.251544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.978 [2024-10-01 15:56:11.251572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.978 qpair failed and we were unable to recover it.
00:38:31.978 [2024-10-01 15:56:11.251940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.978 [2024-10-01 15:56:11.251969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.978 qpair failed and we were unable to recover it.
00:38:31.978 [2024-10-01 15:56:11.252244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.978 [2024-10-01 15:56:11.252273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.978 qpair failed and we were unable to recover it.
00:38:31.978 [2024-10-01 15:56:11.252659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.978 [2024-10-01 15:56:11.252687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.978 qpair failed and we were unable to recover it.
00:38:31.978 [2024-10-01 15:56:11.253044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.978 [2024-10-01 15:56:11.253075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.978 qpair failed and we were unable to recover it.
00:38:31.978 [2024-10-01 15:56:11.253398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.978 [2024-10-01 15:56:11.253427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.978 qpair failed and we were unable to recover it.
00:38:31.978 [2024-10-01 15:56:11.253838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.978 [2024-10-01 15:56:11.253867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2398000b90 with addr=10.0.0.2, port=4420
00:38:31.978 qpair failed and we were unable to recover it.
00:38:31.978 [2024-10-01 15:56:11.254403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.978 [2024-10-01 15:56:11.254518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.978 qpair failed and we were unable to recover it.
00:38:31.978 [2024-10-01 15:56:11.254923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.978 [2024-10-01 15:56:11.254973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.978 qpair failed and we were unable to recover it.
00:38:31.978 [2024-10-01 15:56:11.255409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.978 [2024-10-01 15:56:11.255446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.978 qpair failed and we were unable to recover it.
00:38:31.978 [2024-10-01 15:56:11.255837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.978 [2024-10-01 15:56:11.255875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.978 qpair failed and we were unable to recover it.
00:38:31.978 [2024-10-01 15:56:11.256409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.978 [2024-10-01 15:56:11.256513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.978 qpair failed and we were unable to recover it.
00:38:31.978 [2024-10-01 15:56:11.257127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.978 [2024-10-01 15:56:11.257231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.978 qpair failed and we were unable to recover it.
00:38:31.978 [2024-10-01 15:56:11.257600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.978 [2024-10-01 15:56:11.257644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.978 qpair failed and we were unable to recover it.
00:38:31.978 [2024-10-01 15:56:11.258148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.978 [2024-10-01 15:56:11.258252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.978 qpair failed and we were unable to recover it.
00:38:31.978 [2024-10-01 15:56:11.258585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.978 [2024-10-01 15:56:11.258629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.978 qpair failed and we were unable to recover it.
00:38:31.978 [2024-10-01 15:56:11.259009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.978 [2024-10-01 15:56:11.259048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.978 qpair failed and we were unable to recover it.
00:38:31.978 [2024-10-01 15:56:11.259465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.978 [2024-10-01 15:56:11.259501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.978 qpair failed and we were unable to recover it.
00:38:31.978 [2024-10-01 15:56:11.259906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.978 [2024-10-01 15:56:11.259947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.978 qpair failed and we were unable to recover it.
00:38:31.978 [2024-10-01 15:56:11.260348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.978 [2024-10-01 15:56:11.260385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.978 qpair failed and we were unable to recover it.
00:38:31.978 [2024-10-01 15:56:11.260760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.978 [2024-10-01 15:56:11.260794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.978 qpair failed and we were unable to recover it.
00:38:31.978 [2024-10-01 15:56:11.261032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.979 [2024-10-01 15:56:11.261071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.979 qpair failed and we were unable to recover it.
00:38:31.979 [2024-10-01 15:56:11.261468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.979 [2024-10-01 15:56:11.261505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.979 qpair failed and we were unable to recover it.
00:38:31.979 [2024-10-01 15:56:11.261911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.979 [2024-10-01 15:56:11.261948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.979 qpair failed and we were unable to recover it.
00:38:31.979 [2024-10-01 15:56:11.262362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.979 [2024-10-01 15:56:11.262397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.979 qpair failed and we were unable to recover it.
00:38:31.979 [2024-10-01 15:56:11.262695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.979 [2024-10-01 15:56:11.262731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.979 qpair failed and we were unable to recover it.
00:38:31.979 [2024-10-01 15:56:11.263147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.979 [2024-10-01 15:56:11.263185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.979 qpair failed and we were unable to recover it. 00:38:31.979 [2024-10-01 15:56:11.263572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.979 [2024-10-01 15:56:11.263609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.979 qpair failed and we were unable to recover it. 00:38:31.979 [2024-10-01 15:56:11.263853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.979 [2024-10-01 15:56:11.263889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.979 qpair failed and we were unable to recover it. 00:38:31.979 [2024-10-01 15:56:11.264179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.979 [2024-10-01 15:56:11.264215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.979 qpair failed and we were unable to recover it. 00:38:31.979 [2024-10-01 15:56:11.264475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.979 [2024-10-01 15:56:11.264513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.979 qpair failed and we were unable to recover it. 
00:38:31.979 [2024-10-01 15:56:11.264788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.979 [2024-10-01 15:56:11.264825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.979 qpair failed and we were unable to recover it. 00:38:31.979 [2024-10-01 15:56:11.265244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.979 [2024-10-01 15:56:11.265279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.979 qpair failed and we were unable to recover it. 00:38:31.979 [2024-10-01 15:56:11.265676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.979 [2024-10-01 15:56:11.265712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.979 qpair failed and we were unable to recover it. 00:38:31.979 [2024-10-01 15:56:11.266116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.979 [2024-10-01 15:56:11.266155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.979 qpair failed and we were unable to recover it. 00:38:31.979 [2024-10-01 15:56:11.266555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.979 [2024-10-01 15:56:11.266592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.979 qpair failed and we were unable to recover it. 
00:38:31.979 [2024-10-01 15:56:11.266854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.979 [2024-10-01 15:56:11.266890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.979 qpair failed and we were unable to recover it. 00:38:31.979 [2024-10-01 15:56:11.267316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.979 [2024-10-01 15:56:11.267358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.979 qpair failed and we were unable to recover it. 00:38:31.979 [2024-10-01 15:56:11.267760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.979 [2024-10-01 15:56:11.267801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.979 qpair failed and we were unable to recover it. 00:38:31.979 [2024-10-01 15:56:11.268116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.979 [2024-10-01 15:56:11.268158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.979 qpair failed and we were unable to recover it. 00:38:31.979 [2024-10-01 15:56:11.268614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.979 [2024-10-01 15:56:11.268654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.979 qpair failed and we were unable to recover it. 
00:38:31.979 [2024-10-01 15:56:11.269074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.979 [2024-10-01 15:56:11.269117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.979 qpair failed and we were unable to recover it. 00:38:31.979 [2024-10-01 15:56:11.269517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.979 [2024-10-01 15:56:11.269557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.979 qpair failed and we were unable to recover it. 00:38:31.979 [2024-10-01 15:56:11.269914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.979 [2024-10-01 15:56:11.269956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.979 qpair failed and we were unable to recover it. 00:38:31.979 [2024-10-01 15:56:11.270345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.979 [2024-10-01 15:56:11.270385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.979 qpair failed and we were unable to recover it. 00:38:31.979 [2024-10-01 15:56:11.270671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.979 [2024-10-01 15:56:11.270714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.979 qpair failed and we were unable to recover it. 
00:38:31.979 [2024-10-01 15:56:11.271035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.979 [2024-10-01 15:56:11.271078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.979 qpair failed and we were unable to recover it. 00:38:31.979 [2024-10-01 15:56:11.271482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.979 [2024-10-01 15:56:11.271524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.979 qpair failed and we were unable to recover it. 00:38:31.979 [2024-10-01 15:56:11.271723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.979 [2024-10-01 15:56:11.271775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.979 qpair failed and we were unable to recover it. 00:38:31.979 [2024-10-01 15:56:11.272036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.979 [2024-10-01 15:56:11.272078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.979 qpair failed and we were unable to recover it. 00:38:31.979 [2024-10-01 15:56:11.272332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.979 [2024-10-01 15:56:11.272371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.979 qpair failed and we were unable to recover it. 
00:38:31.979 [2024-10-01 15:56:11.272657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.979 [2024-10-01 15:56:11.272696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.979 qpair failed and we were unable to recover it. 00:38:31.979 [2024-10-01 15:56:11.272955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.979 [2024-10-01 15:56:11.272996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.979 qpair failed and we were unable to recover it. 00:38:31.979 [2024-10-01 15:56:11.273311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.979 [2024-10-01 15:56:11.273351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.979 qpair failed and we were unable to recover it. 00:38:31.979 [2024-10-01 15:56:11.273719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.979 [2024-10-01 15:56:11.273760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.979 qpair failed and we were unable to recover it. 00:38:31.979 [2024-10-01 15:56:11.274151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.979 [2024-10-01 15:56:11.274192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.979 qpair failed and we were unable to recover it. 
00:38:31.979 [2024-10-01 15:56:11.274556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.979 [2024-10-01 15:56:11.274595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.979 qpair failed and we were unable to recover it. 00:38:31.979 [2024-10-01 15:56:11.275005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.979 [2024-10-01 15:56:11.275047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.979 qpair failed and we were unable to recover it. 00:38:31.979 [2024-10-01 15:56:11.275419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.980 [2024-10-01 15:56:11.275459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.980 qpair failed and we were unable to recover it. 00:38:31.980 [2024-10-01 15:56:11.275718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.980 [2024-10-01 15:56:11.275758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.980 qpair failed and we were unable to recover it. 00:38:31.980 [2024-10-01 15:56:11.276149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.980 [2024-10-01 15:56:11.276190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.980 qpair failed and we were unable to recover it. 
00:38:31.980 [2024-10-01 15:56:11.276560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.980 [2024-10-01 15:56:11.276601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.980 qpair failed and we were unable to recover it. 00:38:31.980 [2024-10-01 15:56:11.277020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.980 [2024-10-01 15:56:11.277070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.980 qpair failed and we were unable to recover it. 00:38:31.980 [2024-10-01 15:56:11.277501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.980 [2024-10-01 15:56:11.277542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.980 qpair failed and we were unable to recover it. 00:38:31.980 [2024-10-01 15:56:11.277843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.980 [2024-10-01 15:56:11.277884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.980 qpair failed and we were unable to recover it. 00:38:31.980 [2024-10-01 15:56:11.278329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.980 [2024-10-01 15:56:11.278368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.980 qpair failed and we were unable to recover it. 
00:38:31.980 [2024-10-01 15:56:11.278623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.980 [2024-10-01 15:56:11.278662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.980 qpair failed and we were unable to recover it. 00:38:31.980 [2024-10-01 15:56:11.279075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.980 [2024-10-01 15:56:11.279116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.980 qpair failed and we were unable to recover it. 00:38:31.980 [2024-10-01 15:56:11.279522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.980 [2024-10-01 15:56:11.279561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.980 qpair failed and we were unable to recover it. 00:38:31.980 [2024-10-01 15:56:11.279957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.980 [2024-10-01 15:56:11.279996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.980 qpair failed and we were unable to recover it. 00:38:31.980 [2024-10-01 15:56:11.280319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.980 [2024-10-01 15:56:11.280359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.980 qpair failed and we were unable to recover it. 
00:38:31.980 [2024-10-01 15:56:11.280807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.980 [2024-10-01 15:56:11.280845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.980 qpair failed and we were unable to recover it. 00:38:31.980 [2024-10-01 15:56:11.281283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.980 [2024-10-01 15:56:11.281323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.980 qpair failed and we were unable to recover it. 00:38:31.980 [2024-10-01 15:56:11.281704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.980 [2024-10-01 15:56:11.281744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.980 qpair failed and we were unable to recover it. 00:38:31.980 [2024-10-01 15:56:11.282109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.980 [2024-10-01 15:56:11.282151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.980 qpair failed and we were unable to recover it. 00:38:31.980 [2024-10-01 15:56:11.282572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.980 [2024-10-01 15:56:11.282610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.980 qpair failed and we were unable to recover it. 
00:38:31.980 [2024-10-01 15:56:11.282963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.980 [2024-10-01 15:56:11.283004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.980 qpair failed and we were unable to recover it. 00:38:31.980 [2024-10-01 15:56:11.283402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.980 [2024-10-01 15:56:11.283441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.980 qpair failed and we were unable to recover it. 00:38:31.980 [2024-10-01 15:56:11.283832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.980 [2024-10-01 15:56:11.283870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.980 qpair failed and we were unable to recover it. 00:38:31.980 [2024-10-01 15:56:11.284246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.980 [2024-10-01 15:56:11.284286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.980 qpair failed and we were unable to recover it. 00:38:31.980 [2024-10-01 15:56:11.284622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.980 [2024-10-01 15:56:11.284661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.980 qpair failed and we were unable to recover it. 
00:38:31.980 [2024-10-01 15:56:11.285017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.980 [2024-10-01 15:56:11.285057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.980 qpair failed and we were unable to recover it. 00:38:31.980 [2024-10-01 15:56:11.285359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.980 [2024-10-01 15:56:11.285403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.980 qpair failed and we were unable to recover it. 00:38:31.980 [2024-10-01 15:56:11.285757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.980 [2024-10-01 15:56:11.285797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.980 qpair failed and we were unable to recover it. 00:38:31.980 [2024-10-01 15:56:11.286142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.980 [2024-10-01 15:56:11.286183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.980 qpair failed and we were unable to recover it. 00:38:31.980 [2024-10-01 15:56:11.286582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.980 [2024-10-01 15:56:11.286622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.980 qpair failed and we were unable to recover it. 
00:38:31.980 [2024-10-01 15:56:11.287012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.980 [2024-10-01 15:56:11.287053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.980 qpair failed and we were unable to recover it. 00:38:31.980 [2024-10-01 15:56:11.287447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.980 [2024-10-01 15:56:11.287486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.980 qpair failed and we were unable to recover it. 00:38:31.980 [2024-10-01 15:56:11.287885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.980 [2024-10-01 15:56:11.287938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.980 qpair failed and we were unable to recover it. 00:38:31.980 [2024-10-01 15:56:11.288333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.980 [2024-10-01 15:56:11.288381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.980 qpair failed and we were unable to recover it. 00:38:31.980 [2024-10-01 15:56:11.288807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.980 [2024-10-01 15:56:11.288846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.980 qpair failed and we were unable to recover it. 
00:38:31.980 [2024-10-01 15:56:11.289247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.980 [2024-10-01 15:56:11.289288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.980 qpair failed and we were unable to recover it. 00:38:31.980 [2024-10-01 15:56:11.289672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.980 [2024-10-01 15:56:11.289711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.980 qpair failed and we were unable to recover it. 00:38:31.980 [2024-10-01 15:56:11.290053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.980 [2024-10-01 15:56:11.290095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.980 qpair failed and we were unable to recover it. 00:38:31.980 [2024-10-01 15:56:11.290496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.980 [2024-10-01 15:56:11.290539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.980 qpair failed and we were unable to recover it. 00:38:31.980 [2024-10-01 15:56:11.290943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.980 [2024-10-01 15:56:11.290985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.980 qpair failed and we were unable to recover it. 
00:38:31.980 [2024-10-01 15:56:11.291412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.980 [2024-10-01 15:56:11.291451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.980 qpair failed and we were unable to recover it. 00:38:31.981 [2024-10-01 15:56:11.291873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.981 [2024-10-01 15:56:11.291933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.981 qpair failed and we were unable to recover it. 00:38:31.981 [2024-10-01 15:56:11.292362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.981 [2024-10-01 15:56:11.292402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.981 qpair failed and we were unable to recover it. 00:38:31.981 [2024-10-01 15:56:11.292688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.981 [2024-10-01 15:56:11.292727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.981 qpair failed and we were unable to recover it. 00:38:31.981 [2024-10-01 15:56:11.293103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.981 [2024-10-01 15:56:11.293143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.981 qpair failed and we were unable to recover it. 
00:38:31.981 [2024-10-01 15:56:11.293499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.981 [2024-10-01 15:56:11.293539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.981 qpair failed and we were unable to recover it. 00:38:31.981 [2024-10-01 15:56:11.293954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.981 [2024-10-01 15:56:11.293994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.981 qpair failed and we were unable to recover it. 00:38:31.981 [2024-10-01 15:56:11.294436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.981 [2024-10-01 15:56:11.294475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.981 qpair failed and we were unable to recover it. 00:38:31.981 [2024-10-01 15:56:11.294788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.981 [2024-10-01 15:56:11.294825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.981 qpair failed and we were unable to recover it. 00:38:31.981 [2024-10-01 15:56:11.295326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.981 [2024-10-01 15:56:11.295366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.981 qpair failed and we were unable to recover it. 
00:38:31.981 [2024-10-01 15:56:11.295729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.981 [2024-10-01 15:56:11.295767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.981 qpair failed and we were unable to recover it. 00:38:31.981 [2024-10-01 15:56:11.296136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.981 [2024-10-01 15:56:11.296177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.981 qpair failed and we were unable to recover it. 00:38:31.981 [2024-10-01 15:56:11.296551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.981 [2024-10-01 15:56:11.296590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.981 qpair failed and we were unable to recover it. 00:38:31.981 [2024-10-01 15:56:11.296995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.981 [2024-10-01 15:56:11.297035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.981 qpair failed and we were unable to recover it. 00:38:31.981 [2024-10-01 15:56:11.297430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.981 [2024-10-01 15:56:11.297469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.981 qpair failed and we were unable to recover it. 
00:38:31.981 [2024-10-01 15:56:11.297869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.981 [2024-10-01 15:56:11.297922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.981 qpair failed and we were unable to recover it. 00:38:31.981 [2024-10-01 15:56:11.298327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.981 [2024-10-01 15:56:11.298365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.981 qpair failed and we were unable to recover it. 00:38:31.981 [2024-10-01 15:56:11.298792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.981 [2024-10-01 15:56:11.298829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.981 qpair failed and we were unable to recover it. 00:38:31.981 [2024-10-01 15:56:11.299238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.981 [2024-10-01 15:56:11.299278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.981 qpair failed and we were unable to recover it. 00:38:31.981 [2024-10-01 15:56:11.299633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.981 [2024-10-01 15:56:11.299673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.981 qpair failed and we were unable to recover it. 
00:38:31.984 [2024-10-01 15:56:11.346853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.984 [2024-10-01 15:56:11.346892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.984 qpair failed and we were unable to recover it. 00:38:31.984 [2024-10-01 15:56:11.347329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.984 [2024-10-01 15:56:11.347368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.984 qpair failed and we were unable to recover it. 00:38:31.984 [2024-10-01 15:56:11.347690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.984 [2024-10-01 15:56:11.347729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.984 qpair failed and we were unable to recover it. 00:38:31.984 [2024-10-01 15:56:11.348121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.984 [2024-10-01 15:56:11.348162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.984 qpair failed and we were unable to recover it. 00:38:31.984 [2024-10-01 15:56:11.348517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.984 [2024-10-01 15:56:11.348555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.984 qpair failed and we were unable to recover it. 
00:38:31.984 [2024-10-01 15:56:11.348922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.984 [2024-10-01 15:56:11.348963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.984 qpair failed and we were unable to recover it. 00:38:31.984 [2024-10-01 15:56:11.349391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.984 [2024-10-01 15:56:11.349430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.984 qpair failed and we were unable to recover it. 00:38:31.984 [2024-10-01 15:56:11.349818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.984 [2024-10-01 15:56:11.349858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.984 qpair failed and we were unable to recover it. 00:38:31.984 [2024-10-01 15:56:11.350280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.984 [2024-10-01 15:56:11.350321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.984 qpair failed and we were unable to recover it. 00:38:31.984 [2024-10-01 15:56:11.350746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.984 [2024-10-01 15:56:11.350785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.984 qpair failed and we were unable to recover it. 
00:38:31.984 [2024-10-01 15:56:11.351185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.984 [2024-10-01 15:56:11.351226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.984 qpair failed and we were unable to recover it. 00:38:31.984 [2024-10-01 15:56:11.351668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.984 [2024-10-01 15:56:11.351707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.984 qpair failed and we were unable to recover it. 00:38:31.984 [2024-10-01 15:56:11.352068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.984 [2024-10-01 15:56:11.352109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.984 qpair failed and we were unable to recover it. 00:38:31.984 [2024-10-01 15:56:11.352524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.984 [2024-10-01 15:56:11.352564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.984 qpair failed and we were unable to recover it. 00:38:31.984 [2024-10-01 15:56:11.352958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.984 [2024-10-01 15:56:11.352998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.984 qpair failed and we were unable to recover it. 
00:38:31.984 [2024-10-01 15:56:11.353294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.984 [2024-10-01 15:56:11.353334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.984 qpair failed and we were unable to recover it. 00:38:31.984 [2024-10-01 15:56:11.353723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.985 [2024-10-01 15:56:11.353762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.985 qpair failed and we were unable to recover it. 00:38:31.985 [2024-10-01 15:56:11.354152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.985 [2024-10-01 15:56:11.354192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.985 qpair failed and we were unable to recover it. 00:38:31.985 [2024-10-01 15:56:11.354615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.985 [2024-10-01 15:56:11.354654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.985 qpair failed and we were unable to recover it. 00:38:31.985 [2024-10-01 15:56:11.355007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.985 [2024-10-01 15:56:11.355048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.985 qpair failed and we were unable to recover it. 
00:38:31.985 [2024-10-01 15:56:11.355467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.985 [2024-10-01 15:56:11.355506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.985 qpair failed and we were unable to recover it. 00:38:31.985 [2024-10-01 15:56:11.355933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.985 [2024-10-01 15:56:11.355972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.985 qpair failed and we were unable to recover it. 00:38:31.985 [2024-10-01 15:56:11.356311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.985 [2024-10-01 15:56:11.356350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.985 qpair failed and we were unable to recover it. 00:38:31.985 [2024-10-01 15:56:11.356705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.985 [2024-10-01 15:56:11.356743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.985 qpair failed and we were unable to recover it. 00:38:31.985 [2024-10-01 15:56:11.357097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.985 [2024-10-01 15:56:11.357137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.985 qpair failed and we were unable to recover it. 
00:38:31.985 [2024-10-01 15:56:11.357522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.985 [2024-10-01 15:56:11.357562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.985 qpair failed and we were unable to recover it. 00:38:31.985 [2024-10-01 15:56:11.357924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.985 [2024-10-01 15:56:11.357965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.985 qpair failed and we were unable to recover it. 00:38:31.985 [2024-10-01 15:56:11.358392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.985 [2024-10-01 15:56:11.358432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.985 qpair failed and we were unable to recover it. 00:38:31.985 [2024-10-01 15:56:11.358728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.985 [2024-10-01 15:56:11.358770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.985 qpair failed and we were unable to recover it. 00:38:31.985 [2024-10-01 15:56:11.359142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.985 [2024-10-01 15:56:11.359183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.985 qpair failed and we were unable to recover it. 
00:38:31.985 [2024-10-01 15:56:11.359575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.985 [2024-10-01 15:56:11.359615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.985 qpair failed and we were unable to recover it. 00:38:31.985 [2024-10-01 15:56:11.359957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.985 [2024-10-01 15:56:11.359998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.985 qpair failed and we were unable to recover it. 00:38:31.985 [2024-10-01 15:56:11.360393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.985 [2024-10-01 15:56:11.360433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.985 qpair failed and we were unable to recover it. 00:38:31.985 [2024-10-01 15:56:11.360832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.985 [2024-10-01 15:56:11.360871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.985 qpair failed and we were unable to recover it. 00:38:31.985 [2024-10-01 15:56:11.361290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.985 [2024-10-01 15:56:11.361330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.985 qpair failed and we were unable to recover it. 
00:38:31.985 [2024-10-01 15:56:11.361723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.985 [2024-10-01 15:56:11.361762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.985 qpair failed and we were unable to recover it. 00:38:31.985 [2024-10-01 15:56:11.362145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.985 [2024-10-01 15:56:11.362186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.985 qpair failed and we were unable to recover it. 00:38:31.985 [2024-10-01 15:56:11.362566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.985 [2024-10-01 15:56:11.362605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.985 qpair failed and we were unable to recover it. 00:38:31.985 [2024-10-01 15:56:11.362976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.985 [2024-10-01 15:56:11.363016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.985 qpair failed and we were unable to recover it. 00:38:31.985 [2024-10-01 15:56:11.363398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.985 [2024-10-01 15:56:11.363438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.985 qpair failed and we were unable to recover it. 
00:38:31.985 [2024-10-01 15:56:11.363836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.985 [2024-10-01 15:56:11.363875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.985 qpair failed and we were unable to recover it. 00:38:31.985 [2024-10-01 15:56:11.364168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.985 [2024-10-01 15:56:11.364208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.985 qpair failed and we were unable to recover it. 00:38:31.985 [2024-10-01 15:56:11.364485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.985 [2024-10-01 15:56:11.364524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.985 qpair failed and we were unable to recover it. 00:38:31.985 [2024-10-01 15:56:11.364839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.985 [2024-10-01 15:56:11.364878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.985 qpair failed and we were unable to recover it. 00:38:31.985 [2024-10-01 15:56:11.365329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.985 [2024-10-01 15:56:11.365368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.985 qpair failed and we were unable to recover it. 
00:38:31.985 [2024-10-01 15:56:11.365724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.985 [2024-10-01 15:56:11.365765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.985 qpair failed and we were unable to recover it. 00:38:31.985 [2024-10-01 15:56:11.366026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.985 [2024-10-01 15:56:11.366069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.985 qpair failed and we were unable to recover it. 00:38:31.985 [2024-10-01 15:56:11.366366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.985 [2024-10-01 15:56:11.366405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.985 qpair failed and we were unable to recover it. 00:38:31.985 [2024-10-01 15:56:11.366793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.985 [2024-10-01 15:56:11.366831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.985 qpair failed and we were unable to recover it. 00:38:31.985 [2024-10-01 15:56:11.367211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.985 [2024-10-01 15:56:11.367251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.985 qpair failed and we were unable to recover it. 
00:38:31.985 [2024-10-01 15:56:11.367678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.985 [2024-10-01 15:56:11.367717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.985 qpair failed and we were unable to recover it. 00:38:31.985 [2024-10-01 15:56:11.367950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.985 [2024-10-01 15:56:11.367991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.985 qpair failed and we were unable to recover it. 00:38:31.986 [2024-10-01 15:56:11.368379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.986 [2024-10-01 15:56:11.368420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.986 qpair failed and we were unable to recover it. 00:38:31.986 [2024-10-01 15:56:11.368805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.986 [2024-10-01 15:56:11.368855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.986 qpair failed and we were unable to recover it. 00:38:31.986 [2024-10-01 15:56:11.369267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.986 [2024-10-01 15:56:11.369307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.986 qpair failed and we were unable to recover it. 
00:38:31.986 [2024-10-01 15:56:11.369702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.986 [2024-10-01 15:56:11.369741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.986 qpair failed and we were unable to recover it. 00:38:31.986 [2024-10-01 15:56:11.370068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.986 [2024-10-01 15:56:11.370109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.986 qpair failed and we were unable to recover it. 00:38:31.986 [2024-10-01 15:56:11.370465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.986 [2024-10-01 15:56:11.370503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.986 qpair failed and we were unable to recover it. 00:38:31.986 [2024-10-01 15:56:11.370861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.986 [2024-10-01 15:56:11.370910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.986 qpair failed and we were unable to recover it. 00:38:31.986 [2024-10-01 15:56:11.371302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.986 [2024-10-01 15:56:11.371343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.986 qpair failed and we were unable to recover it. 
00:38:31.986 [2024-10-01 15:56:11.371657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.986 [2024-10-01 15:56:11.371695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.986 qpair failed and we were unable to recover it. 00:38:31.986 [2024-10-01 15:56:11.372078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.986 [2024-10-01 15:56:11.372119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.986 qpair failed and we were unable to recover it. 00:38:31.986 [2024-10-01 15:56:11.372551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.986 [2024-10-01 15:56:11.372590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.986 qpair failed and we were unable to recover it. 00:38:31.986 [2024-10-01 15:56:11.372871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.986 [2024-10-01 15:56:11.372922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.986 qpair failed and we were unable to recover it. 00:38:31.986 [2024-10-01 15:56:11.373358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.986 [2024-10-01 15:56:11.373397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.986 qpair failed and we were unable to recover it. 
00:38:31.986 [2024-10-01 15:56:11.373761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.986 [2024-10-01 15:56:11.373800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.986 qpair failed and we were unable to recover it. 00:38:31.986 [2024-10-01 15:56:11.374163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.986 [2024-10-01 15:56:11.374205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.986 qpair failed and we were unable to recover it. 00:38:31.986 [2024-10-01 15:56:11.374566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.986 [2024-10-01 15:56:11.374606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.986 qpair failed and we were unable to recover it. 00:38:31.986 [2024-10-01 15:56:11.374961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.986 [2024-10-01 15:56:11.375001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.986 qpair failed and we were unable to recover it. 00:38:31.986 [2024-10-01 15:56:11.375343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.986 [2024-10-01 15:56:11.375382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.986 qpair failed and we were unable to recover it. 
00:38:31.986 [2024-10-01 15:56:11.375751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.986 [2024-10-01 15:56:11.375790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.986 qpair failed and we were unable to recover it. 00:38:31.986 [2024-10-01 15:56:11.376151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.986 [2024-10-01 15:56:11.376193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.986 qpair failed and we were unable to recover it. 00:38:31.986 [2024-10-01 15:56:11.376594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.986 [2024-10-01 15:56:11.376634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.986 qpair failed and we were unable to recover it. 00:38:31.986 [2024-10-01 15:56:11.377038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.986 [2024-10-01 15:56:11.377078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.986 qpair failed and we were unable to recover it. 00:38:31.986 [2024-10-01 15:56:11.377487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.986 [2024-10-01 15:56:11.377525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:31.986 qpair failed and we were unable to recover it. 
00:38:31.986 [2024-10-01 15:56:11.377921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.986 [2024-10-01 15:56:11.377960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:31.986 qpair failed and we were unable to recover it.
00:38:32.262 [2024-10-01 15:56:11.426392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.262 [2024-10-01 15:56:11.426431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.262 qpair failed and we were unable to recover it. 00:38:32.262 [2024-10-01 15:56:11.426829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.262 [2024-10-01 15:56:11.426869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.262 qpair failed and we were unable to recover it. 00:38:32.262 [2024-10-01 15:56:11.427294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.262 [2024-10-01 15:56:11.427334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.262 qpair failed and we were unable to recover it. 00:38:32.262 [2024-10-01 15:56:11.427760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.262 [2024-10-01 15:56:11.427798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.262 qpair failed and we were unable to recover it. 00:38:32.262 [2024-10-01 15:56:11.428209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.262 [2024-10-01 15:56:11.428250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.262 qpair failed and we were unable to recover it. 
00:38:32.262 [2024-10-01 15:56:11.428547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.262 [2024-10-01 15:56:11.428587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.262 qpair failed and we were unable to recover it. 00:38:32.262 [2024-10-01 15:56:11.428959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.262 [2024-10-01 15:56:11.428999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.262 qpair failed and we were unable to recover it. 00:38:32.262 [2024-10-01 15:56:11.429356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.262 [2024-10-01 15:56:11.429394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.262 qpair failed and we were unable to recover it. 00:38:32.262 [2024-10-01 15:56:11.429820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.262 [2024-10-01 15:56:11.429859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.262 qpair failed and we were unable to recover it. 00:38:32.262 [2024-10-01 15:56:11.430270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.262 [2024-10-01 15:56:11.430309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.262 qpair failed and we were unable to recover it. 
00:38:32.262 [2024-10-01 15:56:11.430736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.262 [2024-10-01 15:56:11.430776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.262 qpair failed and we were unable to recover it. 00:38:32.262 [2024-10-01 15:56:11.431141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.262 [2024-10-01 15:56:11.431181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.262 qpair failed and we were unable to recover it. 00:38:32.262 [2024-10-01 15:56:11.431608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.262 [2024-10-01 15:56:11.431656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.262 qpair failed and we were unable to recover it. 00:38:32.262 [2024-10-01 15:56:11.432013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.262 [2024-10-01 15:56:11.432053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.262 qpair failed and we were unable to recover it. 00:38:32.263 [2024-10-01 15:56:11.432413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.263 [2024-10-01 15:56:11.432451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.263 qpair failed and we were unable to recover it. 
00:38:32.263 [2024-10-01 15:56:11.432808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.263 [2024-10-01 15:56:11.432846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.263 qpair failed and we were unable to recover it. 00:38:32.263 [2024-10-01 15:56:11.433218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.263 [2024-10-01 15:56:11.433259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.263 qpair failed and we were unable to recover it. 00:38:32.263 [2024-10-01 15:56:11.433655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.263 [2024-10-01 15:56:11.433694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.263 qpair failed and we were unable to recover it. 00:38:32.263 [2024-10-01 15:56:11.434032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.263 [2024-10-01 15:56:11.434072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.263 qpair failed and we were unable to recover it. 00:38:32.263 [2024-10-01 15:56:11.434470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.263 [2024-10-01 15:56:11.434510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.263 qpair failed and we were unable to recover it. 
00:38:32.263 [2024-10-01 15:56:11.434924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.263 [2024-10-01 15:56:11.434965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.263 qpair failed and we were unable to recover it. 00:38:32.263 [2024-10-01 15:56:11.435375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.263 [2024-10-01 15:56:11.435415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.263 qpair failed and we were unable to recover it. 00:38:32.263 [2024-10-01 15:56:11.435817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.263 [2024-10-01 15:56:11.435856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.263 qpair failed and we were unable to recover it. 00:38:32.263 [2024-10-01 15:56:11.436263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.263 [2024-10-01 15:56:11.436302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.263 qpair failed and we were unable to recover it. 00:38:32.263 [2024-10-01 15:56:11.436724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.263 [2024-10-01 15:56:11.436763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.263 qpair failed and we were unable to recover it. 
00:38:32.263 [2024-10-01 15:56:11.437142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.263 [2024-10-01 15:56:11.437183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.263 qpair failed and we were unable to recover it. 00:38:32.263 [2024-10-01 15:56:11.437620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.263 [2024-10-01 15:56:11.437659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.263 qpair failed and we were unable to recover it. 00:38:32.263 [2024-10-01 15:56:11.437937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.263 [2024-10-01 15:56:11.437977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.263 qpair failed and we were unable to recover it. 00:38:32.263 [2024-10-01 15:56:11.438341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.263 [2024-10-01 15:56:11.438380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.263 qpair failed and we were unable to recover it. 00:38:32.263 [2024-10-01 15:56:11.438731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.263 [2024-10-01 15:56:11.438769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.263 qpair failed and we were unable to recover it. 
00:38:32.263 [2024-10-01 15:56:11.439130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.263 [2024-10-01 15:56:11.439170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.263 qpair failed and we were unable to recover it. 00:38:32.263 [2024-10-01 15:56:11.439594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.263 [2024-10-01 15:56:11.439634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.263 qpair failed and we were unable to recover it. 00:38:32.263 [2024-10-01 15:56:11.439994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.263 [2024-10-01 15:56:11.440034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.263 qpair failed and we were unable to recover it. 00:38:32.263 [2024-10-01 15:56:11.440302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.263 [2024-10-01 15:56:11.440344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.263 qpair failed and we were unable to recover it. 00:38:32.263 [2024-10-01 15:56:11.440727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.263 [2024-10-01 15:56:11.440767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.263 qpair failed and we were unable to recover it. 
00:38:32.263 [2024-10-01 15:56:11.441145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.263 [2024-10-01 15:56:11.441185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.263 qpair failed and we were unable to recover it. 00:38:32.263 [2024-10-01 15:56:11.441536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.263 [2024-10-01 15:56:11.441575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.263 qpair failed and we were unable to recover it. 00:38:32.263 [2024-10-01 15:56:11.441930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.263 [2024-10-01 15:56:11.441972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.263 qpair failed and we were unable to recover it. 00:38:32.263 [2024-10-01 15:56:11.442368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.263 [2024-10-01 15:56:11.442406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.263 qpair failed and we were unable to recover it. 00:38:32.263 [2024-10-01 15:56:11.442804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.263 [2024-10-01 15:56:11.442844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.263 qpair failed and we were unable to recover it. 
00:38:32.263 [2024-10-01 15:56:11.443338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.263 [2024-10-01 15:56:11.443378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.263 qpair failed and we were unable to recover it. 00:38:32.263 [2024-10-01 15:56:11.443742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.263 [2024-10-01 15:56:11.443781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.263 qpair failed and we were unable to recover it. 00:38:32.263 [2024-10-01 15:56:11.444184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.263 [2024-10-01 15:56:11.444223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.263 qpair failed and we were unable to recover it. 00:38:32.263 [2024-10-01 15:56:11.444622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.263 [2024-10-01 15:56:11.444661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.263 qpair failed and we were unable to recover it. 00:38:32.263 [2024-10-01 15:56:11.445058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.263 [2024-10-01 15:56:11.445098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.263 qpair failed and we were unable to recover it. 
00:38:32.263 [2024-10-01 15:56:11.445498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.263 [2024-10-01 15:56:11.445536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.263 qpair failed and we were unable to recover it. 00:38:32.263 [2024-10-01 15:56:11.445964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.263 [2024-10-01 15:56:11.446005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.263 qpair failed and we were unable to recover it. 00:38:32.263 [2024-10-01 15:56:11.446328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.263 [2024-10-01 15:56:11.446367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.263 qpair failed and we were unable to recover it. 00:38:32.263 [2024-10-01 15:56:11.446783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.263 [2024-10-01 15:56:11.446822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.263 qpair failed and we were unable to recover it. 00:38:32.263 [2024-10-01 15:56:11.447217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.263 [2024-10-01 15:56:11.447257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.263 qpair failed and we were unable to recover it. 
00:38:32.263 [2024-10-01 15:56:11.447679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.264 [2024-10-01 15:56:11.447718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.264 qpair failed and we were unable to recover it. 00:38:32.264 [2024-10-01 15:56:11.448086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.264 [2024-10-01 15:56:11.448125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.264 qpair failed and we were unable to recover it. 00:38:32.264 [2024-10-01 15:56:11.448498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.264 [2024-10-01 15:56:11.448537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.264 qpair failed and we were unable to recover it. 00:38:32.264 [2024-10-01 15:56:11.448939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.264 [2024-10-01 15:56:11.448980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.264 qpair failed and we were unable to recover it. 00:38:32.264 [2024-10-01 15:56:11.449407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.264 [2024-10-01 15:56:11.449446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.264 qpair failed and we were unable to recover it. 
00:38:32.264 [2024-10-01 15:56:11.449861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.264 [2024-10-01 15:56:11.449914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.264 qpair failed and we were unable to recover it. 00:38:32.264 [2024-10-01 15:56:11.450346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.264 [2024-10-01 15:56:11.450385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.264 qpair failed and we were unable to recover it. 00:38:32.264 [2024-10-01 15:56:11.450786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.264 [2024-10-01 15:56:11.450823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.264 qpair failed and we were unable to recover it. 00:38:32.264 [2024-10-01 15:56:11.451247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.264 [2024-10-01 15:56:11.451288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.264 qpair failed and we were unable to recover it. 00:38:32.264 [2024-10-01 15:56:11.451693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.264 [2024-10-01 15:56:11.451732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.264 qpair failed and we were unable to recover it. 
00:38:32.264 [2024-10-01 15:56:11.452165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.264 [2024-10-01 15:56:11.452206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.264 qpair failed and we were unable to recover it. 00:38:32.264 [2024-10-01 15:56:11.452607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.264 [2024-10-01 15:56:11.452646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.264 qpair failed and we were unable to recover it. 00:38:32.264 [2024-10-01 15:56:11.453047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.264 [2024-10-01 15:56:11.453087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.264 qpair failed and we were unable to recover it. 00:38:32.264 [2024-10-01 15:56:11.453534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.264 [2024-10-01 15:56:11.453573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.264 qpair failed and we were unable to recover it. 00:38:32.264 [2024-10-01 15:56:11.454002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.264 [2024-10-01 15:56:11.454043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.264 qpair failed and we were unable to recover it. 
00:38:32.264 [2024-10-01 15:56:11.454439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.264 [2024-10-01 15:56:11.454480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.264 qpair failed and we were unable to recover it. 00:38:32.264 [2024-10-01 15:56:11.454781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.264 [2024-10-01 15:56:11.454823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.264 qpair failed and we were unable to recover it. 00:38:32.264 [2024-10-01 15:56:11.455298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.264 [2024-10-01 15:56:11.455342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.264 qpair failed and we were unable to recover it. 00:38:32.264 [2024-10-01 15:56:11.455767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.264 [2024-10-01 15:56:11.455806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.264 qpair failed and we were unable to recover it. 00:38:32.264 [2024-10-01 15:56:11.456209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.264 [2024-10-01 15:56:11.456249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.264 qpair failed and we were unable to recover it. 
00:38:32.264 [2024-10-01 15:56:11.456602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.264 [2024-10-01 15:56:11.456641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.264 qpair failed and we were unable to recover it. 00:38:32.264 [2024-10-01 15:56:11.456995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.264 [2024-10-01 15:56:11.457036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.264 qpair failed and we were unable to recover it. 00:38:32.264 [2024-10-01 15:56:11.457459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.264 [2024-10-01 15:56:11.457499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.264 qpair failed and we were unable to recover it. 00:38:32.264 [2024-10-01 15:56:11.457849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.264 [2024-10-01 15:56:11.457889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.264 qpair failed and we were unable to recover it. 00:38:32.264 [2024-10-01 15:56:11.458296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.264 [2024-10-01 15:56:11.458336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.264 qpair failed and we were unable to recover it. 
00:38:32.264 [2024-10-01 15:56:11.458779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.264 [2024-10-01 15:56:11.458818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.264 qpair failed and we were unable to recover it.
00:38:32.264 [2024-10-01 15:56:11.459232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.264 [2024-10-01 15:56:11.459273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.264 qpair failed and we were unable to recover it.
00:38:32.264 [2024-10-01 15:56:11.459670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.264 [2024-10-01 15:56:11.459708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.264 qpair failed and we were unable to recover it.
00:38:32.264 [2024-10-01 15:56:11.460107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.264 [2024-10-01 15:56:11.460151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.264 qpair failed and we were unable to recover it.
00:38:32.264 [2024-10-01 15:56:11.460554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.264 [2024-10-01 15:56:11.460595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.264 qpair failed and we were unable to recover it.
00:38:32.264 [2024-10-01 15:56:11.460957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.264 [2024-10-01 15:56:11.461007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.264 qpair failed and we were unable to recover it.
00:38:32.264 [2024-10-01 15:56:11.461395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.264 [2024-10-01 15:56:11.461434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.264 qpair failed and we were unable to recover it.
00:38:32.264 [2024-10-01 15:56:11.461862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.264 [2024-10-01 15:56:11.461917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.264 qpair failed and we were unable to recover it.
00:38:32.264 [2024-10-01 15:56:11.462330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.264 [2024-10-01 15:56:11.462371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.264 qpair failed and we were unable to recover it.
00:38:32.264 [2024-10-01 15:56:11.462787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.264 [2024-10-01 15:56:11.462826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.264 qpair failed and we were unable to recover it.
00:38:32.264 [2024-10-01 15:56:11.463290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.264 [2024-10-01 15:56:11.463331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.264 qpair failed and we were unable to recover it.
00:38:32.264 [2024-10-01 15:56:11.463747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.264 [2024-10-01 15:56:11.463786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.264 qpair failed and we were unable to recover it.
00:38:32.264 [2024-10-01 15:56:11.464059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.264 [2024-10-01 15:56:11.464102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.264 qpair failed and we were unable to recover it.
00:38:32.264 [2024-10-01 15:56:11.464457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.265 [2024-10-01 15:56:11.464496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.265 qpair failed and we were unable to recover it.
00:38:32.265 [2024-10-01 15:56:11.464884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.265 [2024-10-01 15:56:11.464951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.265 qpair failed and we were unable to recover it.
00:38:32.265 [2024-10-01 15:56:11.465311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.265 [2024-10-01 15:56:11.465350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.265 qpair failed and we were unable to recover it.
00:38:32.265 [2024-10-01 15:56:11.465745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.265 [2024-10-01 15:56:11.465786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.265 qpair failed and we were unable to recover it.
00:38:32.265 [2024-10-01 15:56:11.466065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.265 [2024-10-01 15:56:11.466110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.265 qpair failed and we were unable to recover it.
00:38:32.265 [2024-10-01 15:56:11.466471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.265 [2024-10-01 15:56:11.466512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.265 qpair failed and we were unable to recover it.
00:38:32.265 [2024-10-01 15:56:11.466881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.265 [2024-10-01 15:56:11.466934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.265 qpair failed and we were unable to recover it.
00:38:32.265 [2024-10-01 15:56:11.467365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.265 [2024-10-01 15:56:11.467405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.265 qpair failed and we were unable to recover it.
00:38:32.265 [2024-10-01 15:56:11.467802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.265 [2024-10-01 15:56:11.467842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.265 qpair failed and we were unable to recover it.
00:38:32.265 [2024-10-01 15:56:11.468147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.265 [2024-10-01 15:56:11.468187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.265 qpair failed and we were unable to recover it.
00:38:32.265 [2024-10-01 15:56:11.468496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.265 [2024-10-01 15:56:11.468535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.265 qpair failed and we were unable to recover it.
00:38:32.265 [2024-10-01 15:56:11.468960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.265 [2024-10-01 15:56:11.469003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.265 qpair failed and we were unable to recover it.
00:38:32.265 [2024-10-01 15:56:11.469301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.265 [2024-10-01 15:56:11.469342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.265 qpair failed and we were unable to recover it.
00:38:32.265 [2024-10-01 15:56:11.469607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.265 [2024-10-01 15:56:11.469646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.265 qpair failed and we were unable to recover it.
00:38:32.265 [2024-10-01 15:56:11.470030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.265 [2024-10-01 15:56:11.470073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.265 qpair failed and we were unable to recover it.
00:38:32.265 [2024-10-01 15:56:11.470468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.265 [2024-10-01 15:56:11.470508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.265 qpair failed and we were unable to recover it.
00:38:32.265 [2024-10-01 15:56:11.470867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.265 [2024-10-01 15:56:11.470917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.265 qpair failed and we were unable to recover it.
00:38:32.265 [2024-10-01 15:56:11.471304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.265 [2024-10-01 15:56:11.471343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.265 qpair failed and we were unable to recover it.
00:38:32.265 [2024-10-01 15:56:11.471774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.265 [2024-10-01 15:56:11.471814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.265 qpair failed and we were unable to recover it.
00:38:32.265 [2024-10-01 15:56:11.472274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.265 [2024-10-01 15:56:11.472323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.265 qpair failed and we were unable to recover it.
00:38:32.265 [2024-10-01 15:56:11.472620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.265 [2024-10-01 15:56:11.472663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.265 qpair failed and we were unable to recover it.
00:38:32.265 [2024-10-01 15:56:11.473043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.265 [2024-10-01 15:56:11.473083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.265 qpair failed and we were unable to recover it.
00:38:32.265 [2024-10-01 15:56:11.473476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.265 [2024-10-01 15:56:11.473516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.265 qpair failed and we were unable to recover it.
00:38:32.265 [2024-10-01 15:56:11.473913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.265 [2024-10-01 15:56:11.473955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.265 qpair failed and we were unable to recover it.
00:38:32.265 [2024-10-01 15:56:11.474350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.265 [2024-10-01 15:56:11.474391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.265 qpair failed and we were unable to recover it.
00:38:32.265 [2024-10-01 15:56:11.474748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.265 [2024-10-01 15:56:11.474788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.265 qpair failed and we were unable to recover it.
00:38:32.265 [2024-10-01 15:56:11.475249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.265 [2024-10-01 15:56:11.475291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.265 qpair failed and we were unable to recover it.
00:38:32.265 [2024-10-01 15:56:11.475652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.265 [2024-10-01 15:56:11.475693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.265 qpair failed and we were unable to recover it.
00:38:32.265 [2024-10-01 15:56:11.475967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.265 [2024-10-01 15:56:11.476007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.265 qpair failed and we were unable to recover it.
00:38:32.265 [2024-10-01 15:56:11.476454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.265 [2024-10-01 15:56:11.476493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.265 qpair failed and we were unable to recover it.
00:38:32.265 [2024-10-01 15:56:11.476903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.265 [2024-10-01 15:56:11.476944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.265 qpair failed and we were unable to recover it.
00:38:32.265 [2024-10-01 15:56:11.477350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.265 [2024-10-01 15:56:11.477390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.265 qpair failed and we were unable to recover it.
00:38:32.265 [2024-10-01 15:56:11.477788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.265 [2024-10-01 15:56:11.477827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.265 qpair failed and we were unable to recover it.
00:38:32.265 [2024-10-01 15:56:11.478249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.265 [2024-10-01 15:56:11.478291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.265 qpair failed and we were unable to recover it.
00:38:32.265 [2024-10-01 15:56:11.478690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.265 [2024-10-01 15:56:11.478729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.265 qpair failed and we were unable to recover it.
00:38:32.265 [2024-10-01 15:56:11.479112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.265 [2024-10-01 15:56:11.479152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.265 qpair failed and we were unable to recover it.
00:38:32.265 [2024-10-01 15:56:11.479579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.265 [2024-10-01 15:56:11.479619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.265 qpair failed and we were unable to recover it.
00:38:32.265 [2024-10-01 15:56:11.479983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.266 [2024-10-01 15:56:11.480024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.266 qpair failed and we were unable to recover it.
00:38:32.266 [2024-10-01 15:56:11.480382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.266 [2024-10-01 15:56:11.480422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.266 qpair failed and we were unable to recover it.
00:38:32.266 [2024-10-01 15:56:11.480782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.266 [2024-10-01 15:56:11.480823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.266 qpair failed and we were unable to recover it.
00:38:32.266 [2024-10-01 15:56:11.481297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.266 [2024-10-01 15:56:11.481338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.266 qpair failed and we were unable to recover it.
00:38:32.266 [2024-10-01 15:56:11.481761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.266 [2024-10-01 15:56:11.481800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.266 qpair failed and we were unable to recover it.
00:38:32.266 [2024-10-01 15:56:11.482072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.266 [2024-10-01 15:56:11.482115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.266 qpair failed and we were unable to recover it.
00:38:32.266 [2024-10-01 15:56:11.482409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.266 [2024-10-01 15:56:11.482448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.266 qpair failed and we were unable to recover it.
00:38:32.266 [2024-10-01 15:56:11.482863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.266 [2024-10-01 15:56:11.482918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.266 qpair failed and we were unable to recover it.
00:38:32.266 [2024-10-01 15:56:11.483276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.266 [2024-10-01 15:56:11.483315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.266 qpair failed and we were unable to recover it.
00:38:32.266 [2024-10-01 15:56:11.483689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.266 [2024-10-01 15:56:11.483729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.266 qpair failed and we were unable to recover it.
00:38:32.266 [2024-10-01 15:56:11.484024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.266 [2024-10-01 15:56:11.484067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.266 qpair failed and we were unable to recover it.
00:38:32.266 [2024-10-01 15:56:11.484370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.266 [2024-10-01 15:56:11.484409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.266 qpair failed and we were unable to recover it.
00:38:32.266 [2024-10-01 15:56:11.484773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.266 [2024-10-01 15:56:11.484813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.266 qpair failed and we were unable to recover it.
00:38:32.266 [2024-10-01 15:56:11.485198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.266 [2024-10-01 15:56:11.485240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.266 qpair failed and we were unable to recover it.
00:38:32.266 [2024-10-01 15:56:11.485533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.266 [2024-10-01 15:56:11.485575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.266 qpair failed and we were unable to recover it.
00:38:32.266 [2024-10-01 15:56:11.485866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.266 [2024-10-01 15:56:11.485931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.266 qpair failed and we were unable to recover it.
00:38:32.266 [2024-10-01 15:56:11.486322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.266 [2024-10-01 15:56:11.486362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.266 qpair failed and we were unable to recover it.
00:38:32.266 [2024-10-01 15:56:11.486761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.266 [2024-10-01 15:56:11.486801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.266 qpair failed and we were unable to recover it.
00:38:32.266 [2024-10-01 15:56:11.487231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.266 [2024-10-01 15:56:11.487274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.266 qpair failed and we were unable to recover it.
00:38:32.266 [2024-10-01 15:56:11.487928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.266 [2024-10-01 15:56:11.487978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.266 qpair failed and we were unable to recover it.
00:38:32.266 [2024-10-01 15:56:11.488396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.266 [2024-10-01 15:56:11.488440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.266 qpair failed and we were unable to recover it.
00:38:32.266 [2024-10-01 15:56:11.488810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.266 [2024-10-01 15:56:11.488851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.266 qpair failed and we were unable to recover it.
00:38:32.266 [2024-10-01 15:56:11.489162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.266 [2024-10-01 15:56:11.489203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.266 qpair failed and we were unable to recover it.
00:38:32.266 [2024-10-01 15:56:11.489647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.266 [2024-10-01 15:56:11.489687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.266 qpair failed and we were unable to recover it.
00:38:32.266 [2024-10-01 15:56:11.490120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.266 [2024-10-01 15:56:11.490163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.266 qpair failed and we were unable to recover it.
00:38:32.266 [2024-10-01 15:56:11.490591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.266 [2024-10-01 15:56:11.490631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.266 qpair failed and we were unable to recover it.
00:38:32.266 [2024-10-01 15:56:11.491005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.266 [2024-10-01 15:56:11.491048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.266 qpair failed and we were unable to recover it.
00:38:32.266 [2024-10-01 15:56:11.491408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.266 [2024-10-01 15:56:11.491448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.266 qpair failed and we were unable to recover it.
00:38:32.266 [2024-10-01 15:56:11.491861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.266 [2024-10-01 15:56:11.491914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.266 qpair failed and we were unable to recover it.
00:38:32.266 [2024-10-01 15:56:11.492290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.266 [2024-10-01 15:56:11.492330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.266 qpair failed and we were unable to recover it.
00:38:32.266 [2024-10-01 15:56:11.492709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.266 [2024-10-01 15:56:11.492751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.266 qpair failed and we were unable to recover it.
00:38:32.266 [2024-10-01 15:56:11.493136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.266 [2024-10-01 15:56:11.493178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.266 qpair failed and we were unable to recover it.
00:38:32.266 [2024-10-01 15:56:11.493610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.266 [2024-10-01 15:56:11.493650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.266 qpair failed and we were unable to recover it.
00:38:32.266 [2024-10-01 15:56:11.494015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.266 [2024-10-01 15:56:11.494057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.266 qpair failed and we were unable to recover it.
00:38:32.267 [2024-10-01 15:56:11.494459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.267 [2024-10-01 15:56:11.494500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.267 qpair failed and we were unable to recover it.
00:38:32.267 [2024-10-01 15:56:11.494962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.267 [2024-10-01 15:56:11.495004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.267 qpair failed and we were unable to recover it.
00:38:32.267 [2024-10-01 15:56:11.495389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.267 [2024-10-01 15:56:11.495429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.267 qpair failed and we were unable to recover it.
00:38:32.267 [2024-10-01 15:56:11.495853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.267 [2024-10-01 15:56:11.495906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.267 qpair failed and we were unable to recover it.
00:38:32.267 [2024-10-01 15:56:11.496302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.267 [2024-10-01 15:56:11.496343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.267 qpair failed and we were unable to recover it.
00:38:32.267 [2024-10-01 15:56:11.496597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.267 [2024-10-01 15:56:11.496637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.267 qpair failed and we were unable to recover it.
00:38:32.267 [2024-10-01 15:56:11.496926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.267 [2024-10-01 15:56:11.496968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.267 qpair failed and we were unable to recover it.
00:38:32.267 [2024-10-01 15:56:11.497379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.267 [2024-10-01 15:56:11.497419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.267 qpair failed and we were unable to recover it.
00:38:32.267 [2024-10-01 15:56:11.497724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.267 [2024-10-01 15:56:11.497763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.267 qpair failed and we were unable to recover it.
00:38:32.267 [2024-10-01 15:56:11.498136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.267 [2024-10-01 15:56:11.498178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.267 qpair failed and we were unable to recover it.
00:38:32.267 [2024-10-01 15:56:11.498568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.267 [2024-10-01 15:56:11.498608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.267 qpair failed and we were unable to recover it.
00:38:32.267 [2024-10-01 15:56:11.498976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.267 [2024-10-01 15:56:11.499019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.267 qpair failed and we were unable to recover it.
00:38:32.267 [2024-10-01 15:56:11.499428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.267 [2024-10-01 15:56:11.499467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.267 qpair failed and we were unable to recover it.
00:38:32.267 [2024-10-01 15:56:11.499844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.267 [2024-10-01 15:56:11.499885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.267 qpair failed and we were unable to recover it.
00:38:32.267 [2024-10-01 15:56:11.500336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.267 [2024-10-01 15:56:11.500378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.267 qpair failed and we were unable to recover it.
00:38:32.267 [2024-10-01 15:56:11.500743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.267 [2024-10-01 15:56:11.500782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.267 qpair failed and we were unable to recover it.
00:38:32.267 [2024-10-01 15:56:11.501140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.267 [2024-10-01 15:56:11.501189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.267 qpair failed and we were unable to recover it.
00:38:32.267 [2024-10-01 15:56:11.501539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.267 [2024-10-01 15:56:11.501580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.267 qpair failed and we were unable to recover it.
00:38:32.267 [2024-10-01 15:56:11.501845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.267 [2024-10-01 15:56:11.501885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.267 qpair failed and we were unable to recover it.
00:38:32.267 [2024-10-01 15:56:11.502187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.267 [2024-10-01 15:56:11.502226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.267 qpair failed and we were unable to recover it.
00:38:32.267 [2024-10-01 15:56:11.502615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.267 [2024-10-01 15:56:11.502655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.267 qpair failed and we were unable to recover it.
00:38:32.267 [2024-10-01 15:56:11.503043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.267 [2024-10-01 15:56:11.503085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.267 qpair failed and we were unable to recover it.
00:38:32.267 [2024-10-01 15:56:11.503469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.267 [2024-10-01 15:56:11.503508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.267 qpair failed and we were unable to recover it.
00:38:32.267 [2024-10-01 15:56:11.503917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.267 [2024-10-01 15:56:11.503957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.267 qpair failed and we were unable to recover it.
00:38:32.267 [2024-10-01 15:56:11.504352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.267 [2024-10-01 15:56:11.504391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.267 qpair failed and we were unable to recover it.
00:38:32.267 [2024-10-01 15:56:11.504818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.267 [2024-10-01 15:56:11.504858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.267 qpair failed and we were unable to recover it.
00:38:32.267 [2024-10-01 15:56:11.505258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.267 [2024-10-01 15:56:11.505298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.267 qpair failed and we were unable to recover it.
00:38:32.267 [2024-10-01 15:56:11.505701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.267 [2024-10-01 15:56:11.505741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.267 qpair failed and we were unable to recover it.
00:38:32.267 [2024-10-01 15:56:11.506124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.267 [2024-10-01 15:56:11.506167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.267 qpair failed and we were unable to recover it.
00:38:32.267 [2024-10-01 15:56:11.506557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.267 [2024-10-01 15:56:11.506598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.267 qpair failed and we were unable to recover it.
00:38:32.267 [2024-10-01 15:56:11.506853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.267 [2024-10-01 15:56:11.506910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.267 qpair failed and we were unable to recover it.
00:38:32.267 [2024-10-01 15:56:11.507361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.267 [2024-10-01 15:56:11.507402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.267 qpair failed and we were unable to recover it. 00:38:32.267 [2024-10-01 15:56:11.507829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.267 [2024-10-01 15:56:11.507870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.267 qpair failed and we were unable to recover it. 00:38:32.267 [2024-10-01 15:56:11.508294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.267 [2024-10-01 15:56:11.508334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.267 qpair failed and we were unable to recover it. 00:38:32.267 [2024-10-01 15:56:11.508754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.267 [2024-10-01 15:56:11.508794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.267 qpair failed and we were unable to recover it. 00:38:32.267 [2024-10-01 15:56:11.509212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.267 [2024-10-01 15:56:11.509255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.267 qpair failed and we were unable to recover it. 
00:38:32.267 [2024-10-01 15:56:11.509506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.268 [2024-10-01 15:56:11.509545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.268 qpair failed and we were unable to recover it. 00:38:32.268 [2024-10-01 15:56:11.509944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.268 [2024-10-01 15:56:11.509987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.268 qpair failed and we were unable to recover it. 00:38:32.268 [2024-10-01 15:56:11.510391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.268 [2024-10-01 15:56:11.510432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.268 qpair failed and we were unable to recover it. 00:38:32.268 [2024-10-01 15:56:11.510851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.268 [2024-10-01 15:56:11.510891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.268 qpair failed and we were unable to recover it. 00:38:32.268 [2024-10-01 15:56:11.511264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.268 [2024-10-01 15:56:11.511305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.268 qpair failed and we were unable to recover it. 
00:38:32.268 [2024-10-01 15:56:11.511702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.268 [2024-10-01 15:56:11.511743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.268 qpair failed and we were unable to recover it. 00:38:32.268 [2024-10-01 15:56:11.512152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.268 [2024-10-01 15:56:11.512194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.268 qpair failed and we were unable to recover it. 00:38:32.268 [2024-10-01 15:56:11.512598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.268 [2024-10-01 15:56:11.512646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.268 qpair failed and we were unable to recover it. 00:38:32.268 [2024-10-01 15:56:11.512931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.268 [2024-10-01 15:56:11.512976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.268 qpair failed and we were unable to recover it. 00:38:32.268 [2024-10-01 15:56:11.513391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.268 [2024-10-01 15:56:11.513430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.268 qpair failed and we were unable to recover it. 
00:38:32.268 [2024-10-01 15:56:11.513861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.268 [2024-10-01 15:56:11.513914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.268 qpair failed and we were unable to recover it. 00:38:32.268 [2024-10-01 15:56:11.514324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.268 [2024-10-01 15:56:11.514364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.268 qpair failed and we were unable to recover it. 00:38:32.268 [2024-10-01 15:56:11.514794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.268 [2024-10-01 15:56:11.514833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.268 qpair failed and we were unable to recover it. 00:38:32.268 [2024-10-01 15:56:11.515156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.268 [2024-10-01 15:56:11.515196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.268 qpair failed and we were unable to recover it. 00:38:32.268 [2024-10-01 15:56:11.515626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.268 [2024-10-01 15:56:11.515666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.268 qpair failed and we were unable to recover it. 
00:38:32.268 [2024-10-01 15:56:11.516023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.268 [2024-10-01 15:56:11.516064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.268 qpair failed and we were unable to recover it. 00:38:32.268 [2024-10-01 15:56:11.516462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.268 [2024-10-01 15:56:11.516500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.268 qpair failed and we were unable to recover it. 00:38:32.268 [2024-10-01 15:56:11.516759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.268 [2024-10-01 15:56:11.516798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.268 qpair failed and we were unable to recover it. 00:38:32.268 [2024-10-01 15:56:11.517155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.268 [2024-10-01 15:56:11.517196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.268 qpair failed and we were unable to recover it. 00:38:32.268 [2024-10-01 15:56:11.517626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.268 [2024-10-01 15:56:11.517665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.268 qpair failed and we were unable to recover it. 
00:38:32.268 [2024-10-01 15:56:11.518023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.268 [2024-10-01 15:56:11.518064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.268 qpair failed and we were unable to recover it. 00:38:32.268 [2024-10-01 15:56:11.518456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.268 [2024-10-01 15:56:11.518495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.268 qpair failed and we were unable to recover it. 00:38:32.268 [2024-10-01 15:56:11.518917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.268 [2024-10-01 15:56:11.518957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.268 qpair failed and we were unable to recover it. 00:38:32.268 [2024-10-01 15:56:11.519354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.268 [2024-10-01 15:56:11.519393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.268 qpair failed and we were unable to recover it. 00:38:32.268 [2024-10-01 15:56:11.519740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.268 [2024-10-01 15:56:11.519779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.268 qpair failed and we were unable to recover it. 
00:38:32.268 [2024-10-01 15:56:11.520146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.268 [2024-10-01 15:56:11.520187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.268 qpair failed and we were unable to recover it. 00:38:32.268 [2024-10-01 15:56:11.520459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.268 [2024-10-01 15:56:11.520498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.268 qpair failed and we were unable to recover it. 00:38:32.268 [2024-10-01 15:56:11.520922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.268 [2024-10-01 15:56:11.520964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.268 qpair failed and we were unable to recover it. 00:38:32.268 [2024-10-01 15:56:11.521341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.268 [2024-10-01 15:56:11.521380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.268 qpair failed and we were unable to recover it. 00:38:32.268 [2024-10-01 15:56:11.521741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.268 [2024-10-01 15:56:11.521781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.268 qpair failed and we were unable to recover it. 
00:38:32.268 [2024-10-01 15:56:11.522139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.268 [2024-10-01 15:56:11.522180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.268 qpair failed and we were unable to recover it. 00:38:32.268 [2024-10-01 15:56:11.522606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.268 [2024-10-01 15:56:11.522644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.268 qpair failed and we were unable to recover it. 00:38:32.268 [2024-10-01 15:56:11.523035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.268 [2024-10-01 15:56:11.523076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.268 qpair failed and we were unable to recover it. 00:38:32.268 [2024-10-01 15:56:11.523338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.268 [2024-10-01 15:56:11.523378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.268 qpair failed and we were unable to recover it. 00:38:32.268 [2024-10-01 15:56:11.523791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.268 [2024-10-01 15:56:11.523837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.268 qpair failed and we were unable to recover it. 
00:38:32.268 [2024-10-01 15:56:11.524221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.268 [2024-10-01 15:56:11.524261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.268 qpair failed and we were unable to recover it. 00:38:32.268 [2024-10-01 15:56:11.524564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.268 [2024-10-01 15:56:11.524603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.268 qpair failed and we were unable to recover it. 00:38:32.268 [2024-10-01 15:56:11.524980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.269 [2024-10-01 15:56:11.525020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.269 qpair failed and we were unable to recover it. 00:38:32.269 [2024-10-01 15:56:11.525378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.269 [2024-10-01 15:56:11.525418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.269 qpair failed and we were unable to recover it. 00:38:32.269 [2024-10-01 15:56:11.525774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.269 [2024-10-01 15:56:11.525815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.269 qpair failed and we were unable to recover it. 
00:38:32.269 [2024-10-01 15:56:11.526057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.269 [2024-10-01 15:56:11.526100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.269 qpair failed and we were unable to recover it. 00:38:32.269 [2024-10-01 15:56:11.526528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.269 [2024-10-01 15:56:11.526567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.269 qpair failed and we were unable to recover it. 00:38:32.269 [2024-10-01 15:56:11.526920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.269 [2024-10-01 15:56:11.526960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.269 qpair failed and we were unable to recover it. 00:38:32.269 [2024-10-01 15:56:11.527354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.269 [2024-10-01 15:56:11.527394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.269 qpair failed and we were unable to recover it. 00:38:32.269 [2024-10-01 15:56:11.527817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.269 [2024-10-01 15:56:11.527856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.269 qpair failed and we were unable to recover it. 
00:38:32.269 [2024-10-01 15:56:11.528300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.269 [2024-10-01 15:56:11.528341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.269 qpair failed and we were unable to recover it. 00:38:32.269 [2024-10-01 15:56:11.528606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.269 [2024-10-01 15:56:11.528648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.269 qpair failed and we were unable to recover it. 00:38:32.269 [2024-10-01 15:56:11.529041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.269 [2024-10-01 15:56:11.529083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.269 qpair failed and we were unable to recover it. 00:38:32.269 [2024-10-01 15:56:11.529405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.269 [2024-10-01 15:56:11.529445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.269 qpair failed and we were unable to recover it. 00:38:32.269 [2024-10-01 15:56:11.529850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.269 [2024-10-01 15:56:11.529889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.269 qpair failed and we were unable to recover it. 
00:38:32.269 [2024-10-01 15:56:11.530299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.269 [2024-10-01 15:56:11.530339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.269 qpair failed and we were unable to recover it. 00:38:32.269 [2024-10-01 15:56:11.530718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.269 [2024-10-01 15:56:11.530757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.269 qpair failed and we were unable to recover it. 00:38:32.269 [2024-10-01 15:56:11.531050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.269 [2024-10-01 15:56:11.531093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.269 qpair failed and we were unable to recover it. 00:38:32.269 [2024-10-01 15:56:11.531529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.269 [2024-10-01 15:56:11.531568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.269 qpair failed and we were unable to recover it. 00:38:32.269 [2024-10-01 15:56:11.531927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.269 [2024-10-01 15:56:11.531967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.269 qpair failed and we were unable to recover it. 
00:38:32.269 [2024-10-01 15:56:11.532440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.269 [2024-10-01 15:56:11.532480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.269 qpair failed and we were unable to recover it. 00:38:32.269 [2024-10-01 15:56:11.532833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.269 [2024-10-01 15:56:11.532871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.269 qpair failed and we were unable to recover it. 00:38:32.269 [2024-10-01 15:56:11.533295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.269 [2024-10-01 15:56:11.533335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.269 qpair failed and we were unable to recover it. 00:38:32.269 [2024-10-01 15:56:11.533749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.269 [2024-10-01 15:56:11.533788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.269 qpair failed and we were unable to recover it. 00:38:32.269 [2024-10-01 15:56:11.534151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.269 [2024-10-01 15:56:11.534193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.269 qpair failed and we were unable to recover it. 
00:38:32.269 [2024-10-01 15:56:11.534566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.269 [2024-10-01 15:56:11.534606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.269 qpair failed and we were unable to recover it. 00:38:32.269 [2024-10-01 15:56:11.534963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.269 [2024-10-01 15:56:11.535003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.269 qpair failed and we were unable to recover it. 00:38:32.269 [2024-10-01 15:56:11.535401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.269 [2024-10-01 15:56:11.535440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.269 qpair failed and we were unable to recover it. 00:38:32.269 [2024-10-01 15:56:11.535853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.269 [2024-10-01 15:56:11.535892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.269 qpair failed and we were unable to recover it. 00:38:32.269 [2024-10-01 15:56:11.536328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.269 [2024-10-01 15:56:11.536368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.269 qpair failed and we were unable to recover it. 
00:38:32.269 [2024-10-01 15:56:11.536690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.269 [2024-10-01 15:56:11.536728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.269 qpair failed and we were unable to recover it. 00:38:32.269 [2024-10-01 15:56:11.537122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.269 [2024-10-01 15:56:11.537164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.269 qpair failed and we were unable to recover it. 00:38:32.269 [2024-10-01 15:56:11.537532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.269 [2024-10-01 15:56:11.537573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.269 qpair failed and we were unable to recover it. 00:38:32.269 [2024-10-01 15:56:11.537969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.269 [2024-10-01 15:56:11.538010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.269 qpair failed and we were unable to recover it. 00:38:32.269 [2024-10-01 15:56:11.538423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.269 [2024-10-01 15:56:11.538462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.269 qpair failed and we were unable to recover it. 
00:38:32.269 [2024-10-01 15:56:11.538857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.269 [2024-10-01 15:56:11.538906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.269 qpair failed and we were unable to recover it. 00:38:32.273 [message repeated through 2024-10-01 15:56:11.587465: connect() to 10.0.0.2:4420 for tqpair=0x1c58360 kept failing with errno = 111 and the qpair could not be recovered]
00:38:32.273 [2024-10-01 15:56:11.587823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.273 [2024-10-01 15:56:11.587862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.273 qpair failed and we were unable to recover it. 00:38:32.273 [2024-10-01 15:56:11.588262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.273 [2024-10-01 15:56:11.588303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.273 qpair failed and we were unable to recover it. 00:38:32.273 [2024-10-01 15:56:11.588709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.273 [2024-10-01 15:56:11.588749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.273 qpair failed and we were unable to recover it. 00:38:32.273 [2024-10-01 15:56:11.589071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.273 [2024-10-01 15:56:11.589113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.273 qpair failed and we were unable to recover it. 00:38:32.273 [2024-10-01 15:56:11.589517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.273 [2024-10-01 15:56:11.589556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.273 qpair failed and we were unable to recover it. 
00:38:32.273 [2024-10-01 15:56:11.589955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.273 [2024-10-01 15:56:11.589996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.273 qpair failed and we were unable to recover it. 00:38:32.273 [2024-10-01 15:56:11.590387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.273 [2024-10-01 15:56:11.590426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.273 qpair failed and we were unable to recover it. 00:38:32.273 [2024-10-01 15:56:11.590824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.273 [2024-10-01 15:56:11.590863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.273 qpair failed and we were unable to recover it. 00:38:32.273 [2024-10-01 15:56:11.591266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.273 [2024-10-01 15:56:11.591306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.273 qpair failed and we were unable to recover it. 00:38:32.273 [2024-10-01 15:56:11.591735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.273 [2024-10-01 15:56:11.591775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.273 qpair failed and we were unable to recover it. 
00:38:32.273 [2024-10-01 15:56:11.592071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.273 [2024-10-01 15:56:11.592112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.273 qpair failed and we were unable to recover it. 00:38:32.273 [2024-10-01 15:56:11.592500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.273 [2024-10-01 15:56:11.592539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.273 qpair failed and we were unable to recover it. 00:38:32.273 [2024-10-01 15:56:11.592892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.273 [2024-10-01 15:56:11.592953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.273 qpair failed and we were unable to recover it. 00:38:32.273 [2024-10-01 15:56:11.593376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.273 [2024-10-01 15:56:11.593417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.273 qpair failed and we were unable to recover it. 00:38:32.273 [2024-10-01 15:56:11.593846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.273 [2024-10-01 15:56:11.593885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.273 qpair failed and we were unable to recover it. 
00:38:32.273 [2024-10-01 15:56:11.594249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.273 [2024-10-01 15:56:11.594289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.273 qpair failed and we were unable to recover it. 00:38:32.273 [2024-10-01 15:56:11.594645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.273 [2024-10-01 15:56:11.594684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.273 qpair failed and we were unable to recover it. 00:38:32.273 [2024-10-01 15:56:11.595042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.273 [2024-10-01 15:56:11.595084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.273 qpair failed and we were unable to recover it. 00:38:32.273 [2024-10-01 15:56:11.595443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.273 [2024-10-01 15:56:11.595481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.273 qpair failed and we were unable to recover it. 00:38:32.273 [2024-10-01 15:56:11.595840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.273 [2024-10-01 15:56:11.595878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.273 qpair failed and we were unable to recover it. 
00:38:32.273 [2024-10-01 15:56:11.596279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.273 [2024-10-01 15:56:11.596320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.273 qpair failed and we were unable to recover it. 00:38:32.273 [2024-10-01 15:56:11.596702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.273 [2024-10-01 15:56:11.596742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.273 qpair failed and we were unable to recover it. 00:38:32.273 [2024-10-01 15:56:11.597115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.273 [2024-10-01 15:56:11.597157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.273 qpair failed and we were unable to recover it. 00:38:32.273 [2024-10-01 15:56:11.597568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.273 [2024-10-01 15:56:11.597608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.273 qpair failed and we were unable to recover it. 00:38:32.273 [2024-10-01 15:56:11.597968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.273 [2024-10-01 15:56:11.598008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.273 qpair failed and we were unable to recover it. 
00:38:32.273 [2024-10-01 15:56:11.598371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.273 [2024-10-01 15:56:11.598411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.273 qpair failed and we were unable to recover it. 00:38:32.273 [2024-10-01 15:56:11.598811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.273 [2024-10-01 15:56:11.598851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.273 qpair failed and we were unable to recover it. 00:38:32.273 [2024-10-01 15:56:11.599262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.273 [2024-10-01 15:56:11.599301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.273 qpair failed and we were unable to recover it. 00:38:32.273 [2024-10-01 15:56:11.599610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.273 [2024-10-01 15:56:11.599650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.274 qpair failed and we were unable to recover it. 00:38:32.274 [2024-10-01 15:56:11.600019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.274 [2024-10-01 15:56:11.600061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.274 qpair failed and we were unable to recover it. 
00:38:32.274 [2024-10-01 15:56:11.600446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.274 [2024-10-01 15:56:11.600485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.274 qpair failed and we were unable to recover it. 00:38:32.274 [2024-10-01 15:56:11.600872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.274 [2024-10-01 15:56:11.600941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.274 qpair failed and we were unable to recover it. 00:38:32.274 [2024-10-01 15:56:11.601341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.274 [2024-10-01 15:56:11.601381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.274 qpair failed and we were unable to recover it. 00:38:32.274 [2024-10-01 15:56:11.601803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.274 [2024-10-01 15:56:11.601842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.274 qpair failed and we were unable to recover it. 00:38:32.274 [2024-10-01 15:56:11.602228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.274 [2024-10-01 15:56:11.602269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.274 qpair failed and we were unable to recover it. 
00:38:32.274 [2024-10-01 15:56:11.602624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.274 [2024-10-01 15:56:11.602664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.274 qpair failed and we were unable to recover it. 00:38:32.274 [2024-10-01 15:56:11.603022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.274 [2024-10-01 15:56:11.603063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.274 qpair failed and we were unable to recover it. 00:38:32.274 [2024-10-01 15:56:11.603449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.274 [2024-10-01 15:56:11.603488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.274 qpair failed and we were unable to recover it. 00:38:32.274 [2024-10-01 15:56:11.603861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.274 [2024-10-01 15:56:11.603917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.274 qpair failed and we were unable to recover it. 00:38:32.274 [2024-10-01 15:56:11.604225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.274 [2024-10-01 15:56:11.604272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.274 qpair failed and we were unable to recover it. 
00:38:32.274 [2024-10-01 15:56:11.604673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.274 [2024-10-01 15:56:11.604713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.274 qpair failed and we were unable to recover it. 00:38:32.274 [2024-10-01 15:56:11.605140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.274 [2024-10-01 15:56:11.605181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.274 qpair failed and we were unable to recover it. 00:38:32.274 [2024-10-01 15:56:11.605582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.274 [2024-10-01 15:56:11.605623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.274 qpair failed and we were unable to recover it. 00:38:32.274 [2024-10-01 15:56:11.606021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.274 [2024-10-01 15:56:11.606062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.274 qpair failed and we were unable to recover it. 00:38:32.274 [2024-10-01 15:56:11.606480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.274 [2024-10-01 15:56:11.606518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.274 qpair failed and we were unable to recover it. 
00:38:32.274 [2024-10-01 15:56:11.606921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.274 [2024-10-01 15:56:11.606961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.274 qpair failed and we were unable to recover it. 00:38:32.274 [2024-10-01 15:56:11.607381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.274 [2024-10-01 15:56:11.607421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.274 qpair failed and we were unable to recover it. 00:38:32.274 [2024-10-01 15:56:11.607825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.274 [2024-10-01 15:56:11.607864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.274 qpair failed and we were unable to recover it. 00:38:32.274 [2024-10-01 15:56:11.608250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.274 [2024-10-01 15:56:11.608290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.274 qpair failed and we were unable to recover it. 00:38:32.274 [2024-10-01 15:56:11.608674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.274 [2024-10-01 15:56:11.608713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.274 qpair failed and we were unable to recover it. 
00:38:32.274 [2024-10-01 15:56:11.609103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.274 [2024-10-01 15:56:11.609144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.274 qpair failed and we were unable to recover it. 00:38:32.274 [2024-10-01 15:56:11.609576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.274 [2024-10-01 15:56:11.609616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.274 qpair failed and we were unable to recover it. 00:38:32.274 [2024-10-01 15:56:11.609970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.274 [2024-10-01 15:56:11.610012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.274 qpair failed and we were unable to recover it. 00:38:32.274 [2024-10-01 15:56:11.610402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.274 [2024-10-01 15:56:11.610442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.274 qpair failed and we were unable to recover it. 00:38:32.274 [2024-10-01 15:56:11.610749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.274 [2024-10-01 15:56:11.610788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.274 qpair failed and we were unable to recover it. 
00:38:32.274 [2024-10-01 15:56:11.611146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.274 [2024-10-01 15:56:11.611189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.274 qpair failed and we were unable to recover it. 00:38:32.274 [2024-10-01 15:56:11.611570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.274 [2024-10-01 15:56:11.611611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.274 qpair failed and we were unable to recover it. 00:38:32.274 [2024-10-01 15:56:11.611968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.274 [2024-10-01 15:56:11.612009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.274 qpair failed and we were unable to recover it. 00:38:32.274 [2024-10-01 15:56:11.612425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.274 [2024-10-01 15:56:11.612466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.274 qpair failed and we were unable to recover it. 00:38:32.274 [2024-10-01 15:56:11.612820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.274 [2024-10-01 15:56:11.612859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.274 qpair failed and we were unable to recover it. 
00:38:32.274 [2024-10-01 15:56:11.613326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.274 [2024-10-01 15:56:11.613368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.274 qpair failed and we were unable to recover it. 00:38:32.274 [2024-10-01 15:56:11.613765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.274 [2024-10-01 15:56:11.613804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.274 qpair failed and we were unable to recover it. 00:38:32.274 [2024-10-01 15:56:11.614502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.274 [2024-10-01 15:56:11.614545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.274 qpair failed and we were unable to recover it. 00:38:32.274 [2024-10-01 15:56:11.614952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.274 [2024-10-01 15:56:11.614994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.274 qpair failed and we were unable to recover it. 00:38:32.274 [2024-10-01 15:56:11.615400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.275 [2024-10-01 15:56:11.615440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.275 qpair failed and we were unable to recover it. 
00:38:32.275 [2024-10-01 15:56:11.615844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.275 [2024-10-01 15:56:11.615884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.275 qpair failed and we were unable to recover it. 00:38:32.275 [2024-10-01 15:56:11.616222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.275 [2024-10-01 15:56:11.616270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.275 qpair failed and we were unable to recover it. 00:38:32.275 [2024-10-01 15:56:11.616664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.275 [2024-10-01 15:56:11.616703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.275 qpair failed and we were unable to recover it. 00:38:32.275 [2024-10-01 15:56:11.617072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.275 [2024-10-01 15:56:11.617112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.275 qpair failed and we were unable to recover it. 00:38:32.275 [2024-10-01 15:56:11.617537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.275 [2024-10-01 15:56:11.617578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.275 qpair failed and we were unable to recover it. 
00:38:32.275 [2024-10-01 15:56:11.617892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.275 [2024-10-01 15:56:11.617963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.275 qpair failed and we were unable to recover it. 00:38:32.275 [2024-10-01 15:56:11.618404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.275 [2024-10-01 15:56:11.618444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.275 qpair failed and we were unable to recover it. 00:38:32.275 [2024-10-01 15:56:11.618868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.275 [2024-10-01 15:56:11.618925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.275 qpair failed and we were unable to recover it. 00:38:32.275 [2024-10-01 15:56:11.619364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.275 [2024-10-01 15:56:11.619404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.275 qpair failed and we were unable to recover it. 00:38:32.275 [2024-10-01 15:56:11.619764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.275 [2024-10-01 15:56:11.619804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.275 qpair failed and we were unable to recover it. 
00:38:32.275 [2024-10-01 15:56:11.620213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.275 [2024-10-01 15:56:11.620255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.275 qpair failed and we were unable to recover it. 00:38:32.275 [2024-10-01 15:56:11.620649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.275 [2024-10-01 15:56:11.620689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.275 qpair failed and we were unable to recover it. 00:38:32.275 [2024-10-01 15:56:11.621052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.275 [2024-10-01 15:56:11.621093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.275 qpair failed and we were unable to recover it. 00:38:32.275 [2024-10-01 15:56:11.621451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.275 [2024-10-01 15:56:11.621490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.275 qpair failed and we were unable to recover it. 00:38:32.275 [2024-10-01 15:56:11.621847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.275 [2024-10-01 15:56:11.621886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.275 qpair failed and we were unable to recover it. 
00:38:32.278 [2024-10-01 15:56:11.666016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.278 [2024-10-01 15:56:11.666046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.278 qpair failed and we were unable to recover it. 00:38:32.278 [2024-10-01 15:56:11.666375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.278 [2024-10-01 15:56:11.666404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.278 qpair failed and we were unable to recover it. 00:38:32.278 [2024-10-01 15:56:11.666759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.278 [2024-10-01 15:56:11.666788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.278 qpair failed and we were unable to recover it. 00:38:32.278 [2024-10-01 15:56:11.667043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.278 [2024-10-01 15:56:11.667072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.278 qpair failed and we were unable to recover it. 00:38:32.278 [2024-10-01 15:56:11.667408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.278 [2024-10-01 15:56:11.667437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.278 qpair failed and we were unable to recover it. 
00:38:32.278 [2024-10-01 15:56:11.667685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.278 [2024-10-01 15:56:11.667712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.278 qpair failed and we were unable to recover it. 00:38:32.278 [2024-10-01 15:56:11.667966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.278 [2024-10-01 15:56:11.667999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.278 qpair failed and we were unable to recover it. 00:38:32.278 [2024-10-01 15:56:11.668367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.278 [2024-10-01 15:56:11.668396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.278 qpair failed and we were unable to recover it. 00:38:32.278 [2024-10-01 15:56:11.668759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.278 [2024-10-01 15:56:11.668788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.278 qpair failed and we were unable to recover it. 00:38:32.278 [2024-10-01 15:56:11.669022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.278 [2024-10-01 15:56:11.669051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.278 qpair failed and we were unable to recover it. 
00:38:32.278 [2024-10-01 15:56:11.669453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.278 [2024-10-01 15:56:11.669482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.278 qpair failed and we were unable to recover it. 00:38:32.278 [2024-10-01 15:56:11.669843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.278 [2024-10-01 15:56:11.669873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.278 qpair failed and we were unable to recover it. 00:38:32.278 [2024-10-01 15:56:11.670290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.278 [2024-10-01 15:56:11.670322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.278 qpair failed and we were unable to recover it. 00:38:32.278 [2024-10-01 15:56:11.670670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.278 [2024-10-01 15:56:11.670700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.278 qpair failed and we were unable to recover it. 00:38:32.278 [2024-10-01 15:56:11.671056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.278 [2024-10-01 15:56:11.671085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.278 qpair failed and we were unable to recover it. 
00:38:32.278 [2024-10-01 15:56:11.671445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.278 [2024-10-01 15:56:11.671473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.278 qpair failed and we were unable to recover it. 00:38:32.278 [2024-10-01 15:56:11.671841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.278 [2024-10-01 15:56:11.671870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.278 qpair failed and we were unable to recover it. 00:38:32.278 [2024-10-01 15:56:11.672235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.278 [2024-10-01 15:56:11.672264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.278 qpair failed and we were unable to recover it. 00:38:32.278 [2024-10-01 15:56:11.672609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.278 [2024-10-01 15:56:11.672640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.278 qpair failed and we were unable to recover it. 00:38:32.279 [2024-10-01 15:56:11.672994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.279 [2024-10-01 15:56:11.673023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.279 qpair failed and we were unable to recover it. 
00:38:32.279 [2024-10-01 15:56:11.673376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.279 [2024-10-01 15:56:11.673404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.279 qpair failed and we were unable to recover it. 00:38:32.279 [2024-10-01 15:56:11.673644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.279 [2024-10-01 15:56:11.673670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.279 qpair failed and we were unable to recover it. 00:38:32.279 [2024-10-01 15:56:11.674054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.279 [2024-10-01 15:56:11.674082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.279 qpair failed and we were unable to recover it. 00:38:32.279 [2024-10-01 15:56:11.674446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.279 [2024-10-01 15:56:11.674473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.279 qpair failed and we were unable to recover it. 00:38:32.279 [2024-10-01 15:56:11.674828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.279 [2024-10-01 15:56:11.674865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.279 qpair failed and we were unable to recover it. 
00:38:32.279 [2024-10-01 15:56:11.675213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.279 [2024-10-01 15:56:11.675242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.279 qpair failed and we were unable to recover it. 00:38:32.279 [2024-10-01 15:56:11.675601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.279 [2024-10-01 15:56:11.675630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.279 qpair failed and we were unable to recover it. 00:38:32.279 [2024-10-01 15:56:11.676044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.279 [2024-10-01 15:56:11.676074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.279 qpair failed and we were unable to recover it. 00:38:32.279 [2024-10-01 15:56:11.676440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.279 [2024-10-01 15:56:11.676468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.279 qpair failed and we were unable to recover it. 00:38:32.279 [2024-10-01 15:56:11.676832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.279 [2024-10-01 15:56:11.676859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.279 qpair failed and we were unable to recover it. 
00:38:32.279 [2024-10-01 15:56:11.677229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.279 [2024-10-01 15:56:11.677257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.279 qpair failed and we were unable to recover it. 00:38:32.279 [2024-10-01 15:56:11.677623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.279 [2024-10-01 15:56:11.677651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.279 qpair failed and we were unable to recover it. 00:38:32.279 [2024-10-01 15:56:11.677907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.279 [2024-10-01 15:56:11.677937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.279 qpair failed and we were unable to recover it. 00:38:32.279 [2024-10-01 15:56:11.678208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.279 [2024-10-01 15:56:11.678236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.279 qpair failed and we were unable to recover it. 00:38:32.279 [2024-10-01 15:56:11.678593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.279 [2024-10-01 15:56:11.678621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.279 qpair failed and we were unable to recover it. 
00:38:32.279 [2024-10-01 15:56:11.678976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.279 [2024-10-01 15:56:11.679006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.279 qpair failed and we were unable to recover it. 00:38:32.279 [2024-10-01 15:56:11.679379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.279 [2024-10-01 15:56:11.679407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.279 qpair failed and we were unable to recover it. 00:38:32.279 [2024-10-01 15:56:11.679778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.279 [2024-10-01 15:56:11.679806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.279 qpair failed and we were unable to recover it. 00:38:32.279 [2024-10-01 15:56:11.680216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.279 [2024-10-01 15:56:11.680247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.279 qpair failed and we were unable to recover it. 00:38:32.279 [2024-10-01 15:56:11.680611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.279 [2024-10-01 15:56:11.680647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.279 qpair failed and we were unable to recover it. 
00:38:32.279 [2024-10-01 15:56:11.681005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.279 [2024-10-01 15:56:11.681035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.279 qpair failed and we were unable to recover it. 00:38:32.279 [2024-10-01 15:56:11.681390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.279 [2024-10-01 15:56:11.681417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.279 qpair failed and we were unable to recover it. 00:38:32.279 [2024-10-01 15:56:11.681780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.279 [2024-10-01 15:56:11.681807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.279 qpair failed and we were unable to recover it. 00:38:32.279 [2024-10-01 15:56:11.682178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.279 [2024-10-01 15:56:11.682207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.279 qpair failed and we were unable to recover it. 00:38:32.279 [2024-10-01 15:56:11.682586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.279 [2024-10-01 15:56:11.682614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.279 qpair failed and we were unable to recover it. 
00:38:32.279 [2024-10-01 15:56:11.682999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.279 [2024-10-01 15:56:11.683027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.279 qpair failed and we were unable to recover it. 00:38:32.279 [2024-10-01 15:56:11.683285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.279 [2024-10-01 15:56:11.683312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.279 qpair failed and we were unable to recover it. 00:38:32.279 [2024-10-01 15:56:11.683655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.279 [2024-10-01 15:56:11.683682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.279 qpair failed and we were unable to recover it. 00:38:32.279 [2024-10-01 15:56:11.684039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.279 [2024-10-01 15:56:11.684069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.279 qpair failed and we were unable to recover it. 00:38:32.279 [2024-10-01 15:56:11.684438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.279 [2024-10-01 15:56:11.684465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.279 qpair failed and we were unable to recover it. 
00:38:32.279 [2024-10-01 15:56:11.684695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.279 [2024-10-01 15:56:11.684722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.279 qpair failed and we were unable to recover it. 00:38:32.279 [2024-10-01 15:56:11.685077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.279 [2024-10-01 15:56:11.685106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.279 qpair failed and we were unable to recover it. 00:38:32.279 [2024-10-01 15:56:11.685481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.279 [2024-10-01 15:56:11.685508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.279 qpair failed and we were unable to recover it. 00:38:32.279 [2024-10-01 15:56:11.685644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.279 [2024-10-01 15:56:11.685671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.279 qpair failed and we were unable to recover it. 00:38:32.279 [2024-10-01 15:56:11.686054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.280 [2024-10-01 15:56:11.686083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.280 qpair failed and we were unable to recover it. 
00:38:32.280 [2024-10-01 15:56:11.686335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.280 [2024-10-01 15:56:11.686363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.280 qpair failed and we were unable to recover it. 00:38:32.280 [2024-10-01 15:56:11.686696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.280 [2024-10-01 15:56:11.686726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.280 qpair failed and we were unable to recover it. 00:38:32.280 [2024-10-01 15:56:11.687092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.280 [2024-10-01 15:56:11.687121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.280 qpair failed and we were unable to recover it. 00:38:32.280 [2024-10-01 15:56:11.687444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.280 [2024-10-01 15:56:11.687473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.280 qpair failed and we were unable to recover it. 00:38:32.280 [2024-10-01 15:56:11.687840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.280 [2024-10-01 15:56:11.687869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.280 qpair failed and we were unable to recover it. 
00:38:32.280 [2024-10-01 15:56:11.688242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.280 [2024-10-01 15:56:11.688270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.280 qpair failed and we were unable to recover it. 00:38:32.280 [2024-10-01 15:56:11.688626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.280 [2024-10-01 15:56:11.688653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.280 qpair failed and we were unable to recover it. 00:38:32.280 [2024-10-01 15:56:11.689007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.280 [2024-10-01 15:56:11.689037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.280 qpair failed and we were unable to recover it. 00:38:32.280 [2024-10-01 15:56:11.689386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.280 [2024-10-01 15:56:11.689415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.280 qpair failed and we were unable to recover it. 00:38:32.280 [2024-10-01 15:56:11.689663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.280 [2024-10-01 15:56:11.689690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.280 qpair failed and we were unable to recover it. 
00:38:32.280 [2024-10-01 15:56:11.690039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.280 [2024-10-01 15:56:11.690069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.280 qpair failed and we were unable to recover it. 00:38:32.280 [2024-10-01 15:56:11.690344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.280 [2024-10-01 15:56:11.690377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.280 qpair failed and we were unable to recover it. 00:38:32.280 [2024-10-01 15:56:11.690739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.280 [2024-10-01 15:56:11.690767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.280 qpair failed and we were unable to recover it. 00:38:32.280 [2024-10-01 15:56:11.691144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.280 [2024-10-01 15:56:11.691174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.280 qpair failed and we were unable to recover it. 00:38:32.280 [2024-10-01 15:56:11.691518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.280 [2024-10-01 15:56:11.691546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.280 qpair failed and we were unable to recover it. 
00:38:32.280 [2024-10-01 15:56:11.691851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.280 [2024-10-01 15:56:11.691920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.280 qpair failed and we were unable to recover it. 00:38:32.280 [2024-10-01 15:56:11.692163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.280 [2024-10-01 15:56:11.692192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.280 qpair failed and we were unable to recover it. 00:38:32.280 [2024-10-01 15:56:11.692533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.280 [2024-10-01 15:56:11.692560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.280 qpair failed and we were unable to recover it. 00:38:32.280 [2024-10-01 15:56:11.692912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.280 [2024-10-01 15:56:11.692942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.280 qpair failed and we were unable to recover it. 00:38:32.280 [2024-10-01 15:56:11.693273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.280 [2024-10-01 15:56:11.693300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.280 qpair failed and we were unable to recover it. 
00:38:32.280 [2024-10-01 15:56:11.693652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.280 [2024-10-01 15:56:11.693679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.280 qpair failed and we were unable to recover it. 00:38:32.280 [2024-10-01 15:56:11.694027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.280 [2024-10-01 15:56:11.694057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.280 qpair failed and we were unable to recover it. 00:38:32.280 [2024-10-01 15:56:11.694399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.280 [2024-10-01 15:56:11.694427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.280 qpair failed and we were unable to recover it. 00:38:32.280 [2024-10-01 15:56:11.694773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.280 [2024-10-01 15:56:11.694803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.280 qpair failed and we were unable to recover it. 00:38:32.280 [2024-10-01 15:56:11.695142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.280 [2024-10-01 15:56:11.695171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.280 qpair failed and we were unable to recover it. 
00:38:32.280 [2024-10-01 15:56:11.695525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.280 [2024-10-01 15:56:11.695553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.280 qpair failed and we were unable to recover it. 00:38:32.280 [2024-10-01 15:56:11.695878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.280 [2024-10-01 15:56:11.695916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.280 qpair failed and we were unable to recover it. 00:38:32.280 [2024-10-01 15:56:11.696269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.280 [2024-10-01 15:56:11.696297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.280 qpair failed and we were unable to recover it. 00:38:32.280 [2024-10-01 15:56:11.696576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.280 [2024-10-01 15:56:11.696604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.280 qpair failed and we were unable to recover it. 00:38:32.280 [2024-10-01 15:56:11.696969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.280 [2024-10-01 15:56:11.696998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.280 qpair failed and we were unable to recover it. 
00:38:32.280 [2024-10-01 15:56:11.697306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.280 [2024-10-01 15:56:11.697335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.280 qpair failed and we were unable to recover it.
00:38:32.280 [2024-10-01 15:56:11.697677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.280 [2024-10-01 15:56:11.697704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.280 qpair failed and we were unable to recover it.
00:38:32.280 [2024-10-01 15:56:11.698038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.280 [2024-10-01 15:56:11.698068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.280 qpair failed and we were unable to recover it.
00:38:32.280 [2024-10-01 15:56:11.698316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.280 [2024-10-01 15:56:11.698343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.280 qpair failed and we were unable to recover it.
00:38:32.281 [2024-10-01 15:56:11.698586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.281 [2024-10-01 15:56:11.698613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.281 qpair failed and we were unable to recover it.
00:38:32.281 [2024-10-01 15:56:11.698959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.281 [2024-10-01 15:56:11.698987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.281 qpair failed and we were unable to recover it.
00:38:32.281 [2024-10-01 15:56:11.699336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.281 [2024-10-01 15:56:11.699363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.281 qpair failed and we were unable to recover it.
00:38:32.281 [2024-10-01 15:56:11.699704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.281 [2024-10-01 15:56:11.699732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.281 qpair failed and we were unable to recover it.
00:38:32.281 [2024-10-01 15:56:11.700094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.281 [2024-10-01 15:56:11.700128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.281 qpair failed and we were unable to recover it.
00:38:32.281 [2024-10-01 15:56:11.700492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.281 [2024-10-01 15:56:11.700519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.281 qpair failed and we were unable to recover it.
00:38:32.281 [2024-10-01 15:56:11.700919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.281 [2024-10-01 15:56:11.700948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.281 qpair failed and we were unable to recover it.
00:38:32.281 [2024-10-01 15:56:11.701314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.281 [2024-10-01 15:56:11.701341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.281 qpair failed and we were unable to recover it.
00:38:32.281 [2024-10-01 15:56:11.701697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.281 [2024-10-01 15:56:11.701724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.281 qpair failed and we were unable to recover it.
00:38:32.281 [2024-10-01 15:56:11.702060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.281 [2024-10-01 15:56:11.702088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.281 qpair failed and we were unable to recover it.
00:38:32.281 [2024-10-01 15:56:11.702525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.281 [2024-10-01 15:56:11.702553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.281 qpair failed and we were unable to recover it.
00:38:32.281 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3410639 Killed "${NVMF_APP[@]}" "$@"
00:38:32.281 [2024-10-01 15:56:11.702912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.281 [2024-10-01 15:56:11.702941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.281 qpair failed and we were unable to recover it.
00:38:32.281 [2024-10-01 15:56:11.703291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.281 [2024-10-01 15:56:11.703318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.281 qpair failed and we were unable to recover it.
00:38:32.281 15:56:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:38:32.281 [2024-10-01 15:56:11.703649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.281 [2024-10-01 15:56:11.703677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.281 qpair failed and we were unable to recover it.
00:38:32.281 15:56:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:38:32.281 [2024-10-01 15:56:11.704015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.281 [2024-10-01 15:56:11.704044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.281 qpair failed and we were unable to recover it.
00:38:32.281 15:56:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:38:32.281 [2024-10-01 15:56:11.704432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.281 [2024-10-01 15:56:11.704460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.281 qpair failed and we were unable to recover it.
00:38:32.281 15:56:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:38:32.281 15:56:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:32.281 [2024-10-01 15:56:11.704789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.281 [2024-10-01 15:56:11.704817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.281 qpair failed and we were unable to recover it.
00:38:32.281 [2024-10-01 15:56:11.705118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.281 [2024-10-01 15:56:11.705147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.281 qpair failed and we were unable to recover it.
00:38:32.281 [2024-10-01 15:56:11.705487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.281 [2024-10-01 15:56:11.705515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.281 qpair failed and we were unable to recover it.
00:38:32.551 [2024-10-01 15:56:11.705857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.551 [2024-10-01 15:56:11.705886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.551 qpair failed and we were unable to recover it.
00:38:32.551 [2024-10-01 15:56:11.706151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.551 [2024-10-01 15:56:11.706181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.551 qpair failed and we were unable to recover it.
00:38:32.551 [2024-10-01 15:56:11.706559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.551 [2024-10-01 15:56:11.706587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.551 qpair failed and we were unable to recover it.
00:38:32.551 [2024-10-01 15:56:11.706926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.551 [2024-10-01 15:56:11.706956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.551 qpair failed and we were unable to recover it.
00:38:32.551 [2024-10-01 15:56:11.707316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.551 [2024-10-01 15:56:11.707344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.551 qpair failed and we were unable to recover it.
00:38:32.551 [2024-10-01 15:56:11.707695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.551 [2024-10-01 15:56:11.707722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.551 qpair failed and we were unable to recover it.
00:38:32.551 [2024-10-01 15:56:11.707966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.551 [2024-10-01 15:56:11.707999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.551 qpair failed and we were unable to recover it.
00:38:32.551 [2024-10-01 15:56:11.708356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.551 [2024-10-01 15:56:11.708385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.551 qpair failed and we were unable to recover it.
00:38:32.551 [2024-10-01 15:56:11.708713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.551 [2024-10-01 15:56:11.708742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.551 qpair failed and we were unable to recover it.
00:38:32.551 [2024-10-01 15:56:11.709100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.551 [2024-10-01 15:56:11.709128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.551 qpair failed and we were unable to recover it.
00:38:32.551 [2024-10-01 15:56:11.709489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.551 [2024-10-01 15:56:11.709518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.551 qpair failed and we were unable to recover it.
00:38:32.551 [2024-10-01 15:56:11.709906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.551 [2024-10-01 15:56:11.709936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.551 qpair failed and we were unable to recover it.
00:38:32.551 [2024-10-01 15:56:11.710179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.551 [2024-10-01 15:56:11.710207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.551 qpair failed and we were unable to recover it.
00:38:32.551 [2024-10-01 15:56:11.710610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.551 [2024-10-01 15:56:11.710639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.551 qpair failed and we were unable to recover it.
00:38:32.551 [2024-10-01 15:56:11.710911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.551 [2024-10-01 15:56:11.710940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.551 qpair failed and we were unable to recover it.
00:38:32.551 [2024-10-01 15:56:11.711289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.551 [2024-10-01 15:56:11.711317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.551 qpair failed and we were unable to recover it.
00:38:32.551 [2024-10-01 15:56:11.711652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.551 [2024-10-01 15:56:11.711681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.551 qpair failed and we were unable to recover it.
00:38:32.551 [2024-10-01 15:56:11.711918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.552 [2024-10-01 15:56:11.711948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.552 qpair failed and we were unable to recover it.
00:38:32.552 [2024-10-01 15:56:11.712304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.552 [2024-10-01 15:56:11.712332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.552 qpair failed and we were unable to recover it.
00:38:32.552 15:56:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # nvmfpid=3411606
00:38:32.552 [2024-10-01 15:56:11.712564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.552 [2024-10-01 15:56:11.712592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.552 qpair failed and we were unable to recover it.
00:38:32.552 15:56:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # waitforlisten 3411606
00:38:32.552 [2024-10-01 15:56:11.712930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.552 [2024-10-01 15:56:11.712961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.552 qpair failed and we were unable to recover it.
00:38:32.552 15:56:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:38:32.552 15:56:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3411606 ']'
00:38:32.552 [2024-10-01 15:56:11.713358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.552 [2024-10-01 15:56:11.713392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.552 qpair failed and we were unable to recover it.
00:38:32.552 15:56:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:38:32.552 [2024-10-01 15:56:11.713705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.552 15:56:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:38:32.552 [2024-10-01 15:56:11.713735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.552 qpair failed and we were unable to recover it.
00:38:32.552 [2024-10-01 15:56:11.713932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.552 [2024-10-01 15:56:11.713962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.552 qpair failed and we were unable to recover it.
00:38:32.552 15:56:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
[2024-10-01 15:56:11.714331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.552 15:56:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:38:32.552 [2024-10-01 15:56:11.714361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.552 qpair failed and we were unable to recover it.
00:38:32.552 15:56:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:32.552 [2024-10-01 15:56:11.714696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.552 [2024-10-01 15:56:11.714725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.552 qpair failed and we were unable to recover it.
00:38:32.552 [2024-10-01 15:56:11.715025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.552 [2024-10-01 15:56:11.715056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.552 qpair failed and we were unable to recover it.
00:38:32.552 [2024-10-01 15:56:11.715321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.552 [2024-10-01 15:56:11.715349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.552 qpair failed and we were unable to recover it.
00:38:32.552 [2024-10-01 15:56:11.715686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.552 [2024-10-01 15:56:11.715714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.552 qpair failed and we were unable to recover it.
00:38:32.552 [2024-10-01 15:56:11.716078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.552 [2024-10-01 15:56:11.716111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.552 qpair failed and we were unable to recover it.
00:38:32.552 [2024-10-01 15:56:11.716492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.552 [2024-10-01 15:56:11.716520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.552 qpair failed and we were unable to recover it.
00:38:32.552 [2024-10-01 15:56:11.716854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.552 [2024-10-01 15:56:11.716883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.552 qpair failed and we were unable to recover it.
00:38:32.552 [2024-10-01 15:56:11.717121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.552 [2024-10-01 15:56:11.717158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.552 qpair failed and we were unable to recover it.
00:38:32.552 [2024-10-01 15:56:11.717508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.552 [2024-10-01 15:56:11.717536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.552 qpair failed and we were unable to recover it.
00:38:32.552 [2024-10-01 15:56:11.717770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.552 [2024-10-01 15:56:11.717799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.552 qpair failed and we were unable to recover it.
00:38:32.552 [2024-10-01 15:56:11.718123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.552 [2024-10-01 15:56:11.718153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.552 qpair failed and we were unable to recover it.
00:38:32.552 [2024-10-01 15:56:11.718492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.552 [2024-10-01 15:56:11.718521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.552 qpair failed and we were unable to recover it.
00:38:32.552 [2024-10-01 15:56:11.718884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.552 [2024-10-01 15:56:11.718922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.552 qpair failed and we were unable to recover it.
00:38:32.552 [2024-10-01 15:56:11.719214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.552 [2024-10-01 15:56:11.719243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.552 qpair failed and we were unable to recover it.
00:38:32.552 [2024-10-01 15:56:11.719617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.552 [2024-10-01 15:56:11.719645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.552 qpair failed and we were unable to recover it.
00:38:32.552 [2024-10-01 15:56:11.719917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.552 [2024-10-01 15:56:11.719947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.552 qpair failed and we were unable to recover it.
00:38:32.552 [2024-10-01 15:56:11.720290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.552 [2024-10-01 15:56:11.720319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.552 qpair failed and we were unable to recover it.
00:38:32.552 [2024-10-01 15:56:11.720666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.552 [2024-10-01 15:56:11.720694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.552 qpair failed and we were unable to recover it.
00:38:32.552 [2024-10-01 15:56:11.720946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.552 [2024-10-01 15:56:11.720975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.552 qpair failed and we were unable to recover it.
00:38:32.552 [2024-10-01 15:56:11.721210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.552 [2024-10-01 15:56:11.721243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.552 qpair failed and we were unable to recover it.
00:38:32.552 [2024-10-01 15:56:11.721546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.552 [2024-10-01 15:56:11.721574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.552 qpair failed and we were unable to recover it.
00:38:32.552 [2024-10-01 15:56:11.721840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.552 [2024-10-01 15:56:11.721869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.552 qpair failed and we were unable to recover it.
00:38:32.552 [2024-10-01 15:56:11.722152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.552 [2024-10-01 15:56:11.722183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.552 qpair failed and we were unable to recover it.
00:38:32.552 [2024-10-01 15:56:11.722540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.552 [2024-10-01 15:56:11.722569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.552 qpair failed and we were unable to recover it.
00:38:32.552 [2024-10-01 15:56:11.722787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.552 [2024-10-01 15:56:11.722816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.552 qpair failed and we were unable to recover it.
00:38:32.552 [2024-10-01 15:56:11.722959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.552 [2024-10-01 15:56:11.722989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.552 qpair failed and we were unable to recover it.
00:38:32.552 [2024-10-01 15:56:11.723259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.552 [2024-10-01 15:56:11.723287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.552 qpair failed and we were unable to recover it.
00:38:32.552 [2024-10-01 15:56:11.723609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.552 [2024-10-01 15:56:11.723638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.552 qpair failed and we were unable to recover it.
00:38:32.552 [2024-10-01 15:56:11.723864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.552 [2024-10-01 15:56:11.723913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.552 qpair failed and we were unable to recover it.
00:38:32.552 [2024-10-01 15:56:11.724206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.552 [2024-10-01 15:56:11.724249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.552 qpair failed and we were unable to recover it.
00:38:32.552 [2024-10-01 15:56:11.724635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.552 [2024-10-01 15:56:11.724674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.552 qpair failed and we were unable to recover it.
00:38:32.552 [2024-10-01 15:56:11.725078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.553 [2024-10-01 15:56:11.725119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.553 qpair failed and we were unable to recover it.
00:38:32.553 [2024-10-01 15:56:11.725473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.553 [2024-10-01 15:56:11.725512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.553 qpair failed and we were unable to recover it.
00:38:32.553 [2024-10-01 15:56:11.725885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.553 [2024-10-01 15:56:11.725938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.553 qpair failed and we were unable to recover it.
00:38:32.553 [2024-10-01 15:56:11.726193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.553 [2024-10-01 15:56:11.726241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.553 qpair failed and we were unable to recover it.
00:38:32.553 [2024-10-01 15:56:11.726519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.553 [2024-10-01 15:56:11.726559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.553 qpair failed and we were unable to recover it.
00:38:32.553 [2024-10-01 15:56:11.726735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.553 [2024-10-01 15:56:11.726784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.553 qpair failed and we were unable to recover it.
00:38:32.553 [2024-10-01 15:56:11.726974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.553 [2024-10-01 15:56:11.727023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.553 qpair failed and we were unable to recover it.
00:38:32.553 [2024-10-01 15:56:11.727389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.553 [2024-10-01 15:56:11.727428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.553 qpair failed and we were unable to recover it.
00:38:32.553 [2024-10-01 15:56:11.727684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.553 [2024-10-01 15:56:11.727723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.553 qpair failed and we were unable to recover it.
00:38:32.553 [2024-10-01 15:56:11.727923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.553 [2024-10-01 15:56:11.727962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.553 qpair failed and we were unable to recover it.
00:38:32.553 [2024-10-01 15:56:11.728347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.553 [2024-10-01 15:56:11.728387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420
00:38:32.553 qpair failed and we were unable to recover it.
00:38:32.553 [2024-10-01 15:56:11.728765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.553 [2024-10-01 15:56:11.728804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.553 qpair failed and we were unable to recover it. 00:38:32.553 [2024-10-01 15:56:11.729011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.553 [2024-10-01 15:56:11.729051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.553 qpair failed and we were unable to recover it. 00:38:32.553 [2024-10-01 15:56:11.729393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.553 [2024-10-01 15:56:11.729432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.553 qpair failed and we were unable to recover it. 00:38:32.553 [2024-10-01 15:56:11.729848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.553 [2024-10-01 15:56:11.729887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.553 qpair failed and we were unable to recover it. 00:38:32.553 [2024-10-01 15:56:11.730186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.553 [2024-10-01 15:56:11.730226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.553 qpair failed and we were unable to recover it. 
00:38:32.553 [2024-10-01 15:56:11.730504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.553 [2024-10-01 15:56:11.730542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.553 qpair failed and we were unable to recover it. 00:38:32.553 [2024-10-01 15:56:11.730937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.553 [2024-10-01 15:56:11.730979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.553 qpair failed and we were unable to recover it. 00:38:32.553 [2024-10-01 15:56:11.731368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.553 [2024-10-01 15:56:11.731407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.553 qpair failed and we were unable to recover it. 00:38:32.553 [2024-10-01 15:56:11.731717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.553 [2024-10-01 15:56:11.731755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.553 qpair failed and we were unable to recover it. 00:38:32.553 [2024-10-01 15:56:11.732231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.553 [2024-10-01 15:56:11.732271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.553 qpair failed and we were unable to recover it. 
00:38:32.553 [2024-10-01 15:56:11.732627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.553 [2024-10-01 15:56:11.732666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.553 qpair failed and we were unable to recover it. 00:38:32.553 [2024-10-01 15:56:11.733000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.553 [2024-10-01 15:56:11.733040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.553 qpair failed and we were unable to recover it. 00:38:32.553 [2024-10-01 15:56:11.733319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.553 [2024-10-01 15:56:11.733362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.553 qpair failed and we were unable to recover it. 00:38:32.553 [2024-10-01 15:56:11.733711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.553 [2024-10-01 15:56:11.733750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.553 qpair failed and we were unable to recover it. 00:38:32.553 [2024-10-01 15:56:11.734147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.553 [2024-10-01 15:56:11.734187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.553 qpair failed and we were unable to recover it. 
00:38:32.553 [2024-10-01 15:56:11.734469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.553 [2024-10-01 15:56:11.734508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.553 qpair failed and we were unable to recover it. 00:38:32.553 [2024-10-01 15:56:11.734847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.553 [2024-10-01 15:56:11.734886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.553 qpair failed and we were unable to recover it. 00:38:32.553 [2024-10-01 15:56:11.735069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.553 [2024-10-01 15:56:11.735118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.553 qpair failed and we were unable to recover it. 00:38:32.553 [2024-10-01 15:56:11.735358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.553 [2024-10-01 15:56:11.735399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.553 qpair failed and we were unable to recover it. 00:38:32.553 [2024-10-01 15:56:11.735643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.553 [2024-10-01 15:56:11.735690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.553 qpair failed and we were unable to recover it. 
00:38:32.553 [2024-10-01 15:56:11.735966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.553 [2024-10-01 15:56:11.736007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.553 qpair failed and we were unable to recover it. 00:38:32.553 [2024-10-01 15:56:11.736396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.553 [2024-10-01 15:56:11.736436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.553 qpair failed and we were unable to recover it. 00:38:32.553 [2024-10-01 15:56:11.736717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.553 [2024-10-01 15:56:11.736756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.553 qpair failed and we were unable to recover it. 00:38:32.553 [2024-10-01 15:56:11.737151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.553 [2024-10-01 15:56:11.737191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.553 qpair failed and we were unable to recover it. 00:38:32.553 [2024-10-01 15:56:11.737490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.553 [2024-10-01 15:56:11.737529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.553 qpair failed and we were unable to recover it. 
00:38:32.553 [2024-10-01 15:56:11.737931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.553 [2024-10-01 15:56:11.737971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.553 qpair failed and we were unable to recover it. 00:38:32.553 [2024-10-01 15:56:11.738367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.553 [2024-10-01 15:56:11.738407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.553 qpair failed and we were unable to recover it. 00:38:32.553 [2024-10-01 15:56:11.738745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.554 [2024-10-01 15:56:11.738784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.554 qpair failed and we were unable to recover it. 00:38:32.554 [2024-10-01 15:56:11.739040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.554 [2024-10-01 15:56:11.739080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.554 qpair failed and we were unable to recover it. 00:38:32.554 [2024-10-01 15:56:11.739484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.554 [2024-10-01 15:56:11.739523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.554 qpair failed and we were unable to recover it. 
00:38:32.554 [2024-10-01 15:56:11.739684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.554 [2024-10-01 15:56:11.739735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.554 qpair failed and we were unable to recover it. 00:38:32.554 [2024-10-01 15:56:11.740024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.554 [2024-10-01 15:56:11.740064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.554 qpair failed and we were unable to recover it. 00:38:32.554 [2024-10-01 15:56:11.740435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.554 [2024-10-01 15:56:11.740474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.554 qpair failed and we were unable to recover it. 00:38:32.554 [2024-10-01 15:56:11.740888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.554 [2024-10-01 15:56:11.740948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.554 qpair failed and we were unable to recover it. 00:38:32.554 [2024-10-01 15:56:11.741242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.554 [2024-10-01 15:56:11.741282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.554 qpair failed and we were unable to recover it. 
00:38:32.554 [2024-10-01 15:56:11.741536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.554 [2024-10-01 15:56:11.741576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.554 qpair failed and we were unable to recover it. 00:38:32.554 [2024-10-01 15:56:11.741930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.554 [2024-10-01 15:56:11.741971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.554 qpair failed and we were unable to recover it. 00:38:32.554 [2024-10-01 15:56:11.742368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.554 [2024-10-01 15:56:11.742406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.554 qpair failed and we were unable to recover it. 00:38:32.554 [2024-10-01 15:56:11.742637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.554 [2024-10-01 15:56:11.742675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.554 qpair failed and we were unable to recover it. 00:38:32.554 [2024-10-01 15:56:11.742946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.554 [2024-10-01 15:56:11.742987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.554 qpair failed and we were unable to recover it. 
00:38:32.554 [2024-10-01 15:56:11.743392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.554 [2024-10-01 15:56:11.743430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.554 qpair failed and we were unable to recover it. 00:38:32.554 [2024-10-01 15:56:11.743828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.554 [2024-10-01 15:56:11.743867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.554 qpair failed and we were unable to recover it. 00:38:32.554 [2024-10-01 15:56:11.744262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.554 [2024-10-01 15:56:11.744303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.554 qpair failed and we were unable to recover it. 00:38:32.554 [2024-10-01 15:56:11.744693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.554 [2024-10-01 15:56:11.744731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.554 qpair failed and we were unable to recover it. 00:38:32.554 [2024-10-01 15:56:11.745172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.554 [2024-10-01 15:56:11.745213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.554 qpair failed and we were unable to recover it. 
00:38:32.554 [2024-10-01 15:56:11.745606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.554 [2024-10-01 15:56:11.745645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.554 qpair failed and we were unable to recover it. 00:38:32.554 [2024-10-01 15:56:11.745883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.554 [2024-10-01 15:56:11.745936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.554 qpair failed and we were unable to recover it. 00:38:32.554 [2024-10-01 15:56:11.746321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.554 [2024-10-01 15:56:11.746361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.554 qpair failed and we were unable to recover it. 00:38:32.554 [2024-10-01 15:56:11.746748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.554 [2024-10-01 15:56:11.746787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.554 qpair failed and we were unable to recover it. 00:38:32.554 [2024-10-01 15:56:11.746955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.554 [2024-10-01 15:56:11.747003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.554 qpair failed and we were unable to recover it. 
00:38:32.554 [2024-10-01 15:56:11.747259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.554 [2024-10-01 15:56:11.747298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.554 qpair failed and we were unable to recover it. 00:38:32.554 [2024-10-01 15:56:11.747675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.554 [2024-10-01 15:56:11.747715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.554 qpair failed and we were unable to recover it. 00:38:32.554 [2024-10-01 15:56:11.748054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.554 [2024-10-01 15:56:11.748095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.554 qpair failed and we were unable to recover it. 00:38:32.554 [2024-10-01 15:56:11.748374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.554 [2024-10-01 15:56:11.748414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.554 qpair failed and we were unable to recover it. 00:38:32.554 [2024-10-01 15:56:11.748799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.554 [2024-10-01 15:56:11.748838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.554 qpair failed and we were unable to recover it. 
00:38:32.554 [2024-10-01 15:56:11.749242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.554 [2024-10-01 15:56:11.749281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.554 qpair failed and we were unable to recover it. 00:38:32.554 [2024-10-01 15:56:11.749659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.554 [2024-10-01 15:56:11.749698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.554 qpair failed and we were unable to recover it. 00:38:32.554 [2024-10-01 15:56:11.749976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.554 [2024-10-01 15:56:11.750016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.554 qpair failed and we were unable to recover it. 00:38:32.554 [2024-10-01 15:56:11.750438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.554 [2024-10-01 15:56:11.750475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.554 qpair failed and we were unable to recover it. 00:38:32.554 [2024-10-01 15:56:11.750915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.554 [2024-10-01 15:56:11.750954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.554 qpair failed and we were unable to recover it. 
00:38:32.554 [2024-10-01 15:56:11.751331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.554 [2024-10-01 15:56:11.751372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.554 qpair failed and we were unable to recover it. 00:38:32.554 [2024-10-01 15:56:11.751757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.554 [2024-10-01 15:56:11.751796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.554 qpair failed and we were unable to recover it. 00:38:32.554 [2024-10-01 15:56:11.752189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.554 [2024-10-01 15:56:11.752229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c58360 with addr=10.0.0.2, port=4420 00:38:32.554 qpair failed and we were unable to recover it. 
00:38:32.554 Read completed with error (sct=0, sc=8) 00:38:32.554 starting I/O failed 00:38:32.554 Read completed with error (sct=0, sc=8) 00:38:32.554 starting I/O failed 00:38:32.554 Read completed with error (sct=0, sc=8) 00:38:32.554 starting I/O failed 00:38:32.554 Read completed with error (sct=0, sc=8) 00:38:32.554 starting I/O failed 00:38:32.554 Read completed with error (sct=0, sc=8) 00:38:32.554 starting I/O failed 00:38:32.554 Read completed with error (sct=0, sc=8) 00:38:32.554 starting I/O failed 00:38:32.554 Write completed with error (sct=0, sc=8) 00:38:32.554 starting I/O failed 00:38:32.554 Read completed with error (sct=0, sc=8) 00:38:32.554 starting I/O failed 00:38:32.554 Read completed with error (sct=0, sc=8) 00:38:32.554 starting I/O failed 00:38:32.554 Write completed with error (sct=0, sc=8) 00:38:32.554 starting I/O failed 00:38:32.555 Read completed with error (sct=0, sc=8) 00:38:32.555 starting I/O failed 00:38:32.555 Read completed with error (sct=0, sc=8) 00:38:32.555 starting I/O failed 00:38:32.555 Read completed with error (sct=0, sc=8) 00:38:32.555 starting I/O failed 00:38:32.555 Read completed with error (sct=0, sc=8) 00:38:32.555 starting I/O failed 00:38:32.555 Write completed with error (sct=0, sc=8) 00:38:32.555 starting I/O failed 00:38:32.555 Read completed with error (sct=0, sc=8) 00:38:32.555 starting I/O failed 00:38:32.555 Write completed with error (sct=0, sc=8) 00:38:32.555 starting I/O failed 00:38:32.555 Write completed with error (sct=0, sc=8) 00:38:32.555 starting I/O failed 00:38:32.555 Write completed with error (sct=0, sc=8) 00:38:32.555 starting I/O failed 00:38:32.555 Read completed with error (sct=0, sc=8) 00:38:32.555 starting I/O failed 00:38:32.555 Read completed with error (sct=0, sc=8) 00:38:32.555 starting I/O failed 00:38:32.555 Read completed with error (sct=0, sc=8) 00:38:32.555 starting I/O failed 00:38:32.555 Write completed with error (sct=0, sc=8) 00:38:32.555 starting I/O failed 00:38:32.555 
Write completed with error (sct=0, sc=8) 00:38:32.555 starting I/O failed 00:38:32.555 Write completed with error (sct=0, sc=8) 00:38:32.555 starting I/O failed 00:38:32.555 Read completed with error (sct=0, sc=8) 00:38:32.555 starting I/O failed 00:38:32.555 Write completed with error (sct=0, sc=8) 00:38:32.555 starting I/O failed 00:38:32.555 Read completed with error (sct=0, sc=8) 00:38:32.555 starting I/O failed 00:38:32.555 Write completed with error (sct=0, sc=8) 00:38:32.555 starting I/O failed 00:38:32.555 Write completed with error (sct=0, sc=8) 00:38:32.555 starting I/O failed 00:38:32.555 Write completed with error (sct=0, sc=8) 00:38:32.555 starting I/O failed 00:38:32.555 Read completed with error (sct=0, sc=8) 00:38:32.555 starting I/O failed 00:38:32.555 [2024-10-01 15:56:11.752853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:32.555 [2024-10-01 15:56:11.753249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.555 [2024-10-01 15:56:11.753302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.555 qpair failed and we were unable to recover it. 00:38:32.555 [2024-10-01 15:56:11.753514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.555 [2024-10-01 15:56:11.753546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.555 qpair failed and we were unable to recover it. 00:38:32.555 [2024-10-01 15:56:11.753880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.555 [2024-10-01 15:56:11.753926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.555 qpair failed and we were unable to recover it. 
00:38:32.555 [2024-10-01 15:56:11.754310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.555 [2024-10-01 15:56:11.754338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.555 qpair failed and we were unable to recover it. 00:38:32.555 [2024-10-01 15:56:11.754675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.555 [2024-10-01 15:56:11.754705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.555 qpair failed and we were unable to recover it. 00:38:32.555 [2024-10-01 15:56:11.754964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.555 [2024-10-01 15:56:11.754996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.555 qpair failed and we were unable to recover it. 00:38:32.555 [2024-10-01 15:56:11.755347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.555 [2024-10-01 15:56:11.755375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.555 qpair failed and we were unable to recover it. 00:38:32.555 [2024-10-01 15:56:11.755753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.555 [2024-10-01 15:56:11.755780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.555 qpair failed and we were unable to recover it. 
00:38:32.555 [2024-10-01 15:56:11.755981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.555 [2024-10-01 15:56:11.756008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.555 qpair failed and we were unable to recover it. 00:38:32.555 [2024-10-01 15:56:11.756410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.555 [2024-10-01 15:56:11.756439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.555 qpair failed and we were unable to recover it. 00:38:32.555 [2024-10-01 15:56:11.756686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.555 [2024-10-01 15:56:11.756713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.555 qpair failed and we were unable to recover it. 00:38:32.555 [2024-10-01 15:56:11.756928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.555 [2024-10-01 15:56:11.756957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.555 qpair failed and we were unable to recover it. 00:38:32.555 [2024-10-01 15:56:11.757307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.555 [2024-10-01 15:56:11.757335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.555 qpair failed and we were unable to recover it. 
00:38:32.555 [2024-10-01 15:56:11.757569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.555 [2024-10-01 15:56:11.757596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.555 qpair failed and we were unable to recover it. 00:38:32.555 [2024-10-01 15:56:11.757941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.555 [2024-10-01 15:56:11.757969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.555 qpair failed and we were unable to recover it. 00:38:32.555 [2024-10-01 15:56:11.758062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.555 [2024-10-01 15:56:11.758087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.555 qpair failed and we were unable to recover it. 00:38:32.555 [2024-10-01 15:56:11.758479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.555 [2024-10-01 15:56:11.758507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.555 qpair failed and we were unable to recover it. 00:38:32.555 [2024-10-01 15:56:11.758857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.555 [2024-10-01 15:56:11.758878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.555 qpair failed and we were unable to recover it. 
00:38:32.555 [2024-10-01 15:56:11.759132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.555 [2024-10-01 15:56:11.759155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.555 qpair failed and we were unable to recover it. 00:38:32.555 [2024-10-01 15:56:11.759490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.555 [2024-10-01 15:56:11.759511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.555 qpair failed and we were unable to recover it. 00:38:32.555 [2024-10-01 15:56:11.759848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.555 [2024-10-01 15:56:11.759870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.555 qpair failed and we were unable to recover it. 00:38:32.555 [2024-10-01 15:56:11.760059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.555 [2024-10-01 15:56:11.760081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.555 qpair failed and we were unable to recover it. 00:38:32.555 [2024-10-01 15:56:11.760428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.555 [2024-10-01 15:56:11.760449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.555 qpair failed and we were unable to recover it. 
00:38:32.555 [2024-10-01 15:56:11.760756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.555 [2024-10-01 15:56:11.760777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.555 qpair failed and we were unable to recover it. 00:38:32.555 [2024-10-01 15:56:11.761099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.555 [2024-10-01 15:56:11.761120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.555 qpair failed and we were unable to recover it. 00:38:32.555 [2024-10-01 15:56:11.761446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.555 [2024-10-01 15:56:11.761467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.555 qpair failed and we were unable to recover it. 00:38:32.555 [2024-10-01 15:56:11.761853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.556 [2024-10-01 15:56:11.761876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.556 qpair failed and we were unable to recover it. 00:38:32.556 [2024-10-01 15:56:11.762097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.556 [2024-10-01 15:56:11.762119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.556 qpair failed and we were unable to recover it. 
00:38:32.556 [2024-10-01 15:56:11.762404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.556 [2024-10-01 15:56:11.762425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.556 qpair failed and we were unable to recover it. 00:38:32.556 [2024-10-01 15:56:11.762785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.556 [2024-10-01 15:56:11.762805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.556 qpair failed and we were unable to recover it. 00:38:32.556 [2024-10-01 15:56:11.762885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.556 [2024-10-01 15:56:11.762917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.556 qpair failed and we were unable to recover it. 00:38:32.556 [2024-10-01 15:56:11.763107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.556 [2024-10-01 15:56:11.763122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.556 qpair failed and we were unable to recover it. 00:38:32.556 [2024-10-01 15:56:11.765481] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
00:38:32.556 [2024-10-01 15:56:11.765526] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:32.556 [2024-10-01 15:56:11.767143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.556 [2024-10-01 15:56:11.767170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.556 qpair failed and we were unable to recover it. 00:38:32.556 [2024-10-01 15:56:11.767520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.556 [2024-10-01 15:56:11.767541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.556 qpair failed and we were unable to recover it. 00:38:32.556 [2024-10-01 15:56:11.767751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.556 [2024-10-01 15:56:11.767772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.556 qpair failed and we were unable to recover it. 00:38:32.556 [2024-10-01 15:56:11.768118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.556 [2024-10-01 15:56:11.768141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.556 qpair failed and we were unable to recover it. 
00:38:32.556 [2024-10-01 15:56:11.768371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.556 [2024-10-01 15:56:11.768390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.556 qpair failed and we were unable to recover it. 00:38:32.556 [2024-10-01 15:56:11.768729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.556 [2024-10-01 15:56:11.768752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.556 qpair failed and we were unable to recover it. 00:38:32.556 [2024-10-01 15:56:11.769139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.556 [2024-10-01 15:56:11.769161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.556 qpair failed and we were unable to recover it. 00:38:32.556 [2024-10-01 15:56:11.769497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.556 [2024-10-01 15:56:11.769518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.556 qpair failed and we were unable to recover it. 00:38:32.556 [2024-10-01 15:56:11.769860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.556 [2024-10-01 15:56:11.769883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.556 qpair failed and we were unable to recover it. 
00:38:32.556 [2024-10-01 15:56:11.770151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.556 [2024-10-01 15:56:11.770170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.556 qpair failed and we were unable to recover it. 00:38:32.556 [2024-10-01 15:56:11.770499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.556 [2024-10-01 15:56:11.770517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.556 qpair failed and we were unable to recover it. 00:38:32.556 [2024-10-01 15:56:11.770874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.556 [2024-10-01 15:56:11.770892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.556 qpair failed and we were unable to recover it. 00:38:32.556 [2024-10-01 15:56:11.770975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.556 [2024-10-01 15:56:11.770992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.556 qpair failed and we were unable to recover it. 00:38:32.556 [2024-10-01 15:56:11.771344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.556 [2024-10-01 15:56:11.771362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.556 qpair failed and we were unable to recover it. 
00:38:32.556 [2024-10-01 15:56:11.771572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.556 [2024-10-01 15:56:11.771589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.556 qpair failed and we were unable to recover it. 00:38:32.556 [2024-10-01 15:56:11.771770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.556 [2024-10-01 15:56:11.771786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.556 qpair failed and we were unable to recover it. 00:38:32.556 [2024-10-01 15:56:11.772115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.556 [2024-10-01 15:56:11.772133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.556 qpair failed and we were unable to recover it. 00:38:32.556 [2024-10-01 15:56:11.772342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.556 [2024-10-01 15:56:11.772360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.556 qpair failed and we were unable to recover it. 00:38:32.556 [2024-10-01 15:56:11.772702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.556 [2024-10-01 15:56:11.772719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.556 qpair failed and we were unable to recover it. 
00:38:32.556 [2024-10-01 15:56:11.773058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.556 [2024-10-01 15:56:11.773078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.556 qpair failed and we were unable to recover it. 00:38:32.556 [2024-10-01 15:56:11.773417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.556 [2024-10-01 15:56:11.773434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.556 qpair failed and we were unable to recover it. 00:38:32.556 [2024-10-01 15:56:11.773782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.556 [2024-10-01 15:56:11.773800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.556 qpair failed and we were unable to recover it. 00:38:32.556 [2024-10-01 15:56:11.774007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.556 [2024-10-01 15:56:11.774025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.556 qpair failed and we were unable to recover it. 00:38:32.556 [2024-10-01 15:56:11.774360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.556 [2024-10-01 15:56:11.774378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.556 qpair failed and we were unable to recover it. 
00:38:32.556 [2024-10-01 15:56:11.774558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.556 [2024-10-01 15:56:11.774576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.556 qpair failed and we were unable to recover it. 00:38:32.556 [2024-10-01 15:56:11.774915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.556 [2024-10-01 15:56:11.774933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.556 qpair failed and we were unable to recover it. 00:38:32.556 [2024-10-01 15:56:11.775285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.556 [2024-10-01 15:56:11.775303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.556 qpair failed and we were unable to recover it. 00:38:32.556 [2024-10-01 15:56:11.775653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.556 [2024-10-01 15:56:11.775670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.556 qpair failed and we were unable to recover it. 00:38:32.556 [2024-10-01 15:56:11.775868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.556 [2024-10-01 15:56:11.775886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.556 qpair failed and we were unable to recover it. 
00:38:32.556 [2024-10-01 15:56:11.776228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.556 [2024-10-01 15:56:11.776245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.556 qpair failed and we were unable to recover it. 00:38:32.556 [2024-10-01 15:56:11.776708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.556 [2024-10-01 15:56:11.776725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.556 qpair failed and we were unable to recover it. 00:38:32.556 [2024-10-01 15:56:11.776933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.556 [2024-10-01 15:56:11.776953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.556 qpair failed and we were unable to recover it. 00:38:32.556 [2024-10-01 15:56:11.777169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.556 [2024-10-01 15:56:11.777187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.556 qpair failed and we were unable to recover it. 00:38:32.556 [2024-10-01 15:56:11.777390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.556 [2024-10-01 15:56:11.777407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.556 qpair failed and we were unable to recover it. 
00:38:32.556 [2024-10-01 15:56:11.777836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.556 [2024-10-01 15:56:11.777851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.556 qpair failed and we were unable to recover it. 00:38:32.556 [2024-10-01 15:56:11.778076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.556 [2024-10-01 15:56:11.778094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.556 qpair failed and we were unable to recover it. 00:38:32.556 [2024-10-01 15:56:11.778190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.556 [2024-10-01 15:56:11.778206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.556 qpair failed and we were unable to recover it. 00:38:32.556 [2024-10-01 15:56:11.778535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.556 [2024-10-01 15:56:11.778557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.556 qpair failed and we were unable to recover it. 00:38:32.556 [2024-10-01 15:56:11.778811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.556 [2024-10-01 15:56:11.778828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.556 qpair failed and we were unable to recover it. 
00:38:32.556 [2024-10-01 15:56:11.779128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.779143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 00:38:32.557 [2024-10-01 15:56:11.779514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.779524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 00:38:32.557 [2024-10-01 15:56:11.779841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.779850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 00:38:32.557 [2024-10-01 15:56:11.780221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.780232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 00:38:32.557 [2024-10-01 15:56:11.780536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.780546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 
00:38:32.557 [2024-10-01 15:56:11.780871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.780882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 00:38:32.557 [2024-10-01 15:56:11.781209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.781219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 00:38:32.557 [2024-10-01 15:56:11.781537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.781546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 00:38:32.557 [2024-10-01 15:56:11.781723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.781733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 00:38:32.557 [2024-10-01 15:56:11.782112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.782122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 
00:38:32.557 [2024-10-01 15:56:11.782465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.782474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 00:38:32.557 [2024-10-01 15:56:11.782770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.782779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 00:38:32.557 [2024-10-01 15:56:11.783117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.783127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 00:38:32.557 [2024-10-01 15:56:11.783443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.783453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 00:38:32.557 [2024-10-01 15:56:11.783740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.783749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 
00:38:32.557 [2024-10-01 15:56:11.784062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.784072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 00:38:32.557 [2024-10-01 15:56:11.784246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.784255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 00:38:32.557 [2024-10-01 15:56:11.784304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.784314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 00:38:32.557 [2024-10-01 15:56:11.784613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.784623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 00:38:32.557 [2024-10-01 15:56:11.784796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.784807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 
00:38:32.557 [2024-10-01 15:56:11.785156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.785166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 00:38:32.557 [2024-10-01 15:56:11.785463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.785472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 00:38:32.557 [2024-10-01 15:56:11.785792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.785801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 00:38:32.557 [2024-10-01 15:56:11.786119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.786130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 00:38:32.557 [2024-10-01 15:56:11.786444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.786454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 
00:38:32.557 [2024-10-01 15:56:11.786817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.786827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 00:38:32.557 [2024-10-01 15:56:11.787024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.787035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 00:38:32.557 [2024-10-01 15:56:11.787299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.787308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 00:38:32.557 [2024-10-01 15:56:11.787476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.787485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 00:38:32.557 [2024-10-01 15:56:11.787773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.787784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 
00:38:32.557 [2024-10-01 15:56:11.787997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.788007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 00:38:32.557 [2024-10-01 15:56:11.788338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.788347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 00:38:32.557 [2024-10-01 15:56:11.788540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.788549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 00:38:32.557 [2024-10-01 15:56:11.788857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.788867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 00:38:32.557 [2024-10-01 15:56:11.789229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.789238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 
00:38:32.557 [2024-10-01 15:56:11.789546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.789556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 00:38:32.557 [2024-10-01 15:56:11.789867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.789876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 00:38:32.557 [2024-10-01 15:56:11.790080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.790090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 00:38:32.557 [2024-10-01 15:56:11.790274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.790286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 00:38:32.557 [2024-10-01 15:56:11.790478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.790488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 
00:38:32.557 [2024-10-01 15:56:11.790775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.790784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 00:38:32.557 [2024-10-01 15:56:11.791115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.791125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 00:38:32.557 [2024-10-01 15:56:11.791430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.791441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 00:38:32.557 [2024-10-01 15:56:11.791756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.791766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 00:38:32.557 [2024-10-01 15:56:11.792068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.792078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 
00:38:32.557 [2024-10-01 15:56:11.792382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.792391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 00:38:32.557 [2024-10-01 15:56:11.792562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.792572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 00:38:32.557 [2024-10-01 15:56:11.792795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.792805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 00:38:32.557 [2024-10-01 15:56:11.793137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.793147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 00:38:32.557 [2024-10-01 15:56:11.793474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.793485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 
00:38:32.557 [2024-10-01 15:56:11.793755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.793766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 00:38:32.557 [2024-10-01 15:56:11.794019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.794029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 00:38:32.557 [2024-10-01 15:56:11.794367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.794378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 00:38:32.557 [2024-10-01 15:56:11.794694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.794704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 00:38:32.557 [2024-10-01 15:56:11.795043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.795054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 
00:38:32.557 [2024-10-01 15:56:11.795363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.795375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 00:38:32.557 [2024-10-01 15:56:11.795686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.795696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 00:38:32.557 [2024-10-01 15:56:11.796014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.796025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 00:38:32.557 [2024-10-01 15:56:11.796355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.796365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 00:38:32.557 [2024-10-01 15:56:11.796679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.796690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 
00:38:32.557 [2024-10-01 15:56:11.797004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.797015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 00:38:32.557 [2024-10-01 15:56:11.797414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.797425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 00:38:32.557 [2024-10-01 15:56:11.797747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.797757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 00:38:32.557 [2024-10-01 15:56:11.798069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.798079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 00:38:32.557 [2024-10-01 15:56:11.798407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.798417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 
00:38:32.557 [2024-10-01 15:56:11.798750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.798762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 00:38:32.557 [2024-10-01 15:56:11.798825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.798836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 00:38:32.557 [2024-10-01 15:56:11.799030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.799042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 00:38:32.557 [2024-10-01 15:56:11.799392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.799402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 00:38:32.557 [2024-10-01 15:56:11.799696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.799707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 
00:38:32.557 [2024-10-01 15:56:11.800023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.800034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 00:38:32.557 [2024-10-01 15:56:11.800349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.800360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 00:38:32.557 [2024-10-01 15:56:11.800540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.557 [2024-10-01 15:56:11.800551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.557 qpair failed and we were unable to recover it. 00:38:32.557 [2024-10-01 15:56:11.800869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.800880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 00:38:32.558 [2024-10-01 15:56:11.801095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.801106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 
00:38:32.558 [2024-10-01 15:56:11.801377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.801387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 00:38:32.558 [2024-10-01 15:56:11.801580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.801591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 00:38:32.558 [2024-10-01 15:56:11.801939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.801951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 00:38:32.558 [2024-10-01 15:56:11.802119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.802132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 00:38:32.558 [2024-10-01 15:56:11.802304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.802315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 
00:38:32.558 [2024-10-01 15:56:11.802651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.802662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 00:38:32.558 [2024-10-01 15:56:11.802896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.802908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 00:38:32.558 [2024-10-01 15:56:11.803267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.803277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 00:38:32.558 [2024-10-01 15:56:11.803472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.803483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 00:38:32.558 [2024-10-01 15:56:11.803693] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:38:32.558 [2024-10-01 15:56:11.803817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.803828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 
00:38:32.558 [2024-10-01 15:56:11.804186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.804198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 00:38:32.558 [2024-10-01 15:56:11.804511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.804524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 00:38:32.558 [2024-10-01 15:56:11.804852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.804866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 00:38:32.558 [2024-10-01 15:56:11.805197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.805211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 00:38:32.558 [2024-10-01 15:56:11.805423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.805438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 
00:38:32.558 [2024-10-01 15:56:11.805765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.805779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 00:38:32.558 [2024-10-01 15:56:11.806114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.806129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 00:38:32.558 [2024-10-01 15:56:11.806433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.806446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 00:38:32.558 [2024-10-01 15:56:11.806777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.806797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 00:38:32.558 [2024-10-01 15:56:11.806884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.806902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 
00:38:32.558 [2024-10-01 15:56:11.807090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.807104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 00:38:32.558 [2024-10-01 15:56:11.807386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.807399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 00:38:32.558 [2024-10-01 15:56:11.807732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.807746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 00:38:32.558 [2024-10-01 15:56:11.808073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.808087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 00:38:32.558 [2024-10-01 15:56:11.808375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.808388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 
00:38:32.558 [2024-10-01 15:56:11.808607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.808621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 00:38:32.558 [2024-10-01 15:56:11.808911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.808926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 00:38:32.558 [2024-10-01 15:56:11.809262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.809276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 00:38:32.558 [2024-10-01 15:56:11.809605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.809618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 00:38:32.558 [2024-10-01 15:56:11.809943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.809960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 
00:38:32.558 [2024-10-01 15:56:11.810356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.810370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 00:38:32.558 [2024-10-01 15:56:11.810726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.810739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 00:38:32.558 [2024-10-01 15:56:11.811068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.811081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 00:38:32.558 [2024-10-01 15:56:11.811399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.811412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 00:38:32.558 [2024-10-01 15:56:11.811723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.811736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 
00:38:32.558 [2024-10-01 15:56:11.812067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.812081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 00:38:32.558 [2024-10-01 15:56:11.812405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.812419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 00:38:32.558 [2024-10-01 15:56:11.812660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.812673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 00:38:32.558 [2024-10-01 15:56:11.813014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.813028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 00:38:32.558 [2024-10-01 15:56:11.813343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.813358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 
00:38:32.558 [2024-10-01 15:56:11.813656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.813669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 00:38:32.558 [2024-10-01 15:56:11.814025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.814040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 00:38:32.558 [2024-10-01 15:56:11.814264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.814277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 00:38:32.558 [2024-10-01 15:56:11.814472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.814486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 00:38:32.558 [2024-10-01 15:56:11.814693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.814706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 
00:38:32.558 [2024-10-01 15:56:11.815034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.815055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 00:38:32.558 [2024-10-01 15:56:11.815466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.815485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 00:38:32.558 [2024-10-01 15:56:11.815779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.815797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 00:38:32.558 [2024-10-01 15:56:11.815988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.816008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 00:38:32.558 [2024-10-01 15:56:11.816326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.816345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 
00:38:32.558 [2024-10-01 15:56:11.816654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.816672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 00:38:32.558 [2024-10-01 15:56:11.816858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.816879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 00:38:32.558 [2024-10-01 15:56:11.817158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.817176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 00:38:32.558 [2024-10-01 15:56:11.817523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.817542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 00:38:32.558 [2024-10-01 15:56:11.817881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.817911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 
00:38:32.558 [2024-10-01 15:56:11.818134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.818162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 00:38:32.558 [2024-10-01 15:56:11.818400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.818419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 00:38:32.558 [2024-10-01 15:56:11.818760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.818779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 00:38:32.558 [2024-10-01 15:56:11.818984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.819004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 00:38:32.558 [2024-10-01 15:56:11.819346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.819365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 
00:38:32.558 [2024-10-01 15:56:11.819694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.819713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 00:38:32.558 [2024-10-01 15:56:11.819934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.819954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 00:38:32.558 [2024-10-01 15:56:11.820334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.820353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 00:38:32.558 [2024-10-01 15:56:11.820704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.820723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 00:38:32.558 [2024-10-01 15:56:11.821058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.821077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 
00:38:32.558 [2024-10-01 15:56:11.821424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.821443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.558 qpair failed and we were unable to recover it. 00:38:32.558 [2024-10-01 15:56:11.821651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.558 [2024-10-01 15:56:11.821672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 00:38:32.559 [2024-10-01 15:56:11.822016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.822035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 00:38:32.559 [2024-10-01 15:56:11.822367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.822386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 00:38:32.559 [2024-10-01 15:56:11.822813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.822836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 
00:38:32.559 [2024-10-01 15:56:11.823179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.823198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 00:38:32.559 [2024-10-01 15:56:11.823548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.823567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 00:38:32.559 [2024-10-01 15:56:11.823899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.823920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 00:38:32.559 [2024-10-01 15:56:11.824255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.824274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 00:38:32.559 [2024-10-01 15:56:11.824633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.824652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 
00:38:32.559 [2024-10-01 15:56:11.825010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.825029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 00:38:32.559 [2024-10-01 15:56:11.825348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.825372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 00:38:32.559 [2024-10-01 15:56:11.825741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.825760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 00:38:32.559 [2024-10-01 15:56:11.826152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.826171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 00:38:32.559 [2024-10-01 15:56:11.826385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.826404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 
00:38:32.559 [2024-10-01 15:56:11.826784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.826808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 00:38:32.559 [2024-10-01 15:56:11.827146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.827173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 00:38:32.559 [2024-10-01 15:56:11.827484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.827509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 00:38:32.559 [2024-10-01 15:56:11.827732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.827756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 00:38:32.559 [2024-10-01 15:56:11.828084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.828110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 
00:38:32.559 [2024-10-01 15:56:11.828313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.828341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 00:38:32.559 [2024-10-01 15:56:11.828681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.828706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 00:38:32.559 [2024-10-01 15:56:11.829037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.829065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 00:38:32.559 [2024-10-01 15:56:11.829428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.829453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 00:38:32.559 [2024-10-01 15:56:11.829765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.829790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 
00:38:32.559 [2024-10-01 15:56:11.830002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.830028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 00:38:32.559 [2024-10-01 15:56:11.830341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.830365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 00:38:32.559 [2024-10-01 15:56:11.830564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.830591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 00:38:32.559 [2024-10-01 15:56:11.830797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.830823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 00:38:32.559 [2024-10-01 15:56:11.831191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.831217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 
00:38:32.559 [2024-10-01 15:56:11.831549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.831573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 00:38:32.559 [2024-10-01 15:56:11.831940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.831967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 00:38:32.559 [2024-10-01 15:56:11.832317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.832341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 00:38:32.559 [2024-10-01 15:56:11.832556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.832581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 00:38:32.559 [2024-10-01 15:56:11.832904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.832931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 
00:38:32.559 [2024-10-01 15:56:11.833278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.833303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 00:38:32.559 [2024-10-01 15:56:11.833616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.833641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 00:38:32.559 [2024-10-01 15:56:11.833961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.833987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 00:38:32.559 [2024-10-01 15:56:11.834313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.834337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 00:38:32.559 [2024-10-01 15:56:11.834690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.834714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 
00:38:32.559 [2024-10-01 15:56:11.834938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.834966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 00:38:32.559 [2024-10-01 15:56:11.835312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.835336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 00:38:32.559 [2024-10-01 15:56:11.835703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.835728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 00:38:32.559 [2024-10-01 15:56:11.836092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.836119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 00:38:32.559 [2024-10-01 15:56:11.836365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.836396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 
00:38:32.559 [2024-10-01 15:56:11.836825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.836850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 00:38:32.559 [2024-10-01 15:56:11.837060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.837084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 00:38:32.559 [2024-10-01 15:56:11.837277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.837301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 00:38:32.559 [2024-10-01 15:56:11.837536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.837560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 00:38:32.559 [2024-10-01 15:56:11.837911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.837937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 
00:38:32.559 [2024-10-01 15:56:11.838286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.838311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 00:38:32.559 [2024-10-01 15:56:11.838618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.838643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 00:38:32.559 [2024-10-01 15:56:11.838958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.838983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 00:38:32.559 [2024-10-01 15:56:11.839305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.839329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 00:38:32.559 [2024-10-01 15:56:11.839665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.839691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 
00:38:32.559 [2024-10-01 15:56:11.840025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.840052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 00:38:32.559 [2024-10-01 15:56:11.840440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.840468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 00:38:32.559 [2024-10-01 15:56:11.840713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.840741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 00:38:32.559 [2024-10-01 15:56:11.841109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.841138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 00:38:32.559 [2024-10-01 15:56:11.841526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.841554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 
00:38:32.559 [2024-10-01 15:56:11.841786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.841812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 00:38:32.559 [2024-10-01 15:56:11.842161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.842190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 00:38:32.559 [2024-10-01 15:56:11.842542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.842569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 00:38:32.559 [2024-10-01 15:56:11.842934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.842963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 00:38:32.559 [2024-10-01 15:56:11.843318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.843344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 
00:38:32.559 [2024-10-01 15:56:11.843559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.843590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 00:38:32.559 [2024-10-01 15:56:11.843941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.843971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 00:38:32.559 [2024-10-01 15:56:11.844195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.844222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 00:38:32.559 [2024-10-01 15:56:11.844575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.844603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 00:38:32.559 [2024-10-01 15:56:11.844807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.844835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 
00:38:32.559 [2024-10-01 15:56:11.845177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.845205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 00:38:32.559 [2024-10-01 15:56:11.845440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.845472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 00:38:32.559 [2024-10-01 15:56:11.845713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.845741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 00:38:32.559 [2024-10-01 15:56:11.846135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.846163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 00:38:32.559 [2024-10-01 15:56:11.846413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.559 [2024-10-01 15:56:11.846440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.559 qpair failed and we were unable to recover it. 
00:38:32.559 [2024-10-01 15:56:11.846775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.560 [2024-10-01 15:56:11.846802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.560 qpair failed and we were unable to recover it. 00:38:32.560 [2024-10-01 15:56:11.847168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.560 [2024-10-01 15:56:11.847197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.560 qpair failed and we were unable to recover it. 00:38:32.560 [2024-10-01 15:56:11.847420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.560 [2024-10-01 15:56:11.847450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.560 qpair failed and we were unable to recover it. 00:38:32.560 [2024-10-01 15:56:11.847831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.560 [2024-10-01 15:56:11.847860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.560 qpair failed and we were unable to recover it. 00:38:32.560 [2024-10-01 15:56:11.848205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.560 [2024-10-01 15:56:11.848234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.560 qpair failed and we were unable to recover it. 
00:38:32.560 [2024-10-01 15:56:11.848621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.560 [2024-10-01 15:56:11.848649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.560 qpair failed and we were unable to recover it. 00:38:32.560 [2024-10-01 15:56:11.848984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.560 [2024-10-01 15:56:11.849019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.560 qpair failed and we were unable to recover it. 00:38:32.560 [2024-10-01 15:56:11.849358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.560 [2024-10-01 15:56:11.849385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.560 qpair failed and we were unable to recover it. 00:38:32.560 [2024-10-01 15:56:11.849753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.560 [2024-10-01 15:56:11.849781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.560 qpair failed and we were unable to recover it. 00:38:32.560 [2024-10-01 15:56:11.850212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.560 [2024-10-01 15:56:11.850246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.560 qpair failed and we were unable to recover it. 
00:38:32.560 [2024-10-01 15:56:11.850574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.560 [2024-10-01 15:56:11.850602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.560 qpair failed and we were unable to recover it. 00:38:32.560 [2024-10-01 15:56:11.850945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.560 [2024-10-01 15:56:11.850975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.560 qpair failed and we were unable to recover it. 00:38:32.560 [2024-10-01 15:56:11.851343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.560 [2024-10-01 15:56:11.851371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.560 qpair failed and we were unable to recover it. 00:38:32.560 [2024-10-01 15:56:11.851709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.560 [2024-10-01 15:56:11.851737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.560 qpair failed and we were unable to recover it. 00:38:32.560 [2024-10-01 15:56:11.852105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.560 [2024-10-01 15:56:11.852124] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:32.560 [2024-10-01 15:56:11.852134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.560 qpair failed and we were unable to recover it. 
00:38:32.560 [2024-10-01 15:56:11.852478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.560 [2024-10-01 15:56:11.852506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.560 qpair failed and we were unable to recover it. 00:38:32.560 [2024-10-01 15:56:11.852955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.560 [2024-10-01 15:56:11.852985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.560 qpair failed and we were unable to recover it. 00:38:32.560 [2024-10-01 15:56:11.853313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.560 [2024-10-01 15:56:11.853341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.560 qpair failed and we were unable to recover it. 00:38:32.560 [2024-10-01 15:56:11.853672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.560 [2024-10-01 15:56:11.853700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.560 qpair failed and we were unable to recover it. 00:38:32.560 [2024-10-01 15:56:11.853911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.560 [2024-10-01 15:56:11.853941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.560 qpair failed and we were unable to recover it. 
00:38:32.560 [2024-10-01 15:56:11.854287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.560 [2024-10-01 15:56:11.854314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.560 qpair failed and we were unable to recover it.
00:38:32.560 [2024-10-01 15:56:11.854563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.560 [2024-10-01 15:56:11.854590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.560 qpair failed and we were unable to recover it.
00:38:32.560 [2024-10-01 15:56:11.854934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.560 [2024-10-01 15:56:11.854963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.560 qpair failed and we were unable to recover it.
00:38:32.560 [2024-10-01 15:56:11.855351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.560 [2024-10-01 15:56:11.855380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.560 qpair failed and we were unable to recover it.
00:38:32.560 [2024-10-01 15:56:11.855745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.560 [2024-10-01 15:56:11.855772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.560 qpair failed and we were unable to recover it.
00:38:32.560 [2024-10-01 15:56:11.856123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.560 [2024-10-01 15:56:11.856153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.560 qpair failed and we were unable to recover it.
00:38:32.560 [2024-10-01 15:56:11.856505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.560 [2024-10-01 15:56:11.856533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.560 qpair failed and we were unable to recover it.
00:38:32.560 [2024-10-01 15:56:11.856756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.560 [2024-10-01 15:56:11.856783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.560 qpair failed and we were unable to recover it.
00:38:32.560 [2024-10-01 15:56:11.857142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.560 [2024-10-01 15:56:11.857172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.560 qpair failed and we were unable to recover it.
00:38:32.560 [2024-10-01 15:56:11.857528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.560 [2024-10-01 15:56:11.857556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.560 qpair failed and we were unable to recover it.
00:38:32.560 [2024-10-01 15:56:11.857799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.560 [2024-10-01 15:56:11.857826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.560 qpair failed and we were unable to recover it.
00:38:32.560 [2024-10-01 15:56:11.858033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.560 [2024-10-01 15:56:11.858063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.560 qpair failed and we were unable to recover it.
00:38:32.560 [2024-10-01 15:56:11.858281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.560 [2024-10-01 15:56:11.858312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.560 qpair failed and we were unable to recover it.
00:38:32.560 [2024-10-01 15:56:11.858549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.560 [2024-10-01 15:56:11.858579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.560 qpair failed and we were unable to recover it.
00:38:32.560 [2024-10-01 15:56:11.858973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.560 [2024-10-01 15:56:11.859003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.560 qpair failed and we were unable to recover it.
00:38:32.560 [2024-10-01 15:56:11.859356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.560 [2024-10-01 15:56:11.859384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.560 qpair failed and we were unable to recover it.
00:38:32.560 [2024-10-01 15:56:11.859770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.560 [2024-10-01 15:56:11.859799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.560 qpair failed and we were unable to recover it.
00:38:32.560 [2024-10-01 15:56:11.860140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.560 [2024-10-01 15:56:11.860169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.560 qpair failed and we were unable to recover it.
00:38:32.560 [2024-10-01 15:56:11.860463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.560 [2024-10-01 15:56:11.860491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.560 qpair failed and we were unable to recover it.
00:38:32.560 [2024-10-01 15:56:11.860710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.560 [2024-10-01 15:56:11.860737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.560 qpair failed and we were unable to recover it.
00:38:32.560 [2024-10-01 15:56:11.861140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.560 [2024-10-01 15:56:11.861169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.560 qpair failed and we were unable to recover it.
00:38:32.560 [2024-10-01 15:56:11.861378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.560 [2024-10-01 15:56:11.861406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.560 qpair failed and we were unable to recover it.
00:38:32.560 [2024-10-01 15:56:11.861662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.560 [2024-10-01 15:56:11.861689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.560 qpair failed and we were unable to recover it.
00:38:32.560 [2024-10-01 15:56:11.862065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.560 [2024-10-01 15:56:11.862095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.560 qpair failed and we were unable to recover it.
00:38:32.560 [2024-10-01 15:56:11.862465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.560 [2024-10-01 15:56:11.862493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.560 qpair failed and we were unable to recover it.
00:38:32.560 [2024-10-01 15:56:11.862841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.560 [2024-10-01 15:56:11.862868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.560 qpair failed and we were unable to recover it.
00:38:32.560 [2024-10-01 15:56:11.863227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.560 [2024-10-01 15:56:11.863257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.560 qpair failed and we were unable to recover it.
00:38:32.560 [2024-10-01 15:56:11.863629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.560 [2024-10-01 15:56:11.863657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.560 qpair failed and we were unable to recover it.
00:38:32.560 [2024-10-01 15:56:11.863996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.560 [2024-10-01 15:56:11.864026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.560 qpair failed and we were unable to recover it.
00:38:32.560 [2024-10-01 15:56:11.864384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.560 [2024-10-01 15:56:11.864418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.560 qpair failed and we were unable to recover it.
00:38:32.560 [2024-10-01 15:56:11.864843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.560 [2024-10-01 15:56:11.864872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.560 qpair failed and we were unable to recover it.
00:38:32.560 [2024-10-01 15:56:11.865242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.560 [2024-10-01 15:56:11.865272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.560 qpair failed and we were unable to recover it.
00:38:32.560 [2024-10-01 15:56:11.865508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.560 [2024-10-01 15:56:11.865535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.560 qpair failed and we were unable to recover it.
00:38:32.560 [2024-10-01 15:56:11.865872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.560 [2024-10-01 15:56:11.865906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.560 qpair failed and we were unable to recover it.
00:38:32.560 [2024-10-01 15:56:11.866248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.560 [2024-10-01 15:56:11.866277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.560 qpair failed and we were unable to recover it.
00:38:32.560 [2024-10-01 15:56:11.866627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.560 [2024-10-01 15:56:11.866656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.560 qpair failed and we were unable to recover it.
00:38:32.560 [2024-10-01 15:56:11.866963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.560 [2024-10-01 15:56:11.866994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.560 qpair failed and we were unable to recover it.
00:38:32.560 [2024-10-01 15:56:11.867253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.560 [2024-10-01 15:56:11.867286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.560 qpair failed and we were unable to recover it.
00:38:32.560 [2024-10-01 15:56:11.867627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.560 [2024-10-01 15:56:11.867656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.560 qpair failed and we were unable to recover it.
00:38:32.560 [2024-10-01 15:56:11.868010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.560 [2024-10-01 15:56:11.868041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.560 qpair failed and we were unable to recover it.
00:38:32.560 [2024-10-01 15:56:11.868415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.560 [2024-10-01 15:56:11.868444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.560 qpair failed and we were unable to recover it.
00:38:32.560 [2024-10-01 15:56:11.868824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.560 [2024-10-01 15:56:11.868852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.560 qpair failed and we were unable to recover it.
00:38:32.560 [2024-10-01 15:56:11.869073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.560 [2024-10-01 15:56:11.869104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.560 qpair failed and we were unable to recover it.
00:38:32.560 [2024-10-01 15:56:11.869480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.560 [2024-10-01 15:56:11.869510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.560 qpair failed and we were unable to recover it.
00:38:32.560 [2024-10-01 15:56:11.869878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.560 [2024-10-01 15:56:11.869914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.560 qpair failed and we were unable to recover it.
00:38:32.560 [2024-10-01 15:56:11.870259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.560 [2024-10-01 15:56:11.870287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.560 qpair failed and we were unable to recover it.
00:38:32.560 [2024-10-01 15:56:11.870644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.560 [2024-10-01 15:56:11.870673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.560 qpair failed and we were unable to recover it.
00:38:32.560 [2024-10-01 15:56:11.870988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.560 [2024-10-01 15:56:11.871017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.560 qpair failed and we were unable to recover it.
00:38:32.560 [2024-10-01 15:56:11.871372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.560 [2024-10-01 15:56:11.871401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.560 qpair failed and we were unable to recover it.
00:38:32.560 [2024-10-01 15:56:11.871759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.560 [2024-10-01 15:56:11.871787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.560 qpair failed and we were unable to recover it.
00:38:32.560 [2024-10-01 15:56:11.872142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.560 [2024-10-01 15:56:11.872173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.560 qpair failed and we were unable to recover it.
00:38:32.560 [2024-10-01 15:56:11.872564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.560 [2024-10-01 15:56:11.872592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.560 qpair failed and we were unable to recover it.
00:38:32.560 [2024-10-01 15:56:11.872836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.560 [2024-10-01 15:56:11.872864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.560 qpair failed and we were unable to recover it.
00:38:32.560 [2024-10-01 15:56:11.873237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.560 [2024-10-01 15:56:11.873267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.560 qpair failed and we were unable to recover it.
00:38:32.561 [2024-10-01 15:56:11.873503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.561 [2024-10-01 15:56:11.873534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.561 qpair failed and we were unable to recover it.
00:38:32.561 [2024-10-01 15:56:11.873912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.561 [2024-10-01 15:56:11.873940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.561 qpair failed and we were unable to recover it.
00:38:32.561 [2024-10-01 15:56:11.874260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.561 [2024-10-01 15:56:11.874289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.561 qpair failed and we were unable to recover it.
00:38:32.561 [2024-10-01 15:56:11.874504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.561 [2024-10-01 15:56:11.874532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.561 qpair failed and we were unable to recover it.
00:38:32.561 [2024-10-01 15:56:11.874886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.561 [2024-10-01 15:56:11.874922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.561 qpair failed and we were unable to recover it.
00:38:32.561 [2024-10-01 15:56:11.875195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.561 [2024-10-01 15:56:11.875222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.561 qpair failed and we were unable to recover it.
00:38:32.561 [2024-10-01 15:56:11.875502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.561 [2024-10-01 15:56:11.875531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.561 qpair failed and we were unable to recover it.
00:38:32.561 [2024-10-01 15:56:11.875878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.561 [2024-10-01 15:56:11.875924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.561 qpair failed and we were unable to recover it.
00:38:32.561 [2024-10-01 15:56:11.876276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.561 [2024-10-01 15:56:11.876304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.561 qpair failed and we were unable to recover it.
00:38:32.561 [2024-10-01 15:56:11.876661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.561 [2024-10-01 15:56:11.876689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.561 qpair failed and we were unable to recover it.
00:38:32.561 [2024-10-01 15:56:11.877045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.561 [2024-10-01 15:56:11.877075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.561 qpair failed and we were unable to recover it.
00:38:32.561 [2024-10-01 15:56:11.877426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.561 [2024-10-01 15:56:11.877453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.561 qpair failed and we were unable to recover it.
00:38:32.561 [2024-10-01 15:56:11.877683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.561 [2024-10-01 15:56:11.877714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.561 qpair failed and we were unable to recover it.
00:38:32.561 [2024-10-01 15:56:11.877939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.561 [2024-10-01 15:56:11.877967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.561 qpair failed and we were unable to recover it.
00:38:32.561 [2024-10-01 15:56:11.878324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.561 [2024-10-01 15:56:11.878352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.561 qpair failed and we were unable to recover it.
00:38:32.561 [2024-10-01 15:56:11.878545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.561 [2024-10-01 15:56:11.878579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.561 qpair failed and we were unable to recover it.
00:38:32.561 [2024-10-01 15:56:11.878926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.561 [2024-10-01 15:56:11.878956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.561 qpair failed and we were unable to recover it.
00:38:32.561 [2024-10-01 15:56:11.879304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.561 [2024-10-01 15:56:11.879332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.561 qpair failed and we were unable to recover it.
00:38:32.561 [2024-10-01 15:56:11.879604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.561 [2024-10-01 15:56:11.879631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.561 qpair failed and we were unable to recover it.
00:38:32.561 [2024-10-01 15:56:11.879978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.561 [2024-10-01 15:56:11.880008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.561 qpair failed and we were unable to recover it.
00:38:32.561 [2024-10-01 15:56:11.880377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.561 [2024-10-01 15:56:11.880406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.561 qpair failed and we were unable to recover it.
00:38:32.561 [2024-10-01 15:56:11.880739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.561 [2024-10-01 15:56:11.880767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.561 qpair failed and we were unable to recover it.
00:38:32.561 [2024-10-01 15:56:11.881014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.561 [2024-10-01 15:56:11.881043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.561 qpair failed and we were unable to recover it.
00:38:32.561 [2024-10-01 15:56:11.881381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.561 [2024-10-01 15:56:11.881409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.561 qpair failed and we were unable to recover it.
00:38:32.561 [2024-10-01 15:56:11.881755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.561 [2024-10-01 15:56:11.881782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.561 qpair failed and we were unable to recover it.
00:38:32.561 [2024-10-01 15:56:11.881987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.561 [2024-10-01 15:56:11.882014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.561 qpair failed and we were unable to recover it.
00:38:32.561 [2024-10-01 15:56:11.882239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.561 [2024-10-01 15:56:11.882268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.561 qpair failed and we were unable to recover it.
00:38:32.561 [2024-10-01 15:56:11.882626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.561 [2024-10-01 15:56:11.882654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.561 qpair failed and we were unable to recover it.
00:38:32.561 [2024-10-01 15:56:11.882952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.561 [2024-10-01 15:56:11.882981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.561 qpair failed and we were unable to recover it.
00:38:32.561 [2024-10-01 15:56:11.883362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.561 [2024-10-01 15:56:11.883391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.561 qpair failed and we were unable to recover it.
00:38:32.561 [2024-10-01 15:56:11.883626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.561 [2024-10-01 15:56:11.883654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.561 qpair failed and we were unable to recover it.
00:38:32.561 [2024-10-01 15:56:11.883972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.561 [2024-10-01 15:56:11.884000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.561 qpair failed and we were unable to recover it.
00:38:32.561 [2024-10-01 15:56:11.884336] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:38:32.561 [2024-10-01 15:56:11.884354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.561 [2024-10-01 15:56:11.884367] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:38:32.561 [2024-10-01 15:56:11.884376] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:38:32.561 [2024-10-01 15:56:11.884383] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:38:32.561 [2024-10-01 15:56:11.884381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.561 [2024-10-01 15:56:11.884390] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:38:32.561 qpair failed and we were unable to recover it.
00:38:32.561 [2024-10-01 15:56:11.884537] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5
00:38:32.561 [2024-10-01 15:56:11.884751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.561 [2024-10-01 15:56:11.884779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.561 [2024-10-01 15:56:11.884694] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6
00:38:32.561 qpair failed and we were unable to recover it.
00:38:32.561 [2024-10-01 15:56:11.884811] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4
00:38:32.561 [2024-10-01 15:56:11.884813] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7
00:38:32.561 [2024-10-01 15:56:11.885195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.561 [2024-10-01 15:56:11.885225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.561 qpair failed and we were unable to recover it.
00:38:32.561 [2024-10-01 15:56:11.885570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.561 [2024-10-01 15:56:11.885597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.561 qpair failed and we were unable to recover it.
00:38:32.561 [2024-10-01 15:56:11.885863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.561 [2024-10-01 15:56:11.885891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.561 qpair failed and we were unable to recover it.
00:38:32.561 [2024-10-01 15:56:11.886275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.561 [2024-10-01 15:56:11.886302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.561 qpair failed and we were unable to recover it.
00:38:32.561 [2024-10-01 15:56:11.886634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.561 [2024-10-01 15:56:11.886661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420
00:38:32.561 qpair failed and we were unable to recover it.
00:38:32.561 [2024-10-01 15:56:11.887037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.561 [2024-10-01 15:56:11.887067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.561 qpair failed and we were unable to recover it. 00:38:32.561 [2024-10-01 15:56:11.887393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.561 [2024-10-01 15:56:11.887427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.561 qpair failed and we were unable to recover it. 00:38:32.561 [2024-10-01 15:56:11.887808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.561 [2024-10-01 15:56:11.887836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.561 qpair failed and we were unable to recover it. 00:38:32.561 [2024-10-01 15:56:11.888196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.561 [2024-10-01 15:56:11.888225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.561 qpair failed and we were unable to recover it. 00:38:32.561 [2024-10-01 15:56:11.888477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.561 [2024-10-01 15:56:11.888508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.561 qpair failed and we were unable to recover it. 
00:38:32.561 [2024-10-01 15:56:11.888772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.561 [2024-10-01 15:56:11.888800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.561 qpair failed and we were unable to recover it. 00:38:32.561 [2024-10-01 15:56:11.889065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.561 [2024-10-01 15:56:11.889094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.561 qpair failed and we were unable to recover it. 00:38:32.561 [2024-10-01 15:56:11.889413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.561 [2024-10-01 15:56:11.889440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.561 qpair failed and we were unable to recover it. 00:38:32.561 [2024-10-01 15:56:11.889646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.561 [2024-10-01 15:56:11.889674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.561 qpair failed and we were unable to recover it. 00:38:32.561 [2024-10-01 15:56:11.890016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.561 [2024-10-01 15:56:11.890045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.561 qpair failed and we were unable to recover it. 
00:38:32.561 [2024-10-01 15:56:11.890400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.561 [2024-10-01 15:56:11.890428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.561 qpair failed and we were unable to recover it. 00:38:32.561 [2024-10-01 15:56:11.890782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.561 [2024-10-01 15:56:11.890809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.561 qpair failed and we were unable to recover it. 00:38:32.561 [2024-10-01 15:56:11.891187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.561 [2024-10-01 15:56:11.891216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.561 qpair failed and we were unable to recover it. 00:38:32.561 [2024-10-01 15:56:11.891450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.561 [2024-10-01 15:56:11.891479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.561 qpair failed and we were unable to recover it. 00:38:32.561 [2024-10-01 15:56:11.891746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.561 [2024-10-01 15:56:11.891774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.561 qpair failed and we were unable to recover it. 
00:38:32.561 [2024-10-01 15:56:11.891990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.561 [2024-10-01 15:56:11.892018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.561 qpair failed and we were unable to recover it. 00:38:32.561 [2024-10-01 15:56:11.892391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.561 [2024-10-01 15:56:11.892419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.561 qpair failed and we were unable to recover it. 00:38:32.561 [2024-10-01 15:56:11.892766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.561 [2024-10-01 15:56:11.892794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.561 qpair failed and we were unable to recover it. 00:38:32.561 [2024-10-01 15:56:11.893182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.561 [2024-10-01 15:56:11.893211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.561 qpair failed and we were unable to recover it. 00:38:32.561 [2024-10-01 15:56:11.893547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.561 [2024-10-01 15:56:11.893574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.561 qpair failed and we were unable to recover it. 
00:38:32.561 [2024-10-01 15:56:11.893929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.561 [2024-10-01 15:56:11.893957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.561 qpair failed and we were unable to recover it. 00:38:32.561 [2024-10-01 15:56:11.894312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.561 [2024-10-01 15:56:11.894340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.561 qpair failed and we were unable to recover it. 00:38:32.561 [2024-10-01 15:56:11.894695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.561 [2024-10-01 15:56:11.894723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.561 qpair failed and we were unable to recover it. 00:38:32.561 [2024-10-01 15:56:11.895072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.561 [2024-10-01 15:56:11.895100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.561 qpair failed and we were unable to recover it. 00:38:32.561 [2024-10-01 15:56:11.895450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.561 [2024-10-01 15:56:11.895477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.561 qpair failed and we were unable to recover it. 
00:38:32.561 [2024-10-01 15:56:11.895826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.561 [2024-10-01 15:56:11.895854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.561 qpair failed and we were unable to recover it. 00:38:32.561 [2024-10-01 15:56:11.896220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.896255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-10-01 15:56:11.896603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.896632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-10-01 15:56:11.896940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.896970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-10-01 15:56:11.897313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.897340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 
00:38:32.562 [2024-10-01 15:56:11.897631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.897659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-10-01 15:56:11.897873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.897909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-10-01 15:56:11.898249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.898277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-10-01 15:56:11.898399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.898425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-10-01 15:56:11.898776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.898804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 
00:38:32.562 [2024-10-01 15:56:11.899145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.899173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-10-01 15:56:11.899433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.899462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-10-01 15:56:11.899701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.899728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-10-01 15:56:11.899960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.899988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-10-01 15:56:11.900112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.900140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 
00:38:32.562 [2024-10-01 15:56:11.900515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.900544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-10-01 15:56:11.900902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.900932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-10-01 15:56:11.901364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.901393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-10-01 15:56:11.901591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.901618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-10-01 15:56:11.901965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.901994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 
00:38:32.562 [2024-10-01 15:56:11.902364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.902392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-10-01 15:56:11.902724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.902753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-10-01 15:56:11.903023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.903051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-10-01 15:56:11.903408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.903436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-10-01 15:56:11.903749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.903777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 
00:38:32.562 [2024-10-01 15:56:11.904015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.904044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-10-01 15:56:11.904395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.904424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-10-01 15:56:11.904774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.904802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-10-01 15:56:11.905149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.905179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-10-01 15:56:11.905415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.905442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 
00:38:32.562 [2024-10-01 15:56:11.905662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.905690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-10-01 15:56:11.905888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.905927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-10-01 15:56:11.906283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.906311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-10-01 15:56:11.906558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.906585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-10-01 15:56:11.906913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.906942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 
00:38:32.562 [2024-10-01 15:56:11.907163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.907190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-10-01 15:56:11.907538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.907565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-10-01 15:56:11.907917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.907947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-10-01 15:56:11.908292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.908320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-10-01 15:56:11.908670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.908698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 
00:38:32.562 [2024-10-01 15:56:11.908927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.908959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-10-01 15:56:11.909318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.909353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-10-01 15:56:11.909698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.909727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-10-01 15:56:11.910076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.910105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-10-01 15:56:11.910497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.910525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 
00:38:32.562 [2024-10-01 15:56:11.910767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.910794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-10-01 15:56:11.911150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.911179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-10-01 15:56:11.911517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.911544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-10-01 15:56:11.911947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.911975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-10-01 15:56:11.912303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.912331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 
00:38:32.562 [2024-10-01 15:56:11.912548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.912575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-10-01 15:56:11.912924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.912953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-10-01 15:56:11.913076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.913102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-10-01 15:56:11.913365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.913392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-10-01 15:56:11.913621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.913649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 
00:38:32.562 [2024-10-01 15:56:11.914050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.914080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-10-01 15:56:11.914394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.914422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-10-01 15:56:11.914755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.914783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-10-01 15:56:11.915173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.915202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-10-01 15:56:11.915541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.915569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 
00:38:32.562 [2024-10-01 15:56:11.915950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.915978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-10-01 15:56:11.916075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.916100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-10-01 15:56:11.916407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.916435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-10-01 15:56:11.916789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.916817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-10-01 15:56:11.917034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.917066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 
00:38:32.562 [2024-10-01 15:56:11.917405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.917433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-10-01 15:56:11.917648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.917674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-10-01 15:56:11.918017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.918046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-10-01 15:56:11.918458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.918488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-10-01 15:56:11.918721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.918748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 
00:38:32.562 [2024-10-01 15:56:11.918836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.918863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f239c000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-10-01 15:56:11.918939] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c66260 (9): Bad file descriptor 00:38:32.562 [2024-10-01 15:56:11.919449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.919570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-10-01 15:56:11.920154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-10-01 15:56:11.920249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-10-01 15:56:11.920738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.920787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 00:38:32.563 [2024-10-01 15:56:11.921319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.921414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 
00:38:32.563 [2024-10-01 15:56:11.921730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.921779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 00:38:32.563 [2024-10-01 15:56:11.922170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.922214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 00:38:32.563 [2024-10-01 15:56:11.922472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.922512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 00:38:32.563 [2024-10-01 15:56:11.922742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.922781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 00:38:32.563 [2024-10-01 15:56:11.923209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.923250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 
00:38:32.563 [2024-10-01 15:56:11.923510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.923549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 00:38:32.563 [2024-10-01 15:56:11.923922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.923963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 00:38:32.563 [2024-10-01 15:56:11.924373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.924413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 00:38:32.563 [2024-10-01 15:56:11.924823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.924863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 00:38:32.563 [2024-10-01 15:56:11.925257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.925297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 
00:38:32.563 [2024-10-01 15:56:11.925694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.925732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 00:38:32.563 [2024-10-01 15:56:11.926019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.926060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 00:38:32.563 [2024-10-01 15:56:11.926455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.926495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 00:38:32.563 [2024-10-01 15:56:11.926848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.926887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 00:38:32.563 [2024-10-01 15:56:11.927303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.927343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 
00:38:32.563 [2024-10-01 15:56:11.927608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.927647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 00:38:32.563 [2024-10-01 15:56:11.928005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.928045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 00:38:32.563 [2024-10-01 15:56:11.928427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.928466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 00:38:32.563 [2024-10-01 15:56:11.928852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.928891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 00:38:32.563 [2024-10-01 15:56:11.929174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.929227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 
00:38:32.563 [2024-10-01 15:56:11.929490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.929529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 00:38:32.563 [2024-10-01 15:56:11.929937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.929979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 00:38:32.563 [2024-10-01 15:56:11.930382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.930421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 00:38:32.563 [2024-10-01 15:56:11.930810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.930849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 00:38:32.563 [2024-10-01 15:56:11.931241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.931283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 
00:38:32.563 [2024-10-01 15:56:11.931420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.931469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 00:38:32.563 [2024-10-01 15:56:11.931741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.931781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 00:38:32.563 [2024-10-01 15:56:11.932148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.932189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 00:38:32.563 [2024-10-01 15:56:11.932584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.932623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 00:38:32.563 [2024-10-01 15:56:11.933022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.933062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 
00:38:32.563 [2024-10-01 15:56:11.933462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.933501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 00:38:32.563 [2024-10-01 15:56:11.933884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.933937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 00:38:32.563 [2024-10-01 15:56:11.934299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.934338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 00:38:32.563 [2024-10-01 15:56:11.934611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.934651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 00:38:32.563 [2024-10-01 15:56:11.935062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.935103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 
00:38:32.563 [2024-10-01 15:56:11.935375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.935414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 00:38:32.563 [2024-10-01 15:56:11.935791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.935830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 00:38:32.563 [2024-10-01 15:56:11.936261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.936303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 00:38:32.563 [2024-10-01 15:56:11.936690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.936729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 00:38:32.563 [2024-10-01 15:56:11.936976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.937015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 
00:38:32.563 [2024-10-01 15:56:11.937148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.937195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 00:38:32.563 [2024-10-01 15:56:11.937612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.937651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 00:38:32.563 [2024-10-01 15:56:11.937932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.937974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 00:38:32.563 [2024-10-01 15:56:11.938383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.938422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 00:38:32.563 [2024-10-01 15:56:11.938817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.938855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 
00:38:32.563 [2024-10-01 15:56:11.939252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.939293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 00:38:32.563 [2024-10-01 15:56:11.939681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.939721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 00:38:32.563 [2024-10-01 15:56:11.939974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.940015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 00:38:32.563 [2024-10-01 15:56:11.940165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.940214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 00:38:32.563 [2024-10-01 15:56:11.940644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.940683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 
00:38:32.563 [2024-10-01 15:56:11.941079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.941118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 00:38:32.563 [2024-10-01 15:56:11.941485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.941524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 00:38:32.563 [2024-10-01 15:56:11.941930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.941971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 00:38:32.563 [2024-10-01 15:56:11.942264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.942305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 00:38:32.563 [2024-10-01 15:56:11.942705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.942744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 
00:38:32.563 [2024-10-01 15:56:11.943017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.943057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 00:38:32.563 [2024-10-01 15:56:11.943303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.943341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 00:38:32.563 [2024-10-01 15:56:11.943756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.943794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 00:38:32.563 [2024-10-01 15:56:11.944085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.944127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 00:38:32.563 [2024-10-01 15:56:11.944514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.944561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 
00:38:32.563 [2024-10-01 15:56:11.944932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.944973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 00:38:32.563 [2024-10-01 15:56:11.945226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.945269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 00:38:32.563 [2024-10-01 15:56:11.945674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.945712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 00:38:32.563 [2024-10-01 15:56:11.946108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.563 [2024-10-01 15:56:11.946148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.563 qpair failed and we were unable to recover it. 00:38:32.563 [2024-10-01 15:56:11.946411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.946450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 
00:38:32.564 [2024-10-01 15:56:11.946822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.946861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 00:38:32.564 [2024-10-01 15:56:11.947257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.947296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 00:38:32.564 [2024-10-01 15:56:11.947678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.947716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 00:38:32.564 [2024-10-01 15:56:11.948108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.948147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 00:38:32.564 [2024-10-01 15:56:11.948400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.948439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 
00:38:32.564 [2024-10-01 15:56:11.948804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.948843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 00:38:32.564 [2024-10-01 15:56:11.949226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.949266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 00:38:32.564 [2024-10-01 15:56:11.949520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.949559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 00:38:32.564 [2024-10-01 15:56:11.949859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.949935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 00:38:32.564 [2024-10-01 15:56:11.950323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.950363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 
00:38:32.564 [2024-10-01 15:56:11.950743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.950783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 00:38:32.564 [2024-10-01 15:56:11.951173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.951214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 00:38:32.564 [2024-10-01 15:56:11.951595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.951634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 00:38:32.564 [2024-10-01 15:56:11.952019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.952059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 00:38:32.564 [2024-10-01 15:56:11.952445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.952485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 
00:38:32.564 [2024-10-01 15:56:11.952828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.952867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 00:38:32.564 [2024-10-01 15:56:11.953273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.953312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 00:38:32.564 [2024-10-01 15:56:11.953692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.953731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 00:38:32.564 [2024-10-01 15:56:11.954115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.954155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 00:38:32.564 [2024-10-01 15:56:11.954411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.954452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 
00:38:32.564 [2024-10-01 15:56:11.954871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.954920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 00:38:32.564 [2024-10-01 15:56:11.955212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.955252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 00:38:32.564 [2024-10-01 15:56:11.955540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.955581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 00:38:32.564 [2024-10-01 15:56:11.955850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.955889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 00:38:32.564 [2024-10-01 15:56:11.956306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.956345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 
00:38:32.564 [2024-10-01 15:56:11.956729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.956768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 00:38:32.564 [2024-10-01 15:56:11.957053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.957097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 00:38:32.564 [2024-10-01 15:56:11.957513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.957552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 00:38:32.564 [2024-10-01 15:56:11.957936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.957976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 00:38:32.564 [2024-10-01 15:56:11.958379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.958418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 
00:38:32.564 [2024-10-01 15:56:11.958683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.958722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 00:38:32.564 [2024-10-01 15:56:11.959177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.959216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 00:38:32.564 [2024-10-01 15:56:11.959596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.959635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 00:38:32.564 [2024-10-01 15:56:11.959904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.959944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 00:38:32.564 [2024-10-01 15:56:11.960209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.960260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 
00:38:32.564 [2024-10-01 15:56:11.960655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.960693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 00:38:32.564 [2024-10-01 15:56:11.961114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.961154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 00:38:32.564 [2024-10-01 15:56:11.961503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.961541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 00:38:32.564 [2024-10-01 15:56:11.961920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.961960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 00:38:32.564 [2024-10-01 15:56:11.962362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.962401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 
00:38:32.564 [2024-10-01 15:56:11.962667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.962707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 00:38:32.564 [2024-10-01 15:56:11.963091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.963130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 00:38:32.564 [2024-10-01 15:56:11.963489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.963527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 00:38:32.564 [2024-10-01 15:56:11.963945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.963985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 00:38:32.564 [2024-10-01 15:56:11.964379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.964419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 
00:38:32.564 [2024-10-01 15:56:11.964685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.964725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 00:38:32.564 [2024-10-01 15:56:11.965106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.965147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 00:38:32.564 [2024-10-01 15:56:11.965531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.965570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 00:38:32.564 [2024-10-01 15:56:11.965836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.965875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 00:38:32.564 [2024-10-01 15:56:11.966308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.966348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 
00:38:32.564 [2024-10-01 15:56:11.966618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.966657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 00:38:32.564 [2024-10-01 15:56:11.967065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.967106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 00:38:32.564 [2024-10-01 15:56:11.967452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.967492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 00:38:32.564 [2024-10-01 15:56:11.967905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.967944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 00:38:32.564 [2024-10-01 15:56:11.968317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.968357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 
00:38:32.564 [2024-10-01 15:56:11.968736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.968775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 00:38:32.564 [2024-10-01 15:56:11.969219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.969260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 00:38:32.564 [2024-10-01 15:56:11.969651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.969690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 00:38:32.564 [2024-10-01 15:56:11.969994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.970035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 00:38:32.564 [2024-10-01 15:56:11.970421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.970459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 
00:38:32.564 [2024-10-01 15:56:11.970840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.970878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 00:38:32.564 [2024-10-01 15:56:11.971254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.971296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 00:38:32.564 [2024-10-01 15:56:11.971678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.971717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 00:38:32.564 [2024-10-01 15:56:11.972102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.972142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 00:38:32.564 [2024-10-01 15:56:11.972521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.972560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 
00:38:32.564 [2024-10-01 15:56:11.972813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.972854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 00:38:32.564 [2024-10-01 15:56:11.973246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.973286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 00:38:32.564 [2024-10-01 15:56:11.973536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.973577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 00:38:32.564 [2024-10-01 15:56:11.973821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.973860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 00:38:32.564 [2024-10-01 15:56:11.974259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.974300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.564 qpair failed and we were unable to recover it. 
00:38:32.564 [2024-10-01 15:56:11.974662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.564 [2024-10-01 15:56:11.974699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.565 qpair failed and we were unable to recover it. 00:38:32.565 [2024-10-01 15:56:11.975087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.565 [2024-10-01 15:56:11.975128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.565 qpair failed and we were unable to recover it. 00:38:32.565 [2024-10-01 15:56:11.975344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.565 [2024-10-01 15:56:11.975383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.565 qpair failed and we were unable to recover it. 00:38:32.565 [2024-10-01 15:56:11.975753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.565 [2024-10-01 15:56:11.975791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.565 qpair failed and we were unable to recover it. 00:38:32.565 [2024-10-01 15:56:11.976183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.565 [2024-10-01 15:56:11.976234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.565 qpair failed and we were unable to recover it. 
00:38:32.565 [2024-10-01 15:56:11.976622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.565 [2024-10-01 15:56:11.976661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.565 qpair failed and we were unable to recover it. 00:38:32.565 [2024-10-01 15:56:11.977049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.565 [2024-10-01 15:56:11.977089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.565 qpair failed and we were unable to recover it. 00:38:32.565 [2024-10-01 15:56:11.977483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.565 [2024-10-01 15:56:11.977522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.565 qpair failed and we were unable to recover it. 00:38:32.565 [2024-10-01 15:56:11.977914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.565 [2024-10-01 15:56:11.977955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.565 qpair failed and we were unable to recover it. 00:38:32.565 [2024-10-01 15:56:11.978346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.565 [2024-10-01 15:56:11.978386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.565 qpair failed and we were unable to recover it. 
00:38:32.565 [2024-10-01 15:56:11.978652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.565 [2024-10-01 15:56:11.978691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.565 qpair failed and we were unable to recover it. 00:38:32.565 [2024-10-01 15:56:11.979088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.565 [2024-10-01 15:56:11.979130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.565 qpair failed and we were unable to recover it. 00:38:32.565 [2024-10-01 15:56:11.979520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.565 [2024-10-01 15:56:11.979558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.565 qpair failed and we were unable to recover it. 00:38:32.565 [2024-10-01 15:56:11.979816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.565 [2024-10-01 15:56:11.979854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.565 qpair failed and we were unable to recover it. 00:38:32.565 [2024-10-01 15:56:11.980238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.565 [2024-10-01 15:56:11.980278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.565 qpair failed and we were unable to recover it. 
00:38:32.565 [2024-10-01 15:56:11.980530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.565 [2024-10-01 15:56:11.980570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.565 qpair failed and we were unable to recover it. 00:38:32.565 [2024-10-01 15:56:11.980828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.565 [2024-10-01 15:56:11.980870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.565 qpair failed and we were unable to recover it. 00:38:32.565 [2024-10-01 15:56:11.981147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.565 [2024-10-01 15:56:11.981187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.565 qpair failed and we were unable to recover it. 00:38:32.565 [2024-10-01 15:56:11.981595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.565 [2024-10-01 15:56:11.981635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.565 qpair failed and we were unable to recover it. 00:38:32.565 [2024-10-01 15:56:11.981908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.565 [2024-10-01 15:56:11.981948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.565 qpair failed and we were unable to recover it. 
00:38:32.565 [2024-10-01 15:56:11.982199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.565 [2024-10-01 15:56:11.982238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.565 qpair failed and we were unable to recover it. 00:38:32.565 [2024-10-01 15:56:11.982501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.565 [2024-10-01 15:56:11.982541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.565 qpair failed and we were unable to recover it. 00:38:32.565 [2024-10-01 15:56:11.982930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.565 [2024-10-01 15:56:11.982970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.565 qpair failed and we were unable to recover it. 00:38:32.565 [2024-10-01 15:56:11.983315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.565 [2024-10-01 15:56:11.983354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.565 qpair failed and we were unable to recover it. 00:38:32.565 [2024-10-01 15:56:11.983737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.565 [2024-10-01 15:56:11.983777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.565 qpair failed and we were unable to recover it. 
00:38:32.565 [2024-10-01 15:56:11.984050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.565 [2024-10-01 15:56:11.984094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.565 qpair failed and we were unable to recover it. 00:38:32.565 [2024-10-01 15:56:11.984477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.565 [2024-10-01 15:56:11.984516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.565 qpair failed and we were unable to recover it. 00:38:32.565 [2024-10-01 15:56:11.984910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.565 [2024-10-01 15:56:11.984950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.565 qpair failed and we were unable to recover it. 00:38:32.565 [2024-10-01 15:56:11.985335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.565 [2024-10-01 15:56:11.985375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.565 qpair failed and we were unable to recover it. 00:38:32.565 [2024-10-01 15:56:11.985503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.565 [2024-10-01 15:56:11.985550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.565 qpair failed and we were unable to recover it. 
00:38:32.565 [2024-10-01 15:56:11.985944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.565 [2024-10-01 15:56:11.985985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.565 qpair failed and we were unable to recover it. 00:38:32.565 [2024-10-01 15:56:11.986280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.565 [2024-10-01 15:56:11.986322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.565 qpair failed and we were unable to recover it. 00:38:32.565 [2024-10-01 15:56:11.986693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.565 [2024-10-01 15:56:11.986733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.565 qpair failed and we were unable to recover it. 00:38:32.565 [2024-10-01 15:56:11.987140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.565 [2024-10-01 15:56:11.987181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.565 qpair failed and we were unable to recover it. 00:38:32.565 [2024-10-01 15:56:11.987564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.565 [2024-10-01 15:56:11.987603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.565 qpair failed and we were unable to recover it. 
00:38:32.565 [2024-10-01 15:56:11.987864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.565 [2024-10-01 15:56:11.987914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.565 qpair failed and we were unable to recover it. 00:38:32.565 [2024-10-01 15:56:11.988312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.565 [2024-10-01 15:56:11.988351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.565 qpair failed and we were unable to recover it. 00:38:32.565 [2024-10-01 15:56:11.988731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.565 [2024-10-01 15:56:11.988771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.565 qpair failed and we were unable to recover it. 00:38:32.565 [2024-10-01 15:56:11.989136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.565 [2024-10-01 15:56:11.989177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.565 qpair failed and we were unable to recover it. 00:38:32.565 [2024-10-01 15:56:11.989543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.565 [2024-10-01 15:56:11.989582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.565 qpair failed and we were unable to recover it. 
00:38:32.565 [2024-10-01 15:56:11.989779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.565 [2024-10-01 15:56:11.989818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.565 qpair failed and we were unable to recover it. 00:38:32.565 [2024-10-01 15:56:11.990237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.565 [2024-10-01 15:56:11.990278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.565 qpair failed and we were unable to recover it. 00:38:32.565 [2024-10-01 15:56:11.990658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.565 [2024-10-01 15:56:11.990698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.565 qpair failed and we were unable to recover it. 00:38:32.565 [2024-10-01 15:56:11.990962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.565 [2024-10-01 15:56:11.991002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.565 qpair failed and we were unable to recover it. 00:38:32.565 [2024-10-01 15:56:11.991371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.565 [2024-10-01 15:56:11.991418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.565 qpair failed and we were unable to recover it. 
00:38:32.565 [... identical three-line error sequence repeats for each subsequent connect() retry, timestamps 15:56:11.991781 through 15:56:12.034597 (~110 further occurrences, log time 00:38:32.565-00:38:32.842): every connect() to 10.0.0.2:4420 fails with errno = 111 and tqpair=0x7f23a4000b90 cannot be recovered ...]
00:38:32.842 [2024-10-01 15:56:12.035034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.842 [2024-10-01 15:56:12.035074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.842 qpair failed and we were unable to recover it. 00:38:32.842 [2024-10-01 15:56:12.035324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.842 [2024-10-01 15:56:12.035363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.842 qpair failed and we were unable to recover it. 00:38:32.842 [2024-10-01 15:56:12.035622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.842 [2024-10-01 15:56:12.035660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.842 qpair failed and we were unable to recover it. 00:38:32.842 [2024-10-01 15:56:12.035921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.842 [2024-10-01 15:56:12.035965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.842 qpair failed and we were unable to recover it. 00:38:32.842 [2024-10-01 15:56:12.036231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.842 [2024-10-01 15:56:12.036271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.842 qpair failed and we were unable to recover it. 
00:38:32.842 [2024-10-01 15:56:12.036655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.842 [2024-10-01 15:56:12.036694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.842 qpair failed and we were unable to recover it. 00:38:32.842 [2024-10-01 15:56:12.036956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.842 [2024-10-01 15:56:12.036997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.842 qpair failed and we were unable to recover it. 00:38:32.842 [2024-10-01 15:56:12.037405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.842 [2024-10-01 15:56:12.037446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.842 qpair failed and we were unable to recover it. 00:38:32.842 [2024-10-01 15:56:12.037691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.843 [2024-10-01 15:56:12.037731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.843 qpair failed and we were unable to recover it. 00:38:32.843 [2024-10-01 15:56:12.038005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.843 [2024-10-01 15:56:12.038055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.843 qpair failed and we were unable to recover it. 
00:38:32.843 [2024-10-01 15:56:12.038317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.843 [2024-10-01 15:56:12.038356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.843 qpair failed and we were unable to recover it. 00:38:32.843 [2024-10-01 15:56:12.038728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.843 [2024-10-01 15:56:12.038767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.843 qpair failed and we were unable to recover it. 00:38:32.843 [2024-10-01 15:56:12.039136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.843 [2024-10-01 15:56:12.039175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.843 qpair failed and we were unable to recover it. 00:38:32.843 [2024-10-01 15:56:12.039523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.843 [2024-10-01 15:56:12.039562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.843 qpair failed and we were unable to recover it. 00:38:32.843 [2024-10-01 15:56:12.039955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.843 [2024-10-01 15:56:12.039996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.843 qpair failed and we were unable to recover it. 
00:38:32.843 [2024-10-01 15:56:12.040368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.843 [2024-10-01 15:56:12.040406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.843 qpair failed and we were unable to recover it. 00:38:32.843 [2024-10-01 15:56:12.040673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.843 [2024-10-01 15:56:12.040712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.843 qpair failed and we were unable to recover it. 00:38:32.843 [2024-10-01 15:56:12.041109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.843 [2024-10-01 15:56:12.041149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.843 qpair failed and we were unable to recover it. 00:38:32.843 [2024-10-01 15:56:12.041539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.843 [2024-10-01 15:56:12.041578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.843 qpair failed and we were unable to recover it. 00:38:32.843 [2024-10-01 15:56:12.041954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.843 [2024-10-01 15:56:12.041993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.843 qpair failed and we were unable to recover it. 
00:38:32.843 [2024-10-01 15:56:12.042374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.843 [2024-10-01 15:56:12.042414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.843 qpair failed and we were unable to recover it. 00:38:32.843 [2024-10-01 15:56:12.042639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.843 [2024-10-01 15:56:12.042678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.843 qpair failed and we were unable to recover it. 00:38:32.843 [2024-10-01 15:56:12.043063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.843 [2024-10-01 15:56:12.043103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.843 qpair failed and we were unable to recover it. 00:38:32.843 [2024-10-01 15:56:12.043517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.843 [2024-10-01 15:56:12.043558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.843 qpair failed and we were unable to recover it. 00:38:32.843 [2024-10-01 15:56:12.043830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.843 [2024-10-01 15:56:12.043868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.843 qpair failed and we were unable to recover it. 
00:38:32.843 [2024-10-01 15:56:12.044263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.843 [2024-10-01 15:56:12.044303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.843 qpair failed and we were unable to recover it. 00:38:32.843 [2024-10-01 15:56:12.044707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.843 [2024-10-01 15:56:12.044746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.843 qpair failed and we were unable to recover it. 00:38:32.843 [2024-10-01 15:56:12.044989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.843 [2024-10-01 15:56:12.045030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.843 qpair failed and we were unable to recover it. 00:38:32.843 [2024-10-01 15:56:12.045292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.843 [2024-10-01 15:56:12.045331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.843 qpair failed and we were unable to recover it. 00:38:32.843 [2024-10-01 15:56:12.045740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.843 [2024-10-01 15:56:12.045779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.843 qpair failed and we were unable to recover it. 
00:38:32.843 [2024-10-01 15:56:12.046175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.843 [2024-10-01 15:56:12.046216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.843 qpair failed and we were unable to recover it. 00:38:32.843 [2024-10-01 15:56:12.046607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.843 [2024-10-01 15:56:12.046646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.843 qpair failed and we were unable to recover it. 00:38:32.843 [2024-10-01 15:56:12.047093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.843 [2024-10-01 15:56:12.047133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.843 qpair failed and we were unable to recover it. 00:38:32.843 [2024-10-01 15:56:12.047518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.843 [2024-10-01 15:56:12.047558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.843 qpair failed and we were unable to recover it. 00:38:32.843 [2024-10-01 15:56:12.047811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.843 [2024-10-01 15:56:12.047849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.843 qpair failed and we were unable to recover it. 
00:38:32.843 [2024-10-01 15:56:12.048246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.843 [2024-10-01 15:56:12.048287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.843 qpair failed and we were unable to recover it. 00:38:32.843 [2024-10-01 15:56:12.048669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.843 [2024-10-01 15:56:12.048717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.843 qpair failed and we were unable to recover it. 00:38:32.843 [2024-10-01 15:56:12.048965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.843 [2024-10-01 15:56:12.049009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.843 qpair failed and we were unable to recover it. 00:38:32.843 [2024-10-01 15:56:12.049275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.843 [2024-10-01 15:56:12.049314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.843 qpair failed and we were unable to recover it. 00:38:32.843 [2024-10-01 15:56:12.049723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.843 [2024-10-01 15:56:12.049762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.843 qpair failed and we were unable to recover it. 
00:38:32.843 [2024-10-01 15:56:12.050140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.843 [2024-10-01 15:56:12.050181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.843 qpair failed and we were unable to recover it. 00:38:32.843 [2024-10-01 15:56:12.050548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.843 [2024-10-01 15:56:12.050586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.843 qpair failed and we were unable to recover it. 00:38:32.843 [2024-10-01 15:56:12.050983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.843 [2024-10-01 15:56:12.051023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.843 qpair failed and we were unable to recover it. 00:38:32.843 [2024-10-01 15:56:12.051412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.843 [2024-10-01 15:56:12.051452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.843 qpair failed and we were unable to recover it. 00:38:32.843 [2024-10-01 15:56:12.051833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.843 [2024-10-01 15:56:12.051871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.843 qpair failed and we were unable to recover it. 
00:38:32.843 [2024-10-01 15:56:12.052111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.843 [2024-10-01 15:56:12.052150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.843 qpair failed and we were unable to recover it. 00:38:32.843 [2024-10-01 15:56:12.052552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.843 [2024-10-01 15:56:12.052591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.843 qpair failed and we were unable to recover it. 00:38:32.844 [2024-10-01 15:56:12.052714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.844 [2024-10-01 15:56:12.052762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.844 qpair failed and we were unable to recover it. 00:38:32.844 [2024-10-01 15:56:12.053177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.844 [2024-10-01 15:56:12.053218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.844 qpair failed and we were unable to recover it. 00:38:32.844 [2024-10-01 15:56:12.053365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.844 [2024-10-01 15:56:12.053414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.844 qpair failed and we were unable to recover it. 
00:38:32.844 [2024-10-01 15:56:12.053658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.844 [2024-10-01 15:56:12.053697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.844 qpair failed and we were unable to recover it. 00:38:32.844 [2024-10-01 15:56:12.054091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.844 [2024-10-01 15:56:12.054131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.844 qpair failed and we were unable to recover it. 00:38:32.844 [2024-10-01 15:56:12.054512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.844 [2024-10-01 15:56:12.054551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.844 qpair failed and we were unable to recover it. 00:38:32.844 [2024-10-01 15:56:12.054906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.844 [2024-10-01 15:56:12.054947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.844 qpair failed and we were unable to recover it. 00:38:32.844 [2024-10-01 15:56:12.055185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.844 [2024-10-01 15:56:12.055223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.844 qpair failed and we were unable to recover it. 
00:38:32.844 [2024-10-01 15:56:12.055505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.844 [2024-10-01 15:56:12.055544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.844 qpair failed and we were unable to recover it. 00:38:32.844 [2024-10-01 15:56:12.055959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.844 [2024-10-01 15:56:12.055998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.844 qpair failed and we were unable to recover it. 00:38:32.844 [2024-10-01 15:56:12.056399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.844 [2024-10-01 15:56:12.056438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.844 qpair failed and we were unable to recover it. 00:38:32.844 [2024-10-01 15:56:12.056819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.844 [2024-10-01 15:56:12.056859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.844 qpair failed and we were unable to recover it. 00:38:32.844 [2024-10-01 15:56:12.057148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.844 [2024-10-01 15:56:12.057189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.844 qpair failed and we were unable to recover it. 
00:38:32.844 [2024-10-01 15:56:12.057595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.844 [2024-10-01 15:56:12.057634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.844 qpair failed and we were unable to recover it. 00:38:32.844 [2024-10-01 15:56:12.058065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.844 [2024-10-01 15:56:12.058105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.844 qpair failed and we were unable to recover it. 00:38:32.844 [2024-10-01 15:56:12.058456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.844 [2024-10-01 15:56:12.058496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.844 qpair failed and we were unable to recover it. 00:38:32.844 [2024-10-01 15:56:12.058909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.844 [2024-10-01 15:56:12.058950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.844 qpair failed and we were unable to recover it. 00:38:32.844 [2024-10-01 15:56:12.059369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.844 [2024-10-01 15:56:12.059408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.844 qpair failed and we were unable to recover it. 
00:38:32.844 [2024-10-01 15:56:12.059627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.844 [2024-10-01 15:56:12.059669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.844 qpair failed and we were unable to recover it. 00:38:32.844 [2024-10-01 15:56:12.059930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.844 [2024-10-01 15:56:12.059970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.844 qpair failed and we were unable to recover it. 00:38:32.844 [2024-10-01 15:56:12.060390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.844 [2024-10-01 15:56:12.060428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.844 qpair failed and we were unable to recover it. 00:38:32.844 [2024-10-01 15:56:12.060687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.844 [2024-10-01 15:56:12.060726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.844 qpair failed and we were unable to recover it. 00:38:32.844 [2024-10-01 15:56:12.061018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.844 [2024-10-01 15:56:12.061060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.844 qpair failed and we were unable to recover it. 
00:38:32.844 [2024-10-01 15:56:12.061443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.844 [2024-10-01 15:56:12.061482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.844 qpair failed and we were unable to recover it. 00:38:32.844 [2024-10-01 15:56:12.061825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.844 [2024-10-01 15:56:12.061864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.844 qpair failed and we were unable to recover it. 00:38:32.844 [2024-10-01 15:56:12.062263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.844 [2024-10-01 15:56:12.062303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.844 qpair failed and we were unable to recover it. 00:38:32.844 [2024-10-01 15:56:12.062555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.844 [2024-10-01 15:56:12.062593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.844 qpair failed and we were unable to recover it. 00:38:32.844 [2024-10-01 15:56:12.062965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.844 [2024-10-01 15:56:12.063005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.844 qpair failed and we were unable to recover it. 
00:38:32.844 [2024-10-01 15:56:12.063417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.844 [2024-10-01 15:56:12.063457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.844 qpair failed and we were unable to recover it. 00:38:32.844 [2024-10-01 15:56:12.063837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.844 [2024-10-01 15:56:12.063884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.844 qpair failed and we were unable to recover it. 00:38:32.844 [2024-10-01 15:56:12.064289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.844 [2024-10-01 15:56:12.064329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.844 qpair failed and we were unable to recover it. 00:38:32.844 [2024-10-01 15:56:12.064708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.844 [2024-10-01 15:56:12.064747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.844 qpair failed and we were unable to recover it. 00:38:32.844 [2024-10-01 15:56:12.065149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.844 [2024-10-01 15:56:12.065188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.844 qpair failed and we were unable to recover it. 
00:38:32.845 [2024-10-01 15:56:12.065574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.845 [2024-10-01 15:56:12.065613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.845 qpair failed and we were unable to recover it. 00:38:32.845 [2024-10-01 15:56:12.065994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.845 [2024-10-01 15:56:12.066034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.845 qpair failed and we were unable to recover it. 00:38:32.845 [2024-10-01 15:56:12.066320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.845 [2024-10-01 15:56:12.066360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.845 qpair failed and we were unable to recover it. 00:38:32.845 [2024-10-01 15:56:12.066637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.845 [2024-10-01 15:56:12.066675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.845 qpair failed and we were unable to recover it. 00:38:32.845 [2024-10-01 15:56:12.066969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.845 [2024-10-01 15:56:12.067010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.845 qpair failed and we were unable to recover it. 
00:38:32.845 [2024-10-01 15:56:12.067396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.845 [2024-10-01 15:56:12.067436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.845 qpair failed and we were unable to recover it. 00:38:32.845 [2024-10-01 15:56:12.067567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.845 [2024-10-01 15:56:12.067614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.845 qpair failed and we were unable to recover it. 00:38:32.845 [2024-10-01 15:56:12.067977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.845 [2024-10-01 15:56:12.068018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.845 qpair failed and we were unable to recover it. 00:38:32.845 [2024-10-01 15:56:12.068406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.845 [2024-10-01 15:56:12.068444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.845 qpair failed and we were unable to recover it. 00:38:32.845 [2024-10-01 15:56:12.068768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.845 [2024-10-01 15:56:12.068807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.845 qpair failed and we were unable to recover it. 
00:38:32.845 [2024-10-01 15:56:12.069244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.845 [2024-10-01 15:56:12.069285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.845 qpair failed and we were unable to recover it. 00:38:32.845 [2024-10-01 15:56:12.069672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.845 [2024-10-01 15:56:12.069713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.845 qpair failed and we were unable to recover it. 00:38:32.845 [2024-10-01 15:56:12.070095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.845 [2024-10-01 15:56:12.070136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.845 qpair failed and we were unable to recover it. 00:38:32.845 [2024-10-01 15:56:12.070498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.845 [2024-10-01 15:56:12.070538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.845 qpair failed and we were unable to recover it. 00:38:32.845 [2024-10-01 15:56:12.070923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.845 [2024-10-01 15:56:12.070964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.845 qpair failed and we were unable to recover it. 
00:38:32.845 [2024-10-01 15:56:12.071348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.845 [2024-10-01 15:56:12.071388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.845 qpair failed and we were unable to recover it. 00:38:32.845 [2024-10-01 15:56:12.071682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.845 [2024-10-01 15:56:12.071721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.845 qpair failed and we were unable to recover it. 00:38:32.845 [2024-10-01 15:56:12.071987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.845 [2024-10-01 15:56:12.072029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.845 qpair failed and we were unable to recover it. 00:38:32.845 [2024-10-01 15:56:12.072346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.845 [2024-10-01 15:56:12.072386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.845 qpair failed and we were unable to recover it. 00:38:32.845 [2024-10-01 15:56:12.072774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.845 [2024-10-01 15:56:12.072812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.845 qpair failed and we were unable to recover it. 
00:38:32.845 [2024-10-01 15:56:12.073176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.845 [2024-10-01 15:56:12.073216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.845 qpair failed and we were unable to recover it. 00:38:32.845 [2024-10-01 15:56:12.073586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.845 [2024-10-01 15:56:12.073624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.845 qpair failed and we were unable to recover it. 00:38:32.845 [2024-10-01 15:56:12.073779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.845 [2024-10-01 15:56:12.073825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.845 qpair failed and we were unable to recover it. 00:38:32.845 [2024-10-01 15:56:12.074240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.845 [2024-10-01 15:56:12.074283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.845 qpair failed and we were unable to recover it. 00:38:32.845 [2024-10-01 15:56:12.074664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.845 [2024-10-01 15:56:12.074704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.845 qpair failed and we were unable to recover it. 
00:38:32.845 [2024-10-01 15:56:12.075089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.845 [2024-10-01 15:56:12.075130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.845 qpair failed and we were unable to recover it. 00:38:32.845 [2024-10-01 15:56:12.075395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.845 [2024-10-01 15:56:12.075434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.845 qpair failed and we were unable to recover it. 00:38:32.845 [2024-10-01 15:56:12.075690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.845 [2024-10-01 15:56:12.075728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.845 qpair failed and we were unable to recover it. 00:38:32.845 [2024-10-01 15:56:12.076112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.845 [2024-10-01 15:56:12.076153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.845 qpair failed and we were unable to recover it. 00:38:32.845 [2024-10-01 15:56:12.076518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.845 [2024-10-01 15:56:12.076557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.845 qpair failed and we were unable to recover it. 
00:38:32.845 [2024-10-01 15:56:12.076824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.845 [2024-10-01 15:56:12.076862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.845 qpair failed and we were unable to recover it. 00:38:32.845 [2024-10-01 15:56:12.077272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.845 [2024-10-01 15:56:12.077312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.845 qpair failed and we were unable to recover it. 00:38:32.845 [2024-10-01 15:56:12.077706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.845 [2024-10-01 15:56:12.077745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.845 qpair failed and we were unable to recover it. 00:38:32.845 [2024-10-01 15:56:12.078182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.845 [2024-10-01 15:56:12.078222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.845 qpair failed and we were unable to recover it. 00:38:32.845 [2024-10-01 15:56:12.078607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.845 [2024-10-01 15:56:12.078646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.845 qpair failed and we were unable to recover it. 
00:38:32.845 [2024-10-01 15:56:12.079026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.845 [2024-10-01 15:56:12.079066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.845 qpair failed and we were unable to recover it. 00:38:32.845 [2024-10-01 15:56:12.079343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.845 [2024-10-01 15:56:12.079392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.845 qpair failed and we were unable to recover it. 00:38:32.845 [2024-10-01 15:56:12.079647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.845 [2024-10-01 15:56:12.079686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.845 qpair failed and we were unable to recover it. 00:38:32.845 [2024-10-01 15:56:12.079975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.846 [2024-10-01 15:56:12.080019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.846 qpair failed and we were unable to recover it. 00:38:32.846 [2024-10-01 15:56:12.080442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.846 [2024-10-01 15:56:12.080480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.846 qpair failed and we were unable to recover it. 
00:38:32.846 [2024-10-01 15:56:12.080851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.846 [2024-10-01 15:56:12.080891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.846 qpair failed and we were unable to recover it. 00:38:32.846 [2024-10-01 15:56:12.081275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.846 [2024-10-01 15:56:12.081315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.846 qpair failed and we were unable to recover it. 00:38:32.846 [2024-10-01 15:56:12.081746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.846 [2024-10-01 15:56:12.081785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.846 qpair failed and we were unable to recover it. 00:38:32.846 [2024-10-01 15:56:12.082172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.846 [2024-10-01 15:56:12.082213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.846 qpair failed and we were unable to recover it. 00:38:32.846 [2024-10-01 15:56:12.082596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.846 [2024-10-01 15:56:12.082634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.846 qpair failed and we were unable to recover it. 
00:38:32.846 [2024-10-01 15:56:12.082919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.846 [2024-10-01 15:56:12.082960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.846 qpair failed and we were unable to recover it. 00:38:32.846 [2024-10-01 15:56:12.083357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.846 [2024-10-01 15:56:12.083396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.846 qpair failed and we were unable to recover it. 00:38:32.846 [2024-10-01 15:56:12.083667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.846 [2024-10-01 15:56:12.083706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.846 qpair failed and we were unable to recover it. 00:38:32.846 [2024-10-01 15:56:12.084086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.846 [2024-10-01 15:56:12.084126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.846 qpair failed and we were unable to recover it. 00:38:32.846 [2024-10-01 15:56:12.084393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.846 [2024-10-01 15:56:12.084432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.846 qpair failed and we were unable to recover it. 
00:38:32.846 [2024-10-01 15:56:12.084713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.846 [2024-10-01 15:56:12.084753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.846 qpair failed and we were unable to recover it. 00:38:32.846 [2024-10-01 15:56:12.085038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.846 [2024-10-01 15:56:12.085079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.846 qpair failed and we were unable to recover it. 00:38:32.846 [2024-10-01 15:56:12.085390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.846 [2024-10-01 15:56:12.085430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.846 qpair failed and we were unable to recover it. 00:38:32.846 [2024-10-01 15:56:12.085781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.846 [2024-10-01 15:56:12.085820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.846 qpair failed and we were unable to recover it. 00:38:32.846 [2024-10-01 15:56:12.086241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.846 [2024-10-01 15:56:12.086282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.846 qpair failed and we were unable to recover it. 
00:38:32.846 [2024-10-01 15:56:12.086649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.846 [2024-10-01 15:56:12.086688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.846 qpair failed and we were unable to recover it. 00:38:32.846 [2024-10-01 15:56:12.087068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.846 [2024-10-01 15:56:12.087107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.846 qpair failed and we were unable to recover it. 00:38:32.846 [2024-10-01 15:56:12.087354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.846 [2024-10-01 15:56:12.087393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.846 qpair failed and we were unable to recover it. 00:38:32.846 [2024-10-01 15:56:12.087523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.846 [2024-10-01 15:56:12.087569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.846 qpair failed and we were unable to recover it. 00:38:32.846 [2024-10-01 15:56:12.087943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.846 [2024-10-01 15:56:12.087984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.846 qpair failed and we were unable to recover it. 
00:38:32.846 [2024-10-01 15:56:12.088111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.846 [2024-10-01 15:56:12.088157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.846 qpair failed and we were unable to recover it. 00:38:32.846 [2024-10-01 15:56:12.088528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.846 [2024-10-01 15:56:12.088567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.846 qpair failed and we were unable to recover it. 00:38:32.846 [2024-10-01 15:56:12.088969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.846 [2024-10-01 15:56:12.089010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.846 qpair failed and we were unable to recover it. 00:38:32.846 [2024-10-01 15:56:12.089411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.846 [2024-10-01 15:56:12.089450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.846 qpair failed and we were unable to recover it. 00:38:32.846 [2024-10-01 15:56:12.089826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.846 [2024-10-01 15:56:12.089865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.846 qpair failed and we were unable to recover it. 
00:38:32.846 [2024-10-01 15:56:12.090313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.846 [2024-10-01 15:56:12.090354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.846 qpair failed and we were unable to recover it. 00:38:32.846 [2024-10-01 15:56:12.090610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.846 [2024-10-01 15:56:12.090651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.846 qpair failed and we were unable to recover it. 00:38:32.846 [2024-10-01 15:56:12.091082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.846 [2024-10-01 15:56:12.091122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.846 qpair failed and we were unable to recover it. 00:38:32.846 [2024-10-01 15:56:12.091512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.846 [2024-10-01 15:56:12.091550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.846 qpair failed and we were unable to recover it. 00:38:32.846 [2024-10-01 15:56:12.091796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.846 [2024-10-01 15:56:12.091834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.846 qpair failed and we were unable to recover it. 
00:38:32.846 [2024-10-01 15:56:12.092171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.846 [2024-10-01 15:56:12.092212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.846 qpair failed and we were unable to recover it. 00:38:32.846 [2024-10-01 15:56:12.092593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.846 [2024-10-01 15:56:12.092632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.846 qpair failed and we were unable to recover it. 00:38:32.846 [2024-10-01 15:56:12.093030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.846 [2024-10-01 15:56:12.093072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.846 qpair failed and we were unable to recover it. 00:38:32.846 [2024-10-01 15:56:12.093443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.846 [2024-10-01 15:56:12.093482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.846 qpair failed and we were unable to recover it. 00:38:32.846 [2024-10-01 15:56:12.093768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.846 [2024-10-01 15:56:12.093808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.846 qpair failed and we were unable to recover it. 
00:38:32.846 [2024-10-01 15:56:12.094205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.846 [2024-10-01 15:56:12.094245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.846 qpair failed and we were unable to recover it. 00:38:32.846 [2024-10-01 15:56:12.094601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.846 [2024-10-01 15:56:12.094647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.846 qpair failed and we were unable to recover it. 00:38:32.846 [2024-10-01 15:56:12.095058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.847 [2024-10-01 15:56:12.095098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.847 qpair failed and we were unable to recover it. 00:38:32.847 [2024-10-01 15:56:12.095480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.847 [2024-10-01 15:56:12.095519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.847 qpair failed and we were unable to recover it. 00:38:32.847 [2024-10-01 15:56:12.095780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.847 [2024-10-01 15:56:12.095820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.847 qpair failed and we were unable to recover it. 
00:38:32.847 [2024-10-01 15:56:12.096281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.847 [2024-10-01 15:56:12.096321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.847 qpair failed and we were unable to recover it. 00:38:32.847 [2024-10-01 15:56:12.096591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.847 [2024-10-01 15:56:12.096630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.847 qpair failed and we were unable to recover it. 00:38:32.847 [2024-10-01 15:56:12.096884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.847 [2024-10-01 15:56:12.096938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.847 qpair failed and we were unable to recover it. 00:38:32.847 [2024-10-01 15:56:12.097395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.847 [2024-10-01 15:56:12.097435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.847 qpair failed and we were unable to recover it. 00:38:32.847 [2024-10-01 15:56:12.097627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.847 [2024-10-01 15:56:12.097666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.847 qpair failed and we were unable to recover it. 
00:38:32.847 [2024-10-01 15:56:12.098019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.847 [2024-10-01 15:56:12.098059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.847 qpair failed and we were unable to recover it. 00:38:32.847 [2024-10-01 15:56:12.098452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.847 [2024-10-01 15:56:12.098492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.847 qpair failed and we were unable to recover it. 00:38:32.847 [2024-10-01 15:56:12.098759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.847 [2024-10-01 15:56:12.098798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.847 qpair failed and we were unable to recover it. 00:38:32.847 [2024-10-01 15:56:12.099185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.847 [2024-10-01 15:56:12.099226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.847 qpair failed and we were unable to recover it. 00:38:32.847 [2024-10-01 15:56:12.099474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.847 [2024-10-01 15:56:12.099515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.847 qpair failed and we were unable to recover it. 
00:38:32.847 [2024-10-01 15:56:12.099933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.847 [2024-10-01 15:56:12.099973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.847 qpair failed and we were unable to recover it. 00:38:32.847 [2024-10-01 15:56:12.100355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.847 [2024-10-01 15:56:12.100393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.847 qpair failed and we were unable to recover it. 00:38:32.847 [2024-10-01 15:56:12.100773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.847 [2024-10-01 15:56:12.100812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.847 qpair failed and we were unable to recover it. 00:38:32.847 [2024-10-01 15:56:12.101197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.847 [2024-10-01 15:56:12.101238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.847 qpair failed and we were unable to recover it. 00:38:32.847 [2024-10-01 15:56:12.101621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.847 [2024-10-01 15:56:12.101659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.847 qpair failed and we were unable to recover it. 
00:38:32.847 [2024-10-01 15:56:12.102006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.847 [2024-10-01 15:56:12.102047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.847 qpair failed and we were unable to recover it. 00:38:32.847 [2024-10-01 15:56:12.102403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.847 [2024-10-01 15:56:12.102442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.847 qpair failed and we were unable to recover it. 00:38:32.847 [2024-10-01 15:56:12.102827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.847 [2024-10-01 15:56:12.102865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.847 qpair failed and we were unable to recover it. 00:38:32.847 [2024-10-01 15:56:12.103258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.847 [2024-10-01 15:56:12.103297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.847 qpair failed and we were unable to recover it. 00:38:32.847 [2024-10-01 15:56:12.103679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.847 [2024-10-01 15:56:12.103719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.847 qpair failed and we were unable to recover it. 
00:38:32.847 [2024-10-01 15:56:12.104099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.847 [2024-10-01 15:56:12.104138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.847 qpair failed and we were unable to recover it.
00:38:32.847 [2024-10-01 15:56:12.104521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.847 [2024-10-01 15:56:12.104559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.847 qpair failed and we were unable to recover it.
00:38:32.847 [2024-10-01 15:56:12.104941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.847 [2024-10-01 15:56:12.104981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.847 qpair failed and we were unable to recover it.
00:38:32.847 [2024-10-01 15:56:12.105342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.847 [2024-10-01 15:56:12.105382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.847 qpair failed and we were unable to recover it.
00:38:32.847 [2024-10-01 15:56:12.105764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.847 [2024-10-01 15:56:12.105802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.847 qpair failed and we were unable to recover it.
00:38:32.847 [2024-10-01 15:56:12.106193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.847 [2024-10-01 15:56:12.106235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.847 qpair failed and we were unable to recover it.
00:38:32.847 [2024-10-01 15:56:12.106622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.847 [2024-10-01 15:56:12.106662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.847 qpair failed and we were unable to recover it.
00:38:32.847 [2024-10-01 15:56:12.107042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.847 [2024-10-01 15:56:12.107082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.847 qpair failed and we were unable to recover it.
00:38:32.847 [2024-10-01 15:56:12.107489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.847 [2024-10-01 15:56:12.107527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.847 qpair failed and we were unable to recover it.
00:38:32.847 [2024-10-01 15:56:12.107918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.847 [2024-10-01 15:56:12.107958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.847 qpair failed and we were unable to recover it.
00:38:32.847 [2024-10-01 15:56:12.108088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.847 [2024-10-01 15:56:12.108134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.847 qpair failed and we were unable to recover it.
00:38:32.847 [2024-10-01 15:56:12.108383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.847 [2024-10-01 15:56:12.108422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.847 qpair failed and we were unable to recover it.
00:38:32.847 [2024-10-01 15:56:12.108697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.847 [2024-10-01 15:56:12.108739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.847 qpair failed and we were unable to recover it.
00:38:32.847 [2024-10-01 15:56:12.109120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.847 [2024-10-01 15:56:12.109161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.847 qpair failed and we were unable to recover it.
00:38:32.847 [2024-10-01 15:56:12.109413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.847 [2024-10-01 15:56:12.109452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.847 qpair failed and we were unable to recover it.
00:38:32.847 [2024-10-01 15:56:12.109843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.847 [2024-10-01 15:56:12.109882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.847 qpair failed and we were unable to recover it.
00:38:32.848 [2024-10-01 15:56:12.110274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.848 [2024-10-01 15:56:12.110320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.848 qpair failed and we were unable to recover it.
00:38:32.848 [2024-10-01 15:56:12.110701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.848 [2024-10-01 15:56:12.110740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.848 qpair failed and we were unable to recover it.
00:38:32.848 [2024-10-01 15:56:12.111028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.848 [2024-10-01 15:56:12.111072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.848 qpair failed and we were unable to recover it.
00:38:32.848 [2024-10-01 15:56:12.111477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.848 [2024-10-01 15:56:12.111516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.848 qpair failed and we were unable to recover it.
00:38:32.848 [2024-10-01 15:56:12.111968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.848 [2024-10-01 15:56:12.112007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.848 qpair failed and we were unable to recover it.
00:38:32.848 [2024-10-01 15:56:12.112377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.848 [2024-10-01 15:56:12.112416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.848 qpair failed and we were unable to recover it.
00:38:32.848 [2024-10-01 15:56:12.112779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.848 [2024-10-01 15:56:12.112817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.848 qpair failed and we were unable to recover it.
00:38:32.848 [2024-10-01 15:56:12.113087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.848 [2024-10-01 15:56:12.113127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.848 qpair failed and we were unable to recover it.
00:38:32.848 [2024-10-01 15:56:12.113505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.848 [2024-10-01 15:56:12.113543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.848 qpair failed and we were unable to recover it.
00:38:32.848 [2024-10-01 15:56:12.113690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.848 [2024-10-01 15:56:12.113738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.848 qpair failed and we were unable to recover it.
00:38:32.848 [2024-10-01 15:56:12.114120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.848 [2024-10-01 15:56:12.114160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.848 qpair failed and we were unable to recover it.
00:38:32.848 [2024-10-01 15:56:12.114398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.848 [2024-10-01 15:56:12.114437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.848 qpair failed and we were unable to recover it.
00:38:32.848 [2024-10-01 15:56:12.114834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.848 [2024-10-01 15:56:12.114873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.848 qpair failed and we were unable to recover it.
00:38:32.848 [2024-10-01 15:56:12.115273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.848 [2024-10-01 15:56:12.115313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.848 qpair failed and we were unable to recover it.
00:38:32.848 [2024-10-01 15:56:12.115578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.848 [2024-10-01 15:56:12.115619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.848 qpair failed and we were unable to recover it.
00:38:32.848 [2024-10-01 15:56:12.116004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.848 [2024-10-01 15:56:12.116044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.848 qpair failed and we were unable to recover it.
00:38:32.848 [2024-10-01 15:56:12.116431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.848 [2024-10-01 15:56:12.116471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.848 qpair failed and we were unable to recover it.
00:38:32.848 [2024-10-01 15:56:12.116851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.848 [2024-10-01 15:56:12.116889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.848 qpair failed and we were unable to recover it.
00:38:32.848 [2024-10-01 15:56:12.117197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.848 [2024-10-01 15:56:12.117236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.848 qpair failed and we were unable to recover it.
00:38:32.848 [2024-10-01 15:56:12.117613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.848 [2024-10-01 15:56:12.117652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.848 qpair failed and we were unable to recover it.
00:38:32.848 [2024-10-01 15:56:12.117881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.848 [2024-10-01 15:56:12.117930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.848 qpair failed and we were unable to recover it.
00:38:32.848 [2024-10-01 15:56:12.118291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.848 [2024-10-01 15:56:12.118330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.848 qpair failed and we were unable to recover it.
00:38:32.848 [2024-10-01 15:56:12.118595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.848 [2024-10-01 15:56:12.118634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.848 qpair failed and we were unable to recover it.
00:38:32.848 [2024-10-01 15:56:12.119034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.848 [2024-10-01 15:56:12.119074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.848 qpair failed and we were unable to recover it.
00:38:32.848 [2024-10-01 15:56:12.119455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.848 [2024-10-01 15:56:12.119495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.848 qpair failed and we were unable to recover it.
00:38:32.848 [2024-10-01 15:56:12.119776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.848 [2024-10-01 15:56:12.119814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.848 qpair failed and we were unable to recover it.
00:38:32.848 [2024-10-01 15:56:12.120082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.848 [2024-10-01 15:56:12.120122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.848 qpair failed and we were unable to recover it.
00:38:32.848 [2024-10-01 15:56:12.120563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.848 [2024-10-01 15:56:12.120603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.848 qpair failed and we were unable to recover it.
00:38:32.848 [2024-10-01 15:56:12.120880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.848 [2024-10-01 15:56:12.120931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.848 qpair failed and we were unable to recover it.
00:38:32.848 [2024-10-01 15:56:12.121298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.848 [2024-10-01 15:56:12.121337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.848 qpair failed and we were unable to recover it.
00:38:32.848 [2024-10-01 15:56:12.121593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.848 [2024-10-01 15:56:12.121633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.848 qpair failed and we were unable to recover it.
00:38:32.848 [2024-10-01 15:56:12.121936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.848 [2024-10-01 15:56:12.121978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.848 qpair failed and we were unable to recover it.
00:38:32.848 [2024-10-01 15:56:12.122354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.848 [2024-10-01 15:56:12.122394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.848 qpair failed and we were unable to recover it.
00:38:32.848 [2024-10-01 15:56:12.122663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.848 [2024-10-01 15:56:12.122701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.848 qpair failed and we were unable to recover it.
00:38:32.848 [2024-10-01 15:56:12.123091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.848 [2024-10-01 15:56:12.123131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.848 qpair failed and we were unable to recover it.
00:38:32.848 [2024-10-01 15:56:12.123413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.848 [2024-10-01 15:56:12.123455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.848 qpair failed and we were unable to recover it.
00:38:32.848 [2024-10-01 15:56:12.123584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.848 [2024-10-01 15:56:12.123630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.848 qpair failed and we were unable to recover it.
00:38:32.848 [2024-10-01 15:56:12.123932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.848 [2024-10-01 15:56:12.123973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.848 qpair failed and we were unable to recover it.
00:38:32.848 [2024-10-01 15:56:12.124375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.848 [2024-10-01 15:56:12.124414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.849 qpair failed and we were unable to recover it.
00:38:32.849 [2024-10-01 15:56:12.124801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.849 [2024-10-01 15:56:12.124840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.849 qpair failed and we were unable to recover it.
00:38:32.849 [2024-10-01 15:56:12.125232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.849 [2024-10-01 15:56:12.125280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.849 qpair failed and we were unable to recover it.
00:38:32.849 [2024-10-01 15:56:12.125707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.849 [2024-10-01 15:56:12.125745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.849 qpair failed and we were unable to recover it.
00:38:32.849 [2024-10-01 15:56:12.125876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.849 [2024-10-01 15:56:12.125937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.849 qpair failed and we were unable to recover it.
00:38:32.849 [2024-10-01 15:56:12.126275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.849 [2024-10-01 15:56:12.126316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.849 qpair failed and we were unable to recover it.
00:38:32.849 [2024-10-01 15:56:12.126628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.849 [2024-10-01 15:56:12.126666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.849 qpair failed and we were unable to recover it.
00:38:32.849 [2024-10-01 15:56:12.127058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.849 [2024-10-01 15:56:12.127097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.849 qpair failed and we were unable to recover it.
00:38:32.849 [2024-10-01 15:56:12.127478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.849 [2024-10-01 15:56:12.127516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.849 qpair failed and we were unable to recover it.
00:38:32.849 [2024-10-01 15:56:12.127908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.849 [2024-10-01 15:56:12.127952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.849 qpair failed and we were unable to recover it.
00:38:32.849 [2024-10-01 15:56:12.128210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.849 [2024-10-01 15:56:12.128249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.849 qpair failed and we were unable to recover it.
00:38:32.849 [2024-10-01 15:56:12.128511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.849 [2024-10-01 15:56:12.128549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.849 qpair failed and we were unable to recover it.
00:38:32.849 [2024-10-01 15:56:12.128931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.849 [2024-10-01 15:56:12.128972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.849 qpair failed and we were unable to recover it.
00:38:32.849 [2024-10-01 15:56:12.129233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.849 [2024-10-01 15:56:12.129272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.849 qpair failed and we were unable to recover it.
00:38:32.849 [2024-10-01 15:56:12.129730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.849 [2024-10-01 15:56:12.129768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.849 qpair failed and we were unable to recover it.
00:38:32.849 [2024-10-01 15:56:12.130150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.849 [2024-10-01 15:56:12.130190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.849 qpair failed and we were unable to recover it.
00:38:32.849 [2024-10-01 15:56:12.130605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.849 [2024-10-01 15:56:12.130644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.849 qpair failed and we were unable to recover it.
00:38:32.849 [2024-10-01 15:56:12.130909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.849 [2024-10-01 15:56:12.130952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.849 qpair failed and we were unable to recover it.
00:38:32.849 [2024-10-01 15:56:12.131198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.849 [2024-10-01 15:56:12.131237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.849 qpair failed and we were unable to recover it.
00:38:32.849 [2024-10-01 15:56:12.131663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.849 [2024-10-01 15:56:12.131703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.849 qpair failed and we were unable to recover it.
00:38:32.849 [2024-10-01 15:56:12.132085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.849 [2024-10-01 15:56:12.132125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.849 qpair failed and we were unable to recover it.
00:38:32.849 [2024-10-01 15:56:12.132508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.849 [2024-10-01 15:56:12.132546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.849 qpair failed and we were unable to recover it.
00:38:32.849 [2024-10-01 15:56:12.132798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.849 [2024-10-01 15:56:12.132837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.849 qpair failed and we were unable to recover it.
00:38:32.849 [2024-10-01 15:56:12.133228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.849 [2024-10-01 15:56:12.133269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.849 qpair failed and we were unable to recover it.
00:38:32.849 [2024-10-01 15:56:12.133532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.849 [2024-10-01 15:56:12.133570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.849 qpair failed and we were unable to recover it.
00:38:32.849 [2024-10-01 15:56:12.133857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.849 [2024-10-01 15:56:12.133906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.849 qpair failed and we were unable to recover it.
00:38:32.849 [2024-10-01 15:56:12.134168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.849 [2024-10-01 15:56:12.134211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.849 qpair failed and we were unable to recover it.
00:38:32.849 [2024-10-01 15:56:12.134594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.849 [2024-10-01 15:56:12.134632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.849 qpair failed and we were unable to recover it.
00:38:32.849 [2024-10-01 15:56:12.135016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.849 [2024-10-01 15:56:12.135056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.849 qpair failed and we were unable to recover it.
00:38:32.849 [2024-10-01 15:56:12.135416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.849 [2024-10-01 15:56:12.135455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.849 qpair failed and we were unable to recover it.
00:38:32.849 [2024-10-01 15:56:12.135836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.849 [2024-10-01 15:56:12.135875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.849 qpair failed and we were unable to recover it.
00:38:32.849 [2024-10-01 15:56:12.136070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.849 [2024-10-01 15:56:12.136114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.849 qpair failed and we were unable to recover it.
00:38:32.849 [2024-10-01 15:56:12.136496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.849 [2024-10-01 15:56:12.136536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.849 qpair failed and we were unable to recover it.
00:38:32.849 [2024-10-01 15:56:12.136697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.849 [2024-10-01 15:56:12.136748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.849 qpair failed and we were unable to recover it.
00:38:32.850 [2024-10-01 15:56:12.136879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.850 [2024-10-01 15:56:12.136942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.850 qpair failed and we were unable to recover it.
00:38:32.850 [2024-10-01 15:56:12.137208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.850 [2024-10-01 15:56:12.137247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.850 qpair failed and we were unable to recover it.
00:38:32.850 [2024-10-01 15:56:12.137510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.850 [2024-10-01 15:56:12.137549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.850 qpair failed and we were unable to recover it.
00:38:32.850 [2024-10-01 15:56:12.137838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.850 [2024-10-01 15:56:12.137877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.850 qpair failed and we were unable to recover it.
00:38:32.850 [2024-10-01 15:56:12.138283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.850 [2024-10-01 15:56:12.138322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.850 qpair failed and we were unable to recover it.
00:38:32.850 [2024-10-01 15:56:12.138576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.850 [2024-10-01 15:56:12.138614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.850 qpair failed and we were unable to recover it.
00:38:32.850 [2024-10-01 15:56:12.139003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.850 [2024-10-01 15:56:12.139043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.850 qpair failed and we were unable to recover it.
00:38:32.850 [2024-10-01 15:56:12.139295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.850 [2024-10-01 15:56:12.139334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.850 qpair failed and we were unable to recover it. 00:38:32.850 [2024-10-01 15:56:12.139585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.850 [2024-10-01 15:56:12.139631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.850 qpair failed and we were unable to recover it. 00:38:32.850 [2024-10-01 15:56:12.140021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.850 [2024-10-01 15:56:12.140061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.850 qpair failed and we were unable to recover it. 00:38:32.850 [2024-10-01 15:56:12.140290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.850 [2024-10-01 15:56:12.140329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.850 qpair failed and we were unable to recover it. 00:38:32.850 [2024-10-01 15:56:12.140669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.850 [2024-10-01 15:56:12.140707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.850 qpair failed and we were unable to recover it. 
00:38:32.850 [2024-10-01 15:56:12.141049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.850 [2024-10-01 15:56:12.141089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.850 qpair failed and we were unable to recover it. 00:38:32.850 [2024-10-01 15:56:12.141506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.850 [2024-10-01 15:56:12.141545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.850 qpair failed and we were unable to recover it. 00:38:32.850 [2024-10-01 15:56:12.141886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.850 [2024-10-01 15:56:12.141935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.850 qpair failed and we were unable to recover it. 00:38:32.850 [2024-10-01 15:56:12.142334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.850 [2024-10-01 15:56:12.142374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.850 qpair failed and we were unable to recover it. 00:38:32.850 [2024-10-01 15:56:12.142737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.850 [2024-10-01 15:56:12.142778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.850 qpair failed and we were unable to recover it. 
00:38:32.850 [2024-10-01 15:56:12.142958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.850 [2024-10-01 15:56:12.143009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.850 qpair failed and we were unable to recover it. 00:38:32.850 [2024-10-01 15:56:12.143382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.850 [2024-10-01 15:56:12.143422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.850 qpair failed and we were unable to recover it. 00:38:32.850 [2024-10-01 15:56:12.143811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.850 [2024-10-01 15:56:12.143849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.850 qpair failed and we were unable to recover it. 00:38:32.850 [2024-10-01 15:56:12.144243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.850 [2024-10-01 15:56:12.144283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.850 qpair failed and we were unable to recover it. 00:38:32.850 [2024-10-01 15:56:12.144667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.850 [2024-10-01 15:56:12.144706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.850 qpair failed and we were unable to recover it. 
00:38:32.850 [2024-10-01 15:56:12.145151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.850 [2024-10-01 15:56:12.145191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.850 qpair failed and we were unable to recover it. 00:38:32.850 [2024-10-01 15:56:12.145573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.850 [2024-10-01 15:56:12.145612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.850 qpair failed and we were unable to recover it. 00:38:32.850 [2024-10-01 15:56:12.146003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.850 [2024-10-01 15:56:12.146043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.850 qpair failed and we were unable to recover it. 00:38:32.850 [2024-10-01 15:56:12.146440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.850 [2024-10-01 15:56:12.146479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.850 qpair failed and we were unable to recover it. 00:38:32.850 [2024-10-01 15:56:12.146849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.850 [2024-10-01 15:56:12.146888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.850 qpair failed and we were unable to recover it. 
00:38:32.850 [2024-10-01 15:56:12.147136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.850 [2024-10-01 15:56:12.147176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.850 qpair failed and we were unable to recover it. 00:38:32.850 [2024-10-01 15:56:12.147565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.850 [2024-10-01 15:56:12.147605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.850 qpair failed and we were unable to recover it. 00:38:32.850 [2024-10-01 15:56:12.147858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.850 [2024-10-01 15:56:12.147910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.850 qpair failed and we were unable to recover it. 00:38:32.850 [2024-10-01 15:56:12.148175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.850 [2024-10-01 15:56:12.148213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.850 qpair failed and we were unable to recover it. 00:38:32.850 [2024-10-01 15:56:12.148476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.850 [2024-10-01 15:56:12.148515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.850 qpair failed and we were unable to recover it. 
00:38:32.850 [2024-10-01 15:56:12.148926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.850 [2024-10-01 15:56:12.148965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.850 qpair failed and we were unable to recover it. 00:38:32.850 [2024-10-01 15:56:12.149196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.850 [2024-10-01 15:56:12.149235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.850 qpair failed and we were unable to recover it. 00:38:32.850 [2024-10-01 15:56:12.149496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.850 [2024-10-01 15:56:12.149535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.850 qpair failed and we were unable to recover it. 00:38:32.850 [2024-10-01 15:56:12.149946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.850 [2024-10-01 15:56:12.149987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.850 qpair failed and we were unable to recover it. 00:38:32.850 [2024-10-01 15:56:12.150380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.850 [2024-10-01 15:56:12.150419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.850 qpair failed and we were unable to recover it. 
00:38:32.850 [2024-10-01 15:56:12.150649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.850 [2024-10-01 15:56:12.150688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.850 qpair failed and we were unable to recover it. 00:38:32.850 [2024-10-01 15:56:12.150958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.851 [2024-10-01 15:56:12.151000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.851 qpair failed and we were unable to recover it. 00:38:32.851 [2024-10-01 15:56:12.151417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.851 [2024-10-01 15:56:12.151456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.851 qpair failed and we were unable to recover it. 00:38:32.851 [2024-10-01 15:56:12.151718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.851 [2024-10-01 15:56:12.151756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.851 qpair failed and we were unable to recover it. 00:38:32.851 [2024-10-01 15:56:12.152168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.851 [2024-10-01 15:56:12.152208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.851 qpair failed and we were unable to recover it. 
00:38:32.851 [2024-10-01 15:56:12.152540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.851 [2024-10-01 15:56:12.152578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.851 qpair failed and we were unable to recover it. 00:38:32.851 [2024-10-01 15:56:12.152982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.851 [2024-10-01 15:56:12.153022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.851 qpair failed and we were unable to recover it. 00:38:32.851 [2024-10-01 15:56:12.153285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.851 [2024-10-01 15:56:12.153324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.851 qpair failed and we were unable to recover it. 00:38:32.851 [2024-10-01 15:56:12.153707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.851 [2024-10-01 15:56:12.153746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.851 qpair failed and we were unable to recover it. 00:38:32.851 [2024-10-01 15:56:12.154168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.851 [2024-10-01 15:56:12.154207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.851 qpair failed and we were unable to recover it. 
00:38:32.851 [2024-10-01 15:56:12.154593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.851 [2024-10-01 15:56:12.154631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.851 qpair failed and we were unable to recover it. 00:38:32.851 [2024-10-01 15:56:12.155014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.851 [2024-10-01 15:56:12.155062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.851 qpair failed and we were unable to recover it. 00:38:32.851 [2024-10-01 15:56:12.155416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.851 [2024-10-01 15:56:12.155455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.851 qpair failed and we were unable to recover it. 00:38:32.851 [2024-10-01 15:56:12.155793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.851 [2024-10-01 15:56:12.155832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.851 qpair failed and we were unable to recover it. 00:38:32.851 [2024-10-01 15:56:12.156225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.851 [2024-10-01 15:56:12.156265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.851 qpair failed and we were unable to recover it. 
00:38:32.851 [2024-10-01 15:56:12.156604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.851 [2024-10-01 15:56:12.156642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.851 qpair failed and we were unable to recover it. 00:38:32.851 [2024-10-01 15:56:12.156805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.851 [2024-10-01 15:56:12.156855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.851 qpair failed and we were unable to recover it. 00:38:32.851 [2024-10-01 15:56:12.157254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.851 [2024-10-01 15:56:12.157294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.851 qpair failed and we were unable to recover it. 00:38:32.851 [2024-10-01 15:56:12.157676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.851 [2024-10-01 15:56:12.157715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.851 qpair failed and we were unable to recover it. 00:38:32.851 [2024-10-01 15:56:12.157960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.851 [2024-10-01 15:56:12.158001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.851 qpair failed and we were unable to recover it. 
00:38:32.851 [2024-10-01 15:56:12.158410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.851 [2024-10-01 15:56:12.158451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.851 qpair failed and we were unable to recover it. 00:38:32.851 [2024-10-01 15:56:12.158743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.851 [2024-10-01 15:56:12.158781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.851 qpair failed and we were unable to recover it. 00:38:32.851 [2024-10-01 15:56:12.159213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.851 [2024-10-01 15:56:12.159255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.851 qpair failed and we were unable to recover it. 00:38:32.851 [2024-10-01 15:56:12.159645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.851 [2024-10-01 15:56:12.159684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.851 qpair failed and we were unable to recover it. 00:38:32.851 [2024-10-01 15:56:12.160072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.851 [2024-10-01 15:56:12.160112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.851 qpair failed and we were unable to recover it. 
00:38:32.851 [2024-10-01 15:56:12.160378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.851 [2024-10-01 15:56:12.160418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.851 qpair failed and we were unable to recover it. 00:38:32.851 [2024-10-01 15:56:12.160814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.851 [2024-10-01 15:56:12.160852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.851 qpair failed and we were unable to recover it. 00:38:32.851 [2024-10-01 15:56:12.161246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.851 [2024-10-01 15:56:12.161286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.851 qpair failed and we were unable to recover it. 00:38:32.851 [2024-10-01 15:56:12.161546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.851 [2024-10-01 15:56:12.161587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.851 qpair failed and we were unable to recover it. 00:38:32.851 [2024-10-01 15:56:12.161958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.851 [2024-10-01 15:56:12.161998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.851 qpair failed and we were unable to recover it. 
00:38:32.851 [2024-10-01 15:56:12.162397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.851 [2024-10-01 15:56:12.162436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.851 qpair failed and we were unable to recover it. 00:38:32.851 [2024-10-01 15:56:12.162690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.851 [2024-10-01 15:56:12.162728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.851 qpair failed and we were unable to recover it. 00:38:32.851 [2024-10-01 15:56:12.162989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.851 [2024-10-01 15:56:12.163029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.851 qpair failed and we were unable to recover it. 00:38:32.851 [2024-10-01 15:56:12.163268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.851 [2024-10-01 15:56:12.163308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.851 qpair failed and we were unable to recover it. 00:38:32.851 [2024-10-01 15:56:12.163687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.851 [2024-10-01 15:56:12.163726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.851 qpair failed and we were unable to recover it. 
00:38:32.851 [2024-10-01 15:56:12.163986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.851 [2024-10-01 15:56:12.164025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.851 qpair failed and we were unable to recover it. 00:38:32.851 [2024-10-01 15:56:12.164255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.851 [2024-10-01 15:56:12.164294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.851 qpair failed and we were unable to recover it. 00:38:32.851 [2024-10-01 15:56:12.164670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.851 [2024-10-01 15:56:12.164708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.851 qpair failed and we were unable to recover it. 00:38:32.851 [2024-10-01 15:56:12.165103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.851 [2024-10-01 15:56:12.165144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.851 qpair failed and we were unable to recover it. 00:38:32.851 [2024-10-01 15:56:12.165433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.851 [2024-10-01 15:56:12.165472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.851 qpair failed and we were unable to recover it. 
00:38:32.852 [2024-10-01 15:56:12.165852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.852 [2024-10-01 15:56:12.165892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.852 qpair failed and we were unable to recover it. 00:38:32.852 [2024-10-01 15:56:12.166322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.852 [2024-10-01 15:56:12.166360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.852 qpair failed and we were unable to recover it. 00:38:32.852 [2024-10-01 15:56:12.166746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.852 [2024-10-01 15:56:12.166785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.852 qpair failed and we were unable to recover it. 00:38:32.852 [2024-10-01 15:56:12.167184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.852 [2024-10-01 15:56:12.167226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.852 qpair failed and we were unable to recover it. 00:38:32.852 [2024-10-01 15:56:12.167603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.852 [2024-10-01 15:56:12.167641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.852 qpair failed and we were unable to recover it. 
00:38:32.852 [2024-10-01 15:56:12.168041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.852 [2024-10-01 15:56:12.168081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.852 qpair failed and we were unable to recover it. 00:38:32.852 [2024-10-01 15:56:12.168346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.852 [2024-10-01 15:56:12.168386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.852 qpair failed and we were unable to recover it. 00:38:32.852 [2024-10-01 15:56:12.168793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.852 [2024-10-01 15:56:12.168831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.852 qpair failed and we were unable to recover it. 00:38:32.852 [2024-10-01 15:56:12.169246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.852 [2024-10-01 15:56:12.169286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.852 qpair failed and we were unable to recover it. 00:38:32.852 [2024-10-01 15:56:12.169633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.852 [2024-10-01 15:56:12.169672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.852 qpair failed and we were unable to recover it. 
00:38:32.852 [2024-10-01 15:56:12.169861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.852 [2024-10-01 15:56:12.169908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.852 qpair failed and we were unable to recover it. 00:38:32.852 [2024-10-01 15:56:12.170293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.852 [2024-10-01 15:56:12.170339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.852 qpair failed and we were unable to recover it. 00:38:32.852 [2024-10-01 15:56:12.170765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.852 [2024-10-01 15:56:12.170804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.852 qpair failed and we were unable to recover it. 00:38:32.852 [2024-10-01 15:56:12.171055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.852 [2024-10-01 15:56:12.171097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.852 qpair failed and we were unable to recover it. 00:38:32.852 [2024-10-01 15:56:12.171477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.852 [2024-10-01 15:56:12.171516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.852 qpair failed and we were unable to recover it. 
00:38:32.852 [2024-10-01 15:56:12.171781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.852 [2024-10-01 15:56:12.171819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.852 qpair failed and we were unable to recover it. 00:38:32.852 [2024-10-01 15:56:12.172120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.852 [2024-10-01 15:56:12.172161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.852 qpair failed and we were unable to recover it. 00:38:32.852 [2024-10-01 15:56:12.172527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.852 [2024-10-01 15:56:12.172565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.852 qpair failed and we were unable to recover it. 00:38:32.852 [2024-10-01 15:56:12.172920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.852 [2024-10-01 15:56:12.172960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.852 qpair failed and we were unable to recover it. 00:38:32.852 [2024-10-01 15:56:12.173359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.852 [2024-10-01 15:56:12.173398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.852 qpair failed and we were unable to recover it. 
00:38:32.852 [2024-10-01 15:56:12.173782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.852 [2024-10-01 15:56:12.173822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.852 qpair failed and we were unable to recover it. 00:38:32.852 [2024-10-01 15:56:12.174201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.852 [2024-10-01 15:56:12.174241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.852 qpair failed and we were unable to recover it. 00:38:32.852 [2024-10-01 15:56:12.174606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.852 [2024-10-01 15:56:12.174644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.852 qpair failed and we were unable to recover it. 00:38:32.852 [2024-10-01 15:56:12.175032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.852 [2024-10-01 15:56:12.175073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.852 qpair failed and we were unable to recover it. 00:38:32.852 [2024-10-01 15:56:12.175196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.852 [2024-10-01 15:56:12.175243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.852 qpair failed and we were unable to recover it. 
00:38:32.852 [2024-10-01 15:56:12.175650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.852 [2024-10-01 15:56:12.175690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.852 qpair failed and we were unable to recover it. 00:38:32.852 [2024-10-01 15:56:12.175926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.852 [2024-10-01 15:56:12.175966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.852 qpair failed and we were unable to recover it. 00:38:32.852 [2024-10-01 15:56:12.176332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.852 [2024-10-01 15:56:12.176372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.852 qpair failed and we were unable to recover it. 00:38:32.852 [2024-10-01 15:56:12.176745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.852 [2024-10-01 15:56:12.176783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.852 qpair failed and we were unable to recover it. 00:38:32.852 [2024-10-01 15:56:12.177182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.852 [2024-10-01 15:56:12.177222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.852 qpair failed and we were unable to recover it. 
00:38:32.852 [2024-10-01 15:56:12.177477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.852 [2024-10-01 15:56:12.177515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.852 qpair failed and we were unable to recover it. 00:38:32.852 [2024-10-01 15:56:12.177876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.852 [2024-10-01 15:56:12.177924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.852 qpair failed and we were unable to recover it. 00:38:32.852 [2024-10-01 15:56:12.178137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.852 [2024-10-01 15:56:12.178176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.852 qpair failed and we were unable to recover it. 00:38:32.852 [2024-10-01 15:56:12.178442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.852 [2024-10-01 15:56:12.178482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.852 qpair failed and we were unable to recover it. 00:38:32.852 [2024-10-01 15:56:12.178874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.852 [2024-10-01 15:56:12.178922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.852 qpair failed and we were unable to recover it. 
00:38:32.852 [2024-10-01 15:56:12.179306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.852 [2024-10-01 15:56:12.179345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.852 qpair failed and we were unable to recover it. 00:38:32.852 [2024-10-01 15:56:12.179743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.852 [2024-10-01 15:56:12.179781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.852 qpair failed and we were unable to recover it. 00:38:32.852 [2024-10-01 15:56:12.180173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.852 [2024-10-01 15:56:12.180214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.852 qpair failed and we were unable to recover it. 00:38:32.852 [2024-10-01 15:56:12.180561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.853 [2024-10-01 15:56:12.180601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.853 qpair failed and we were unable to recover it. 00:38:32.853 [2024-10-01 15:56:12.180954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.853 [2024-10-01 15:56:12.180994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.853 qpair failed and we were unable to recover it. 
00:38:32.853 [2024-10-01 15:56:12.181370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.853 [2024-10-01 15:56:12.181409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.853 qpair failed and we were unable to recover it. 00:38:32.853 [2024-10-01 15:56:12.181633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.853 [2024-10-01 15:56:12.181674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.853 qpair failed and we were unable to recover it. 00:38:32.853 [2024-10-01 15:56:12.182068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.853 [2024-10-01 15:56:12.182107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.853 qpair failed and we were unable to recover it. 00:38:32.853 [2024-10-01 15:56:12.182338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.853 [2024-10-01 15:56:12.182379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.853 qpair failed and we were unable to recover it. 00:38:32.853 [2024-10-01 15:56:12.182770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.853 [2024-10-01 15:56:12.182809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.853 qpair failed and we were unable to recover it. 
00:38:32.853 [2024-10-01 15:56:12.183171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.853 [2024-10-01 15:56:12.183210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.853 qpair failed and we were unable to recover it. 00:38:32.853 [2024-10-01 15:56:12.183463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.853 [2024-10-01 15:56:12.183502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.853 qpair failed and we were unable to recover it. 00:38:32.853 [2024-10-01 15:56:12.183874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.853 [2024-10-01 15:56:12.183924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.853 qpair failed and we were unable to recover it. 00:38:32.853 [2024-10-01 15:56:12.184298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.853 [2024-10-01 15:56:12.184337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.853 qpair failed and we were unable to recover it. 00:38:32.853 [2024-10-01 15:56:12.184589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.853 [2024-10-01 15:56:12.184631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.853 qpair failed and we were unable to recover it. 
00:38:32.853 [2024-10-01 15:56:12.185010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.853 [2024-10-01 15:56:12.185050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.853 qpair failed and we were unable to recover it. 00:38:32.853 [2024-10-01 15:56:12.185303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.853 [2024-10-01 15:56:12.185352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.853 qpair failed and we were unable to recover it. 00:38:32.853 [2024-10-01 15:56:12.185741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.853 [2024-10-01 15:56:12.185780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.853 qpair failed and we were unable to recover it. 00:38:32.853 [2024-10-01 15:56:12.186161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.853 [2024-10-01 15:56:12.186202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.853 qpair failed and we were unable to recover it. 00:38:32.853 [2024-10-01 15:56:12.186592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.853 [2024-10-01 15:56:12.186632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.853 qpair failed and we were unable to recover it. 
00:38:32.853 [2024-10-01 15:56:12.187014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.853 [2024-10-01 15:56:12.187055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.853 qpair failed and we were unable to recover it. 00:38:32.853 [2024-10-01 15:56:12.187437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.853 [2024-10-01 15:56:12.187476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.853 qpair failed and we were unable to recover it. 00:38:32.853 [2024-10-01 15:56:12.187842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.853 [2024-10-01 15:56:12.187881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.853 qpair failed and we were unable to recover it. 00:38:32.853 [2024-10-01 15:56:12.188135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.853 [2024-10-01 15:56:12.188174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.853 qpair failed and we were unable to recover it. 00:38:32.853 [2024-10-01 15:56:12.188301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.853 [2024-10-01 15:56:12.188347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.853 qpair failed and we were unable to recover it. 
00:38:32.853 [2024-10-01 15:56:12.188620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.853 [2024-10-01 15:56:12.188660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.853 qpair failed and we were unable to recover it. 00:38:32.853 [2024-10-01 15:56:12.188945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.853 [2024-10-01 15:56:12.188986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.853 qpair failed and we were unable to recover it. 00:38:32.853 [2024-10-01 15:56:12.189232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.853 [2024-10-01 15:56:12.189271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.853 qpair failed and we were unable to recover it. 00:38:32.853 [2024-10-01 15:56:12.189671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.853 [2024-10-01 15:56:12.189709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.853 qpair failed and we were unable to recover it. 00:38:32.853 [2024-10-01 15:56:12.190088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.853 [2024-10-01 15:56:12.190127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.853 qpair failed and we were unable to recover it. 
00:38:32.853 [2024-10-01 15:56:12.190367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.853 [2024-10-01 15:56:12.190409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.853 qpair failed and we were unable to recover it. 00:38:32.853 [2024-10-01 15:56:12.190783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.853 [2024-10-01 15:56:12.190822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.853 qpair failed and we were unable to recover it. 00:38:32.853 [2024-10-01 15:56:12.191214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.853 [2024-10-01 15:56:12.191255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.853 qpair failed and we were unable to recover it. 00:38:32.853 [2024-10-01 15:56:12.191635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.853 [2024-10-01 15:56:12.191674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.853 qpair failed and we were unable to recover it. 00:38:32.853 [2024-10-01 15:56:12.192058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.853 [2024-10-01 15:56:12.192099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.853 qpair failed and we were unable to recover it. 
00:38:32.853 [2024-10-01 15:56:12.192478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.853 [2024-10-01 15:56:12.192516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.853 qpair failed and we were unable to recover it. 00:38:32.853 [2024-10-01 15:56:12.192917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.853 [2024-10-01 15:56:12.192957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.853 qpair failed and we were unable to recover it. 00:38:32.853 [2024-10-01 15:56:12.193322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.853 [2024-10-01 15:56:12.193361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.853 qpair failed and we were unable to recover it. 00:38:32.853 [2024-10-01 15:56:12.193708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.854 [2024-10-01 15:56:12.193746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.854 qpair failed and we were unable to recover it. 00:38:32.854 [2024-10-01 15:56:12.194129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.854 [2024-10-01 15:56:12.194170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.854 qpair failed and we were unable to recover it. 
00:38:32.854 [2024-10-01 15:56:12.194549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.854 [2024-10-01 15:56:12.194588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.854 qpair failed and we were unable to recover it. 00:38:32.854 [2024-10-01 15:56:12.194969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.854 [2024-10-01 15:56:12.195008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.854 qpair failed and we were unable to recover it. 00:38:32.854 [2024-10-01 15:56:12.195408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.854 [2024-10-01 15:56:12.195447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.854 qpair failed and we were unable to recover it. 00:38:32.854 [2024-10-01 15:56:12.195833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.854 [2024-10-01 15:56:12.195873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.854 qpair failed and we were unable to recover it. 00:38:32.854 [2024-10-01 15:56:12.196119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.854 [2024-10-01 15:56:12.196158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.854 qpair failed and we were unable to recover it. 
00:38:32.854 [2024-10-01 15:56:12.196597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.854 [2024-10-01 15:56:12.196636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.854 qpair failed and we were unable to recover it. 00:38:32.854 [2024-10-01 15:56:12.196918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.854 [2024-10-01 15:56:12.196970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.854 qpair failed and we were unable to recover it. 00:38:32.854 [2024-10-01 15:56:12.197218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.854 [2024-10-01 15:56:12.197257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.854 qpair failed and we were unable to recover it. 00:38:32.854 [2024-10-01 15:56:12.197683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.854 [2024-10-01 15:56:12.197722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.854 qpair failed and we were unable to recover it. 00:38:32.854 [2024-10-01 15:56:12.197976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.854 [2024-10-01 15:56:12.198017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.854 qpair failed and we were unable to recover it. 
00:38:32.854 [2024-10-01 15:56:12.198419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.854 [2024-10-01 15:56:12.198458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.854 qpair failed and we were unable to recover it. 00:38:32.854 [2024-10-01 15:56:12.198722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.854 [2024-10-01 15:56:12.198760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.854 qpair failed and we were unable to recover it. 00:38:32.854 [2024-10-01 15:56:12.198992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.854 [2024-10-01 15:56:12.199033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.854 qpair failed and we were unable to recover it. 00:38:32.854 [2024-10-01 15:56:12.199377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.854 [2024-10-01 15:56:12.199416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.854 qpair failed and we were unable to recover it. 00:38:32.854 [2024-10-01 15:56:12.199667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.854 [2024-10-01 15:56:12.199706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.854 qpair failed and we were unable to recover it. 
00:38:32.854 [2024-10-01 15:56:12.200063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.854 [2024-10-01 15:56:12.200102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.854 qpair failed and we were unable to recover it. 00:38:32.854 [2024-10-01 15:56:12.200355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.854 [2024-10-01 15:56:12.200402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.854 qpair failed and we were unable to recover it. 00:38:32.854 [2024-10-01 15:56:12.200809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.854 [2024-10-01 15:56:12.200848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.854 qpair failed and we were unable to recover it. 00:38:32.854 [2024-10-01 15:56:12.201251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.854 [2024-10-01 15:56:12.201291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.854 qpair failed and we were unable to recover it. 00:38:32.854 [2024-10-01 15:56:12.201543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.854 [2024-10-01 15:56:12.201583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.854 qpair failed and we were unable to recover it. 
00:38:32.854 [2024-10-01 15:56:12.202006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.854 [2024-10-01 15:56:12.202047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.854 qpair failed and we were unable to recover it. 00:38:32.854 [2024-10-01 15:56:12.202392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.854 [2024-10-01 15:56:12.202430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.854 qpair failed and we were unable to recover it. 00:38:32.854 [2024-10-01 15:56:12.202692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.854 [2024-10-01 15:56:12.202731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.854 qpair failed and we were unable to recover it. 00:38:32.854 [2024-10-01 15:56:12.203099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.854 [2024-10-01 15:56:12.203139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.854 qpair failed and we were unable to recover it. 00:38:32.854 [2024-10-01 15:56:12.203521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.854 [2024-10-01 15:56:12.203559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.854 qpair failed and we were unable to recover it. 
00:38:32.854 [2024-10-01 15:56:12.203951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.854 [2024-10-01 15:56:12.203992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.854 qpair failed and we were unable to recover it. 00:38:32.854 [2024-10-01 15:56:12.204274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.854 [2024-10-01 15:56:12.204315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.854 qpair failed and we were unable to recover it. 00:38:32.854 [2024-10-01 15:56:12.204586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.854 [2024-10-01 15:56:12.204624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.854 qpair failed and we were unable to recover it. 00:38:32.854 [2024-10-01 15:56:12.205030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.854 [2024-10-01 15:56:12.205070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.854 qpair failed and we were unable to recover it. 00:38:32.854 [2024-10-01 15:56:12.205450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.854 [2024-10-01 15:56:12.205488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.854 qpair failed and we were unable to recover it. 
00:38:32.854 [2024-10-01 15:56:12.205923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.854 [2024-10-01 15:56:12.205963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.854 qpair failed and we were unable to recover it. 00:38:32.854 [2024-10-01 15:56:12.206194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.854 [2024-10-01 15:56:12.206232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.854 qpair failed and we were unable to recover it. 00:38:32.854 [2024-10-01 15:56:12.206494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.854 [2024-10-01 15:56:12.206535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.854 qpair failed and we were unable to recover it. 00:38:32.854 [2024-10-01 15:56:12.206933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.854 [2024-10-01 15:56:12.206973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.854 qpair failed and we were unable to recover it. 00:38:32.854 [2024-10-01 15:56:12.207382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.854 [2024-10-01 15:56:12.207421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.854 qpair failed and we were unable to recover it. 
00:38:32.854 [2024-10-01 15:56:12.207565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.854 [2024-10-01 15:56:12.207614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.854 qpair failed and we were unable to recover it. 00:38:32.854 [2024-10-01 15:56:12.207992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.854 [2024-10-01 15:56:12.208032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.854 qpair failed and we were unable to recover it. 00:38:32.855 [2024-10-01 15:56:12.208426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.855 [2024-10-01 15:56:12.208463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.855 qpair failed and we were unable to recover it. 00:38:32.855 [2024-10-01 15:56:12.208874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.855 [2024-10-01 15:56:12.208924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.855 qpair failed and we were unable to recover it. 00:38:32.855 [2024-10-01 15:56:12.209305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.855 [2024-10-01 15:56:12.209346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.855 qpair failed and we were unable to recover it. 
00:38:32.855 [2024-10-01 15:56:12.209757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.855 [2024-10-01 15:56:12.209795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.855 qpair failed and we were unable to recover it. 00:38:32.855 [2024-10-01 15:56:12.210199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.855 [2024-10-01 15:56:12.210239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.855 qpair failed and we were unable to recover it. 00:38:32.855 [2024-10-01 15:56:12.210632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.855 [2024-10-01 15:56:12.210673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.855 qpair failed and we were unable to recover it. 00:38:32.855 [2024-10-01 15:56:12.211073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.855 [2024-10-01 15:56:12.211115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.855 qpair failed and we were unable to recover it. 00:38:32.855 [2024-10-01 15:56:12.211371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.855 [2024-10-01 15:56:12.211409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.855 qpair failed and we were unable to recover it. 
00:38:32.855 [2024-10-01 15:56:12.211786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.855 [2024-10-01 15:56:12.211826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.855 qpair failed and we were unable to recover it.
00:38:32.855 [2024-10-01 15:56:12.212266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.855 [2024-10-01 15:56:12.212306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.855 qpair failed and we were unable to recover it.
00:38:32.855 [2024-10-01 15:56:12.212666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.855 [2024-10-01 15:56:12.212705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.855 qpair failed and we were unable to recover it.
00:38:32.855 [2024-10-01 15:56:12.213088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.855 [2024-10-01 15:56:12.213128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.855 qpair failed and we were unable to recover it.
00:38:32.855 [2024-10-01 15:56:12.213494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.855 [2024-10-01 15:56:12.213533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.855 qpair failed and we were unable to recover it.
00:38:32.855 [2024-10-01 15:56:12.213917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.855 [2024-10-01 15:56:12.213957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.855 qpair failed and we were unable to recover it.
00:38:32.855 [2024-10-01 15:56:12.214228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.855 [2024-10-01 15:56:12.214267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.855 qpair failed and we were unable to recover it.
00:38:32.855 [2024-10-01 15:56:12.214660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.855 [2024-10-01 15:56:12.214700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.855 qpair failed and we were unable to recover it.
00:38:32.855 [2024-10-01 15:56:12.215088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.855 [2024-10-01 15:56:12.215130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.855 qpair failed and we were unable to recover it.
00:38:32.855 [2024-10-01 15:56:12.215528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.855 [2024-10-01 15:56:12.215566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.855 qpair failed and we were unable to recover it.
00:38:32.855 [2024-10-01 15:56:12.215950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.855 [2024-10-01 15:56:12.215989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.855 qpair failed and we were unable to recover it.
00:38:32.855 [2024-10-01 15:56:12.216367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.855 [2024-10-01 15:56:12.216414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.855 qpair failed and we were unable to recover it.
00:38:32.855 [2024-10-01 15:56:12.216685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.855 [2024-10-01 15:56:12.216724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.855 qpair failed and we were unable to recover it.
00:38:32.855 [2024-10-01 15:56:12.217068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.855 [2024-10-01 15:56:12.217108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.855 qpair failed and we were unable to recover it.
00:38:32.855 [2024-10-01 15:56:12.217374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.855 [2024-10-01 15:56:12.217414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.855 qpair failed and we were unable to recover it.
00:38:32.855 [2024-10-01 15:56:12.217822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.855 [2024-10-01 15:56:12.217861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.855 qpair failed and we were unable to recover it.
00:38:32.855 [2024-10-01 15:56:12.218234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.855 [2024-10-01 15:56:12.218274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.855 qpair failed and we were unable to recover it.
00:38:32.855 [2024-10-01 15:56:12.218542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.855 [2024-10-01 15:56:12.218582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.855 qpair failed and we were unable to recover it.
00:38:32.855 [2024-10-01 15:56:12.218959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.855 [2024-10-01 15:56:12.219000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.855 qpair failed and we were unable to recover it.
00:38:32.855 [2024-10-01 15:56:12.219242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.855 [2024-10-01 15:56:12.219281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.855 qpair failed and we were unable to recover it.
00:38:32.855 [2024-10-01 15:56:12.219511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.855 [2024-10-01 15:56:12.219549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.855 qpair failed and we were unable to recover it.
00:38:32.855 [2024-10-01 15:56:12.219959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.855 [2024-10-01 15:56:12.219999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.855 qpair failed and we were unable to recover it.
00:38:32.855 [2024-10-01 15:56:12.220385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.855 [2024-10-01 15:56:12.220424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.855 qpair failed and we were unable to recover it.
00:38:32.855 [2024-10-01 15:56:12.220794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.855 [2024-10-01 15:56:12.220832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.855 qpair failed and we were unable to recover it.
00:38:32.855 [2024-10-01 15:56:12.221236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.855 [2024-10-01 15:56:12.221276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.855 qpair failed and we were unable to recover it.
00:38:32.855 [2024-10-01 15:56:12.221648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.855 [2024-10-01 15:56:12.221687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.855 qpair failed and we were unable to recover it.
00:38:32.855 [2024-10-01 15:56:12.222076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.855 [2024-10-01 15:56:12.222117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.855 qpair failed and we were unable to recover it.
00:38:32.855 [2024-10-01 15:56:12.222463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.855 [2024-10-01 15:56:12.222502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.855 qpair failed and we were unable to recover it.
00:38:32.855 [2024-10-01 15:56:12.222890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.855 [2024-10-01 15:56:12.222942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.855 qpair failed and we were unable to recover it.
00:38:32.855 [2024-10-01 15:56:12.223359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.855 [2024-10-01 15:56:12.223398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.855 qpair failed and we were unable to recover it.
00:38:32.855 [2024-10-01 15:56:12.223662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.856 [2024-10-01 15:56:12.223700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.856 qpair failed and we were unable to recover it.
00:38:32.856 [2024-10-01 15:56:12.223958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.856 [2024-10-01 15:56:12.223999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.856 qpair failed and we were unable to recover it.
00:38:32.856 [2024-10-01 15:56:12.224437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.856 [2024-10-01 15:56:12.224476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.856 qpair failed and we were unable to recover it.
00:38:32.856 [2024-10-01 15:56:12.224864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.856 [2024-10-01 15:56:12.224924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.856 qpair failed and we were unable to recover it.
00:38:32.856 [2024-10-01 15:56:12.225292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.856 [2024-10-01 15:56:12.225332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.856 qpair failed and we were unable to recover it.
00:38:32.856 [2024-10-01 15:56:12.225710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.856 [2024-10-01 15:56:12.225749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.856 qpair failed and we were unable to recover it.
00:38:32.856 [2024-10-01 15:56:12.225999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.856 [2024-10-01 15:56:12.226040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.856 qpair failed and we were unable to recover it.
00:38:32.856 [2024-10-01 15:56:12.226434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.856 [2024-10-01 15:56:12.226472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.856 qpair failed and we were unable to recover it.
00:38:32.856 [2024-10-01 15:56:12.226907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.856 [2024-10-01 15:56:12.226948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.856 qpair failed and we were unable to recover it.
00:38:32.856 [2024-10-01 15:56:12.227312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.856 [2024-10-01 15:56:12.227351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.856 qpair failed and we were unable to recover it.
00:38:32.856 [2024-10-01 15:56:12.227751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.856 [2024-10-01 15:56:12.227789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.856 qpair failed and we were unable to recover it.
00:38:32.856 [2024-10-01 15:56:12.228198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.856 [2024-10-01 15:56:12.228240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.856 qpair failed and we were unable to recover it.
00:38:32.856 [2024-10-01 15:56:12.228643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.856 [2024-10-01 15:56:12.228683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.856 qpair failed and we were unable to recover it.
00:38:32.856 [2024-10-01 15:56:12.229083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.856 [2024-10-01 15:56:12.229122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.856 qpair failed and we were unable to recover it.
00:38:32.856 [2024-10-01 15:56:12.229560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.856 [2024-10-01 15:56:12.229599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.856 qpair failed and we were unable to recover it.
00:38:32.856 [2024-10-01 15:56:12.229980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.856 [2024-10-01 15:56:12.230020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.856 qpair failed and we were unable to recover it.
00:38:32.856 [2024-10-01 15:56:12.230405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.856 [2024-10-01 15:56:12.230444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.856 qpair failed and we were unable to recover it.
00:38:32.856 [2024-10-01 15:56:12.230803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.856 [2024-10-01 15:56:12.230842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.856 qpair failed and we were unable to recover it.
00:38:32.856 [2024-10-01 15:56:12.231245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.856 [2024-10-01 15:56:12.231286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.856 qpair failed and we were unable to recover it.
00:38:32.856 [2024-10-01 15:56:12.231648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.856 [2024-10-01 15:56:12.231687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.856 qpair failed and we were unable to recover it.
00:38:32.856 [2024-10-01 15:56:12.231944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.856 [2024-10-01 15:56:12.231984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.856 qpair failed and we were unable to recover it.
00:38:32.856 [2024-10-01 15:56:12.232266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.856 [2024-10-01 15:56:12.232311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.856 qpair failed and we were unable to recover it.
00:38:32.856 [2024-10-01 15:56:12.232562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.856 [2024-10-01 15:56:12.232602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.856 qpair failed and we were unable to recover it.
00:38:32.856 [2024-10-01 15:56:12.232955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.856 [2024-10-01 15:56:12.232996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.856 qpair failed and we were unable to recover it.
00:38:32.856 [2024-10-01 15:56:12.233408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.856 [2024-10-01 15:56:12.233446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.856 qpair failed and we were unable to recover it.
00:38:32.856 [2024-10-01 15:56:12.233827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.856 [2024-10-01 15:56:12.233866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.856 qpair failed and we were unable to recover it.
00:38:32.856 [2024-10-01 15:56:12.234268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.856 [2024-10-01 15:56:12.234307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.856 qpair failed and we were unable to recover it.
00:38:32.856 [2024-10-01 15:56:12.234758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.856 [2024-10-01 15:56:12.234798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.856 qpair failed and we were unable to recover it.
00:38:32.856 [2024-10-01 15:56:12.235109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.856 [2024-10-01 15:56:12.235150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.856 qpair failed and we were unable to recover it.
00:38:32.856 [2024-10-01 15:56:12.235418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.856 [2024-10-01 15:56:12.235456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.856 qpair failed and we were unable to recover it.
00:38:32.856 [2024-10-01 15:56:12.235716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.856 [2024-10-01 15:56:12.235757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.856 qpair failed and we were unable to recover it.
00:38:32.856 [2024-10-01 15:56:12.236055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.856 [2024-10-01 15:56:12.236095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.856 qpair failed and we were unable to recover it.
00:38:32.856 [2024-10-01 15:56:12.236498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.856 [2024-10-01 15:56:12.236537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.856 qpair failed and we were unable to recover it.
00:38:32.856 [2024-10-01 15:56:12.236862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.856 [2024-10-01 15:56:12.236911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.856 qpair failed and we were unable to recover it.
00:38:32.856 [2024-10-01 15:56:12.237298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.856 [2024-10-01 15:56:12.237337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.856 qpair failed and we were unable to recover it.
00:38:32.856 [2024-10-01 15:56:12.237793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.856 [2024-10-01 15:56:12.237831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.856 qpair failed and we were unable to recover it.
00:38:32.856 [2024-10-01 15:56:12.238119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.856 [2024-10-01 15:56:12.238159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.856 qpair failed and we were unable to recover it.
00:38:32.856 [2024-10-01 15:56:12.238424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.856 [2024-10-01 15:56:12.238463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.856 qpair failed and we were unable to recover it.
00:38:32.856 [2024-10-01 15:56:12.238741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.856 [2024-10-01 15:56:12.238782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.856 qpair failed and we were unable to recover it.
00:38:32.857 [2024-10-01 15:56:12.239038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.857 [2024-10-01 15:56:12.239079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.857 qpair failed and we were unable to recover it.
00:38:32.857 [2024-10-01 15:56:12.239337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.857 [2024-10-01 15:56:12.239375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.857 qpair failed and we were unable to recover it.
00:38:32.857 [2024-10-01 15:56:12.239603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.857 [2024-10-01 15:56:12.239640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.857 qpair failed and we were unable to recover it.
00:38:32.857 [2024-10-01 15:56:12.240012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.857 [2024-10-01 15:56:12.240053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.857 qpair failed and we were unable to recover it.
00:38:32.857 [2024-10-01 15:56:12.240336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.857 [2024-10-01 15:56:12.240376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.857 qpair failed and we were unable to recover it.
00:38:32.857 [2024-10-01 15:56:12.240777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.857 [2024-10-01 15:56:12.240815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.857 qpair failed and we were unable to recover it.
00:38:32.857 [2024-10-01 15:56:12.241190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.857 [2024-10-01 15:56:12.241231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.857 qpair failed and we were unable to recover it.
00:38:32.857 [2024-10-01 15:56:12.241490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.857 [2024-10-01 15:56:12.241530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.857 qpair failed and we were unable to recover it.
00:38:32.857 [2024-10-01 15:56:12.241908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.857 [2024-10-01 15:56:12.241947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.857 qpair failed and we were unable to recover it.
00:38:32.857 [2024-10-01 15:56:12.242221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.857 [2024-10-01 15:56:12.242268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.857 qpair failed and we were unable to recover it.
00:38:32.857 [2024-10-01 15:56:12.242666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.857 [2024-10-01 15:56:12.242705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.857 qpair failed and we were unable to recover it.
00:38:32.857 [2024-10-01 15:56:12.243098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.857 [2024-10-01 15:56:12.243138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.857 qpair failed and we were unable to recover it.
00:38:32.857 [2024-10-01 15:56:12.243517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.857 [2024-10-01 15:56:12.243557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.857 qpair failed and we were unable to recover it.
00:38:32.857 [2024-10-01 15:56:12.243835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.857 [2024-10-01 15:56:12.243874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.857 qpair failed and we were unable to recover it.
00:38:32.857 [2024-10-01 15:56:12.244270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.857 [2024-10-01 15:56:12.244311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.857 qpair failed and we were unable to recover it.
00:38:32.857 [2024-10-01 15:56:12.244545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.857 [2024-10-01 15:56:12.244584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.857 qpair failed and we were unable to recover it.
00:38:32.857 [2024-10-01 15:56:12.244955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.857 [2024-10-01 15:56:12.244995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.857 qpair failed and we were unable to recover it.
00:38:32.857 [2024-10-01 15:56:12.245377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.857 [2024-10-01 15:56:12.245416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.857 qpair failed and we were unable to recover it.
00:38:32.857 [2024-10-01 15:56:12.245832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.857 [2024-10-01 15:56:12.245871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.857 qpair failed and we were unable to recover it.
00:38:32.857 [2024-10-01 15:56:12.246147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.857 [2024-10-01 15:56:12.246190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.857 qpair failed and we were unable to recover it.
00:38:32.857 [2024-10-01 15:56:12.246455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.857 [2024-10-01 15:56:12.246494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.857 qpair failed and we were unable to recover it.
00:38:32.857 [2024-10-01 15:56:12.246785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.857 [2024-10-01 15:56:12.246825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.857 qpair failed and we were unable to recover it.
00:38:32.857 [2024-10-01 15:56:12.247136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.857 [2024-10-01 15:56:12.247178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.857 qpair failed and we were unable to recover it.
00:38:32.857 [2024-10-01 15:56:12.247578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.857 [2024-10-01 15:56:12.247617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.857 qpair failed and we were unable to recover it.
00:38:32.857 [2024-10-01 15:56:12.248001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.857 [2024-10-01 15:56:12.248041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.857 qpair failed and we were unable to recover it.
00:38:32.857 [2024-10-01 15:56:12.248270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.857 [2024-10-01 15:56:12.248310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:32.857 qpair failed and we were unable to recover it.
00:38:32.857 [2024-10-01 15:56:12.248695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.857 [2024-10-01 15:56:12.248735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.857 qpair failed and we were unable to recover it. 00:38:32.857 [2024-10-01 15:56:12.249018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.857 [2024-10-01 15:56:12.249072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.857 qpair failed and we were unable to recover it. 00:38:32.857 [2024-10-01 15:56:12.249377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.857 [2024-10-01 15:56:12.249417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.857 qpair failed and we were unable to recover it. 00:38:32.857 [2024-10-01 15:56:12.249805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.857 [2024-10-01 15:56:12.249843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.857 qpair failed and we were unable to recover it. 00:38:32.857 [2024-10-01 15:56:12.250171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.857 [2024-10-01 15:56:12.250210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.857 qpair failed and we were unable to recover it. 
00:38:32.857 [2024-10-01 15:56:12.250591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.857 [2024-10-01 15:56:12.250630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.857 qpair failed and we were unable to recover it. 00:38:32.857 [2024-10-01 15:56:12.251031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.857 [2024-10-01 15:56:12.251072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.857 qpair failed and we were unable to recover it. 00:38:32.857 [2024-10-01 15:56:12.251453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.857 [2024-10-01 15:56:12.251492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.857 qpair failed and we were unable to recover it. 00:38:32.857 [2024-10-01 15:56:12.251883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.857 [2024-10-01 15:56:12.251931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.857 qpair failed and we were unable to recover it. 00:38:32.857 [2024-10-01 15:56:12.252313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.857 [2024-10-01 15:56:12.252352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.857 qpair failed and we were unable to recover it. 
00:38:32.857 [2024-10-01 15:56:12.252753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.857 [2024-10-01 15:56:12.252794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.857 qpair failed and we were unable to recover it. 00:38:32.857 [2024-10-01 15:56:12.253063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.857 [2024-10-01 15:56:12.253104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.857 qpair failed and we were unable to recover it. 00:38:32.857 [2024-10-01 15:56:12.253460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.857 [2024-10-01 15:56:12.253500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.858 qpair failed and we were unable to recover it. 00:38:32.858 [2024-10-01 15:56:12.253875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.858 [2024-10-01 15:56:12.253925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.858 qpair failed and we were unable to recover it. 00:38:32.858 [2024-10-01 15:56:12.254314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.858 [2024-10-01 15:56:12.254352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.858 qpair failed and we were unable to recover it. 
00:38:32.858 [2024-10-01 15:56:12.254480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.858 [2024-10-01 15:56:12.254526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.858 qpair failed and we were unable to recover it. 00:38:32.858 [2024-10-01 15:56:12.254876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.858 [2024-10-01 15:56:12.254934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.858 qpair failed and we were unable to recover it. 00:38:32.858 [2024-10-01 15:56:12.255190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.858 [2024-10-01 15:56:12.255232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.858 qpair failed and we were unable to recover it. 00:38:32.858 [2024-10-01 15:56:12.255616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.858 [2024-10-01 15:56:12.255656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.858 qpair failed and we were unable to recover it. 00:38:32.858 [2024-10-01 15:56:12.256027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.858 [2024-10-01 15:56:12.256067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.858 qpair failed and we were unable to recover it. 
00:38:32.858 [2024-10-01 15:56:12.256477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.858 [2024-10-01 15:56:12.256518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.858 qpair failed and we were unable to recover it. 00:38:32.858 [2024-10-01 15:56:12.256789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.858 [2024-10-01 15:56:12.256828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.858 qpair failed and we were unable to recover it. 00:38:32.858 [2024-10-01 15:56:12.257265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.858 [2024-10-01 15:56:12.257306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.858 qpair failed and we were unable to recover it. 00:38:32.858 [2024-10-01 15:56:12.257549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.858 [2024-10-01 15:56:12.257596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.858 qpair failed and we were unable to recover it. 00:38:32.858 [2024-10-01 15:56:12.257862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.858 [2024-10-01 15:56:12.257915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.858 qpair failed and we were unable to recover it. 
00:38:32.858 [2024-10-01 15:56:12.258184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.858 [2024-10-01 15:56:12.258223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.858 qpair failed and we were unable to recover it. 00:38:32.858 [2024-10-01 15:56:12.258631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.858 [2024-10-01 15:56:12.258670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.858 qpair failed and we were unable to recover it. 00:38:32.858 [2024-10-01 15:56:12.259071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.858 [2024-10-01 15:56:12.259113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.858 qpair failed and we were unable to recover it. 00:38:32.858 [2024-10-01 15:56:12.259366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.858 [2024-10-01 15:56:12.259405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.858 qpair failed and we were unable to recover it. 00:38:32.858 [2024-10-01 15:56:12.259797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.858 [2024-10-01 15:56:12.259836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.858 qpair failed and we were unable to recover it. 
00:38:32.858 [2024-10-01 15:56:12.260231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.858 [2024-10-01 15:56:12.260272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.858 qpair failed and we were unable to recover it. 00:38:32.858 [2024-10-01 15:56:12.260650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.858 [2024-10-01 15:56:12.260688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.858 qpair failed and we were unable to recover it. 00:38:32.858 [2024-10-01 15:56:12.260967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.858 [2024-10-01 15:56:12.261008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.858 qpair failed and we were unable to recover it. 00:38:32.858 [2024-10-01 15:56:12.261241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.858 [2024-10-01 15:56:12.261280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.858 qpair failed and we were unable to recover it. 00:38:32.858 [2024-10-01 15:56:12.261663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.858 [2024-10-01 15:56:12.261702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.858 qpair failed and we were unable to recover it. 
00:38:32.858 [2024-10-01 15:56:12.262090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.858 [2024-10-01 15:56:12.262130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.858 qpair failed and we were unable to recover it. 00:38:32.858 [2024-10-01 15:56:12.262383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.858 [2024-10-01 15:56:12.262422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.858 qpair failed and we were unable to recover it. 00:38:32.858 [2024-10-01 15:56:12.262799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.858 [2024-10-01 15:56:12.262838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.858 qpair failed and we were unable to recover it. 00:38:32.858 [2024-10-01 15:56:12.263131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.858 [2024-10-01 15:56:12.263173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.858 qpair failed and we were unable to recover it. 00:38:32.858 [2024-10-01 15:56:12.263535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.858 [2024-10-01 15:56:12.263573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.858 qpair failed and we were unable to recover it. 
00:38:32.858 [2024-10-01 15:56:12.263943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.858 [2024-10-01 15:56:12.263984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.858 qpair failed and we were unable to recover it. 00:38:32.858 [2024-10-01 15:56:12.264263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.858 [2024-10-01 15:56:12.264303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.858 qpair failed and we were unable to recover it. 00:38:32.858 [2024-10-01 15:56:12.264697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.858 [2024-10-01 15:56:12.264735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.858 qpair failed and we were unable to recover it. 00:38:32.858 [2024-10-01 15:56:12.265127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.858 [2024-10-01 15:56:12.265167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.858 qpair failed and we were unable to recover it. 00:38:32.858 [2024-10-01 15:56:12.265546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.858 [2024-10-01 15:56:12.265586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.858 qpair failed and we were unable to recover it. 
00:38:32.858 [2024-10-01 15:56:12.265958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.858 [2024-10-01 15:56:12.265998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.858 qpair failed and we were unable to recover it. 00:38:32.858 [2024-10-01 15:56:12.266376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.858 [2024-10-01 15:56:12.266416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.858 qpair failed and we were unable to recover it. 00:38:32.859 [2024-10-01 15:56:12.266687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.859 [2024-10-01 15:56:12.266730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.859 qpair failed and we were unable to recover it. 00:38:32.859 [2024-10-01 15:56:12.267115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.859 [2024-10-01 15:56:12.267157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.859 qpair failed and we were unable to recover it. 00:38:32.859 [2024-10-01 15:56:12.267578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.859 [2024-10-01 15:56:12.267617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.859 qpair failed and we were unable to recover it. 
00:38:32.859 [2024-10-01 15:56:12.268008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.859 [2024-10-01 15:56:12.268048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.859 qpair failed and we were unable to recover it. 00:38:32.859 [2024-10-01 15:56:12.268310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.859 [2024-10-01 15:56:12.268348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.859 qpair failed and we were unable to recover it. 00:38:32.859 [2024-10-01 15:56:12.268717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.859 [2024-10-01 15:56:12.268755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.859 qpair failed and we were unable to recover it. 00:38:32.859 [2024-10-01 15:56:12.269138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.859 [2024-10-01 15:56:12.269178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.859 qpair failed and we were unable to recover it. 00:38:32.859 [2024-10-01 15:56:12.269477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.859 [2024-10-01 15:56:12.269516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.859 qpair failed and we were unable to recover it. 
00:38:32.859 [2024-10-01 15:56:12.269883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.859 [2024-10-01 15:56:12.269939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.859 qpair failed and we were unable to recover it. 00:38:32.859 [2024-10-01 15:56:12.270255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.859 [2024-10-01 15:56:12.270294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.859 qpair failed and we were unable to recover it. 00:38:32.859 [2024-10-01 15:56:12.270677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.859 [2024-10-01 15:56:12.270715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.859 qpair failed and we were unable to recover it. 00:38:32.859 [2024-10-01 15:56:12.271105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.859 [2024-10-01 15:56:12.271144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.859 qpair failed and we were unable to recover it. 00:38:32.859 [2024-10-01 15:56:12.271272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.859 [2024-10-01 15:56:12.271318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.859 qpair failed and we were unable to recover it. 
00:38:32.859 [2024-10-01 15:56:12.271672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.859 [2024-10-01 15:56:12.271712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.859 qpair failed and we were unable to recover it. 00:38:32.859 [2024-10-01 15:56:12.272102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.859 [2024-10-01 15:56:12.272142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.859 qpair failed and we were unable to recover it. 00:38:32.859 [2024-10-01 15:56:12.272479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.859 [2024-10-01 15:56:12.272519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.859 qpair failed and we were unable to recover it. 00:38:32.859 [2024-10-01 15:56:12.272907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.859 [2024-10-01 15:56:12.272953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.859 qpair failed and we were unable to recover it. 00:38:32.859 [2024-10-01 15:56:12.273211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.859 [2024-10-01 15:56:12.273250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.859 qpair failed and we were unable to recover it. 
00:38:32.859 [2024-10-01 15:56:12.273543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.859 [2024-10-01 15:56:12.273582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.859 qpair failed and we were unable to recover it. 00:38:32.859 [2024-10-01 15:56:12.273939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.859 [2024-10-01 15:56:12.273978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.859 qpair failed and we were unable to recover it. 00:38:32.859 [2024-10-01 15:56:12.274381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.859 [2024-10-01 15:56:12.274420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.859 qpair failed and we were unable to recover it. 00:38:32.859 [2024-10-01 15:56:12.274808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.859 [2024-10-01 15:56:12.274847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.859 qpair failed and we were unable to recover it. 00:38:32.859 [2024-10-01 15:56:12.274986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.859 [2024-10-01 15:56:12.275033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.859 qpair failed and we were unable to recover it. 
00:38:32.859 [2024-10-01 15:56:12.275314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.859 [2024-10-01 15:56:12.275354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.859 qpair failed and we were unable to recover it. 00:38:32.859 [2024-10-01 15:56:12.275756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.859 [2024-10-01 15:56:12.275796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.859 qpair failed and we were unable to recover it. 00:38:32.859 [2024-10-01 15:56:12.276194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.859 [2024-10-01 15:56:12.276235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.859 qpair failed and we were unable to recover it. 00:38:32.859 [2024-10-01 15:56:12.276633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.859 [2024-10-01 15:56:12.276672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.859 qpair failed and we were unable to recover it. 00:38:32.859 [2024-10-01 15:56:12.276949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.859 [2024-10-01 15:56:12.276992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.859 qpair failed and we were unable to recover it. 
00:38:32.859 [2024-10-01 15:56:12.277256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.859 [2024-10-01 15:56:12.277299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.859 qpair failed and we were unable to recover it. 00:38:32.859 [2024-10-01 15:56:12.277560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.859 [2024-10-01 15:56:12.277600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.859 qpair failed and we were unable to recover it. 00:38:32.859 [2024-10-01 15:56:12.277851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.859 [2024-10-01 15:56:12.277891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.859 qpair failed and we were unable to recover it. 00:38:32.859 [2024-10-01 15:56:12.278165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.859 [2024-10-01 15:56:12.278203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.859 qpair failed and we were unable to recover it. 00:38:32.859 [2024-10-01 15:56:12.278585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.859 [2024-10-01 15:56:12.278624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.859 qpair failed and we were unable to recover it. 
00:38:32.859 [2024-10-01 15:56:12.279009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.859 [2024-10-01 15:56:12.279049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.859 qpair failed and we were unable to recover it. 00:38:32.859 [2024-10-01 15:56:12.279427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.859 [2024-10-01 15:56:12.279467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.859 qpair failed and we were unable to recover it. 00:38:32.859 [2024-10-01 15:56:12.279725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.859 [2024-10-01 15:56:12.279765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.859 qpair failed and we were unable to recover it. 00:38:32.859 [2024-10-01 15:56:12.280020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.859 [2024-10-01 15:56:12.280061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:32.859 qpair failed and we were unable to recover it. 00:38:32.859 [2024-10-01 15:56:12.280420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.138 [2024-10-01 15:56:12.280458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.138 qpair failed and we were unable to recover it. 
00:38:33.138 [2024-10-01 15:56:12.280873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.138 [2024-10-01 15:56:12.280936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.138 qpair failed and we were unable to recover it. 00:38:33.138 [2024-10-01 15:56:12.281219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.138 [2024-10-01 15:56:12.281258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.138 qpair failed and we were unable to recover it. 00:38:33.138 [2024-10-01 15:56:12.281677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.138 [2024-10-01 15:56:12.281717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.138 qpair failed and we were unable to recover it. 00:38:33.138 [2024-10-01 15:56:12.282112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.138 [2024-10-01 15:56:12.282152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.138 qpair failed and we were unable to recover it. 00:38:33.139 [2024-10-01 15:56:12.282535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.139 [2024-10-01 15:56:12.282574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.139 qpair failed and we were unable to recover it. 
00:38:33.142 [2024-10-01 15:56:12.324824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.142 [2024-10-01 15:56:12.324862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.142 qpair failed and we were unable to recover it. 00:38:33.142 [2024-10-01 15:56:12.325267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.142 [2024-10-01 15:56:12.325308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.142 qpair failed and we were unable to recover it. 00:38:33.142 [2024-10-01 15:56:12.325692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.142 [2024-10-01 15:56:12.325732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.142 qpair failed and we were unable to recover it. 00:38:33.142 [2024-10-01 15:56:12.326119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.142 [2024-10-01 15:56:12.326160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.142 qpair failed and we were unable to recover it. 00:38:33.142 [2024-10-01 15:56:12.326559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.142 [2024-10-01 15:56:12.326597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.142 qpair failed and we were unable to recover it. 
00:38:33.142 [2024-10-01 15:56:12.326824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.142 [2024-10-01 15:56:12.326863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.142 qpair failed and we were unable to recover it. 00:38:33.142 [2024-10-01 15:56:12.327264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.142 [2024-10-01 15:56:12.327303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.142 qpair failed and we were unable to recover it. 00:38:33.142 [2024-10-01 15:56:12.327686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.142 [2024-10-01 15:56:12.327724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.142 qpair failed and we were unable to recover it. 00:38:33.142 [2024-10-01 15:56:12.327976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.142 [2024-10-01 15:56:12.328016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.142 qpair failed and we were unable to recover it. 00:38:33.142 [2024-10-01 15:56:12.328464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.142 [2024-10-01 15:56:12.328503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.142 qpair failed and we were unable to recover it. 
00:38:33.142 [2024-10-01 15:56:12.328768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.142 [2024-10-01 15:56:12.328809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.142 qpair failed and we were unable to recover it. 00:38:33.143 [2024-10-01 15:56:12.329071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.143 [2024-10-01 15:56:12.329111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.143 qpair failed and we were unable to recover it. 00:38:33.143 [2024-10-01 15:56:12.329506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.143 [2024-10-01 15:56:12.329546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.143 qpair failed and we were unable to recover it. 00:38:33.143 [2024-10-01 15:56:12.329787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.143 [2024-10-01 15:56:12.329826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.143 qpair failed and we were unable to recover it. 00:38:33.143 [2024-10-01 15:56:12.330240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.143 [2024-10-01 15:56:12.330279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.143 qpair failed and we were unable to recover it. 
00:38:33.143 [2024-10-01 15:56:12.330675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.143 [2024-10-01 15:56:12.330715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.143 qpair failed and we were unable to recover it. 00:38:33.143 [2024-10-01 15:56:12.330982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.143 [2024-10-01 15:56:12.331023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.143 qpair failed and we were unable to recover it. 00:38:33.143 [2024-10-01 15:56:12.331427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.143 [2024-10-01 15:56:12.331465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.143 qpair failed and we were unable to recover it. 00:38:33.143 [2024-10-01 15:56:12.331693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.143 [2024-10-01 15:56:12.331732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.143 qpair failed and we were unable to recover it. 00:38:33.143 [2024-10-01 15:56:12.332184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.143 [2024-10-01 15:56:12.332225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.143 qpair failed and we were unable to recover it. 
00:38:33.143 [2024-10-01 15:56:12.332514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.143 [2024-10-01 15:56:12.332554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.143 qpair failed and we were unable to recover it. 00:38:33.143 [2024-10-01 15:56:12.332850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.143 [2024-10-01 15:56:12.332890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.143 qpair failed and we were unable to recover it. 00:38:33.143 [2024-10-01 15:56:12.333285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.143 [2024-10-01 15:56:12.333325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.143 qpair failed and we were unable to recover it. 00:38:33.143 [2024-10-01 15:56:12.333581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.143 [2024-10-01 15:56:12.333627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.143 qpair failed and we were unable to recover it. 00:38:33.143 [2024-10-01 15:56:12.334034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.143 [2024-10-01 15:56:12.334075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.143 qpair failed and we were unable to recover it. 
00:38:33.143 [2024-10-01 15:56:12.334461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.143 [2024-10-01 15:56:12.334501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.143 qpair failed and we were unable to recover it. 00:38:33.143 [2024-10-01 15:56:12.334629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.143 [2024-10-01 15:56:12.334675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.143 qpair failed and we were unable to recover it. 00:38:33.143 [2024-10-01 15:56:12.335044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.143 [2024-10-01 15:56:12.335084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.143 qpair failed and we were unable to recover it. 00:38:33.143 [2024-10-01 15:56:12.335212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.143 [2024-10-01 15:56:12.335258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.143 qpair failed and we were unable to recover it. 00:38:33.143 [2024-10-01 15:56:12.335688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.143 [2024-10-01 15:56:12.335727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.143 qpair failed and we were unable to recover it. 
00:38:33.143 [2024-10-01 15:56:12.336068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.143 [2024-10-01 15:56:12.336108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.143 qpair failed and we were unable to recover it. 00:38:33.143 [2024-10-01 15:56:12.336494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.143 [2024-10-01 15:56:12.336534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.143 qpair failed and we were unable to recover it. 00:38:33.143 [2024-10-01 15:56:12.336946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.143 [2024-10-01 15:56:12.336986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.143 qpair failed and we were unable to recover it. 00:38:33.143 [2024-10-01 15:56:12.337114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.143 [2024-10-01 15:56:12.337161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.143 qpair failed and we were unable to recover it. 00:38:33.143 [2024-10-01 15:56:12.337553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.143 [2024-10-01 15:56:12.337593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.143 qpair failed and we were unable to recover it. 
00:38:33.143 [2024-10-01 15:56:12.337872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.143 [2024-10-01 15:56:12.337926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.143 qpair failed and we were unable to recover it. 00:38:33.143 [2024-10-01 15:56:12.338324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.143 [2024-10-01 15:56:12.338363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.143 qpair failed and we were unable to recover it. 00:38:33.143 [2024-10-01 15:56:12.338758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.143 [2024-10-01 15:56:12.338798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.143 qpair failed and we were unable to recover it. 00:38:33.143 [2024-10-01 15:56:12.339188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.143 [2024-10-01 15:56:12.339229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.143 qpair failed and we were unable to recover it. 00:38:33.143 [2024-10-01 15:56:12.339622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.143 [2024-10-01 15:56:12.339661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.143 qpair failed and we were unable to recover it. 
00:38:33.143 [2024-10-01 15:56:12.340056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.143 [2024-10-01 15:56:12.340096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.144 qpair failed and we were unable to recover it. 00:38:33.144 [2024-10-01 15:56:12.340406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.144 [2024-10-01 15:56:12.340445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.144 qpair failed and we were unable to recover it. 00:38:33.144 [2024-10-01 15:56:12.340834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.144 [2024-10-01 15:56:12.340873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.144 qpair failed and we were unable to recover it. 00:38:33.144 [2024-10-01 15:56:12.341287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.144 [2024-10-01 15:56:12.341327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.144 qpair failed and we were unable to recover it. 00:38:33.144 [2024-10-01 15:56:12.341570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.144 [2024-10-01 15:56:12.341609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.144 qpair failed and we were unable to recover it. 
00:38:33.144 [2024-10-01 15:56:12.342023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.144 [2024-10-01 15:56:12.342063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.144 qpair failed and we were unable to recover it. 00:38:33.144 [2024-10-01 15:56:12.342321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.144 [2024-10-01 15:56:12.342361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.144 qpair failed and we were unable to recover it. 00:38:33.144 [2024-10-01 15:56:12.342743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.144 [2024-10-01 15:56:12.342782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.144 qpair failed and we were unable to recover it. 00:38:33.144 [2024-10-01 15:56:12.343182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.144 [2024-10-01 15:56:12.343224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.144 qpair failed and we were unable to recover it. 00:38:33.144 [2024-10-01 15:56:12.343609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.144 [2024-10-01 15:56:12.343648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.144 qpair failed and we were unable to recover it. 
00:38:33.144 [2024-10-01 15:56:12.343918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.144 [2024-10-01 15:56:12.343959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.144 qpair failed and we were unable to recover it. 00:38:33.144 [2024-10-01 15:56:12.344331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.144 [2024-10-01 15:56:12.344370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.144 qpair failed and we were unable to recover it. 00:38:33.144 [2024-10-01 15:56:12.344494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.144 [2024-10-01 15:56:12.344540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.144 qpair failed and we were unable to recover it. 00:38:33.144 [2024-10-01 15:56:12.344812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.144 [2024-10-01 15:56:12.344852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.144 qpair failed and we were unable to recover it. 00:38:33.144 [2024-10-01 15:56:12.345115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.144 [2024-10-01 15:56:12.345156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.144 qpair failed and we were unable to recover it. 
00:38:33.144 [2024-10-01 15:56:12.345515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.144 [2024-10-01 15:56:12.345555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.144 qpair failed and we were unable to recover it. 00:38:33.144 [2024-10-01 15:56:12.345930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.144 [2024-10-01 15:56:12.345969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.144 qpair failed and we were unable to recover it. 00:38:33.144 [2024-10-01 15:56:12.346364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.144 [2024-10-01 15:56:12.346403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.144 qpair failed and we were unable to recover it. 00:38:33.144 [2024-10-01 15:56:12.346654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.144 [2024-10-01 15:56:12.346694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.144 qpair failed and we were unable to recover it. 00:38:33.144 [2024-10-01 15:56:12.347055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.144 [2024-10-01 15:56:12.347094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.144 qpair failed and we were unable to recover it. 
00:38:33.144 [2024-10-01 15:56:12.347474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.144 [2024-10-01 15:56:12.347513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.144 qpair failed and we were unable to recover it. 00:38:33.144 [2024-10-01 15:56:12.347905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.144 [2024-10-01 15:56:12.347945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.144 qpair failed and we were unable to recover it. 00:38:33.144 [2024-10-01 15:56:12.348351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.144 [2024-10-01 15:56:12.348390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.144 qpair failed and we were unable to recover it. 00:38:33.144 [2024-10-01 15:56:12.348760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.144 [2024-10-01 15:56:12.348806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.144 qpair failed and we were unable to recover it. 00:38:33.144 [2024-10-01 15:56:12.349194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.144 [2024-10-01 15:56:12.349235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.144 qpair failed and we were unable to recover it. 
00:38:33.144 [2024-10-01 15:56:12.349469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.144 [2024-10-01 15:56:12.349507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.144 qpair failed and we were unable to recover it. 00:38:33.144 [2024-10-01 15:56:12.349856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.144 [2024-10-01 15:56:12.349924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.144 qpair failed and we were unable to recover it. 00:38:33.144 [2024-10-01 15:56:12.350310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.144 [2024-10-01 15:56:12.350349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.144 qpair failed and we were unable to recover it. 00:38:33.144 [2024-10-01 15:56:12.350601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.145 [2024-10-01 15:56:12.350640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.145 qpair failed and we were unable to recover it. 00:38:33.145 [2024-10-01 15:56:12.351002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.145 [2024-10-01 15:56:12.351043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.145 qpair failed and we were unable to recover it. 
00:38:33.145 [2024-10-01 15:56:12.351429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.145 [2024-10-01 15:56:12.351467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.145 qpair failed and we were unable to recover it. 00:38:33.145 [2024-10-01 15:56:12.351728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.145 [2024-10-01 15:56:12.351767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.145 qpair failed and we were unable to recover it. 00:38:33.145 [2024-10-01 15:56:12.351972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.145 [2024-10-01 15:56:12.352013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.145 qpair failed and we were unable to recover it. 00:38:33.145 [2024-10-01 15:56:12.352373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.145 [2024-10-01 15:56:12.352413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.145 qpair failed and we were unable to recover it. 00:38:33.145 [2024-10-01 15:56:12.352642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.145 [2024-10-01 15:56:12.352681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.145 qpair failed and we were unable to recover it. 
00:38:33.145 [2024-10-01 15:56:12.352936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.145 [2024-10-01 15:56:12.352975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.145 qpair failed and we were unable to recover it. 00:38:33.145 [2024-10-01 15:56:12.353366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.145 [2024-10-01 15:56:12.353406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.145 qpair failed and we were unable to recover it. 00:38:33.145 [2024-10-01 15:56:12.353798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.145 [2024-10-01 15:56:12.353838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.145 qpair failed and we were unable to recover it. 00:38:33.145 [2024-10-01 15:56:12.354269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.145 [2024-10-01 15:56:12.354310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.145 qpair failed and we were unable to recover it. 00:38:33.145 [2024-10-01 15:56:12.354565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.145 [2024-10-01 15:56:12.354605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.145 qpair failed and we were unable to recover it. 
00:38:33.148 [2024-10-01 15:56:12.398033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.148 [2024-10-01 15:56:12.398075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.148 qpair failed and we were unable to recover it. 00:38:33.148 [2024-10-01 15:56:12.398489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.148 [2024-10-01 15:56:12.398528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.148 qpair failed and we were unable to recover it. 00:38:33.148 [2024-10-01 15:56:12.398785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.148 [2024-10-01 15:56:12.398822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.148 qpair failed and we were unable to recover it. 00:38:33.148 [2024-10-01 15:56:12.399204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.148 [2024-10-01 15:56:12.399245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.148 qpair failed and we were unable to recover it. 00:38:33.148 [2024-10-01 15:56:12.399633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.148 [2024-10-01 15:56:12.399672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.148 qpair failed and we were unable to recover it. 
00:38:33.148 [2024-10-01 15:56:12.399834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.148 [2024-10-01 15:56:12.399885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.148 qpair failed and we were unable to recover it. 00:38:33.148 [2024-10-01 15:56:12.400168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.148 [2024-10-01 15:56:12.400208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.148 qpair failed and we were unable to recover it. 00:38:33.148 [2024-10-01 15:56:12.400469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.148 [2024-10-01 15:56:12.400509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.148 qpair failed and we were unable to recover it. 00:38:33.148 [2024-10-01 15:56:12.400876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.148 [2024-10-01 15:56:12.400927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.148 qpair failed and we were unable to recover it. 00:38:33.148 [2024-10-01 15:56:12.401303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.148 [2024-10-01 15:56:12.401343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.148 qpair failed and we were unable to recover it. 
00:38:33.148 [2024-10-01 15:56:12.401729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.148 [2024-10-01 15:56:12.401768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.148 qpair failed and we were unable to recover it. 00:38:33.148 [2024-10-01 15:56:12.402157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.148 [2024-10-01 15:56:12.402198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.148 qpair failed and we were unable to recover it. 00:38:33.148 [2024-10-01 15:56:12.402381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.148 [2024-10-01 15:56:12.402421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.148 qpair failed and we were unable to recover it. 00:38:33.148 [2024-10-01 15:56:12.402677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.148 [2024-10-01 15:56:12.402715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.148 qpair failed and we were unable to recover it. 00:38:33.148 [2024-10-01 15:56:12.403000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.148 [2024-10-01 15:56:12.403042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.148 qpair failed and we were unable to recover it. 
00:38:33.148 [2024-10-01 15:56:12.403420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.148 [2024-10-01 15:56:12.403458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.148 qpair failed and we were unable to recover it. 00:38:33.148 [2024-10-01 15:56:12.403840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.148 [2024-10-01 15:56:12.403879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.148 qpair failed and we were unable to recover it. 00:38:33.148 [2024-10-01 15:56:12.404289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.148 [2024-10-01 15:56:12.404328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.148 qpair failed and we were unable to recover it. 00:38:33.148 [2024-10-01 15:56:12.404709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.148 [2024-10-01 15:56:12.404747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.148 qpair failed and we were unable to recover it. 00:38:33.148 [2024-10-01 15:56:12.405129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.148 [2024-10-01 15:56:12.405170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.148 qpair failed and we were unable to recover it. 
00:38:33.148 [2024-10-01 15:56:12.405436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.148 [2024-10-01 15:56:12.405476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.148 qpair failed and we were unable to recover it. 00:38:33.148 [2024-10-01 15:56:12.405626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.148 [2024-10-01 15:56:12.405672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.148 qpair failed and we were unable to recover it. 00:38:33.148 [2024-10-01 15:56:12.406053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.148 [2024-10-01 15:56:12.406094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.149 qpair failed and we were unable to recover it. 00:38:33.149 [2024-10-01 15:56:12.406527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.149 [2024-10-01 15:56:12.406566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.149 qpair failed and we were unable to recover it. 00:38:33.149 [2024-10-01 15:56:12.406949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.149 [2024-10-01 15:56:12.406989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.149 qpair failed and we were unable to recover it. 
00:38:33.149 [2024-10-01 15:56:12.407221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.149 [2024-10-01 15:56:12.407260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.149 qpair failed and we were unable to recover it. 00:38:33.149 [2024-10-01 15:56:12.407662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.149 [2024-10-01 15:56:12.407701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.149 qpair failed and we were unable to recover it. 00:38:33.149 [2024-10-01 15:56:12.407932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.149 [2024-10-01 15:56:12.407972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.149 qpair failed and we were unable to recover it. 00:38:33.149 [2024-10-01 15:56:12.408344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.149 [2024-10-01 15:56:12.408384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.149 qpair failed and we were unable to recover it. 00:38:33.149 [2024-10-01 15:56:12.408638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.149 [2024-10-01 15:56:12.408677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.149 qpair failed and we were unable to recover it. 
00:38:33.149 [2024-10-01 15:56:12.409067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.149 [2024-10-01 15:56:12.409106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.149 qpair failed and we were unable to recover it. 00:38:33.149 [2024-10-01 15:56:12.409471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.149 [2024-10-01 15:56:12.409510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.149 qpair failed and we were unable to recover it. 00:38:33.149 [2024-10-01 15:56:12.409890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.149 [2024-10-01 15:56:12.409942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.149 qpair failed and we were unable to recover it. 00:38:33.149 [2024-10-01 15:56:12.410081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.149 [2024-10-01 15:56:12.410140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.149 qpair failed and we were unable to recover it. 00:38:33.149 [2024-10-01 15:56:12.410526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.149 [2024-10-01 15:56:12.410566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.149 qpair failed and we were unable to recover it. 
00:38:33.149 [2024-10-01 15:56:12.410817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.149 [2024-10-01 15:56:12.410856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.149 qpair failed and we were unable to recover it. 00:38:33.149 [2024-10-01 15:56:12.411247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.149 [2024-10-01 15:56:12.411287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.149 qpair failed and we were unable to recover it. 00:38:33.149 [2024-10-01 15:56:12.411665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.149 [2024-10-01 15:56:12.411703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.149 qpair failed and we were unable to recover it. 00:38:33.149 [2024-10-01 15:56:12.412096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.149 [2024-10-01 15:56:12.412136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.149 qpair failed and we were unable to recover it. 00:38:33.149 [2024-10-01 15:56:12.412531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.149 [2024-10-01 15:56:12.412571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.149 qpair failed and we were unable to recover it. 
00:38:33.149 [2024-10-01 15:56:12.412961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.149 [2024-10-01 15:56:12.413001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.149 qpair failed and we were unable to recover it. 00:38:33.149 [2024-10-01 15:56:12.413401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.149 [2024-10-01 15:56:12.413440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.149 qpair failed and we were unable to recover it. 00:38:33.149 [2024-10-01 15:56:12.413564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.149 [2024-10-01 15:56:12.413610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.149 qpair failed and we were unable to recover it. 00:38:33.149 [2024-10-01 15:56:12.413977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.149 [2024-10-01 15:56:12.414017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.149 qpair failed and we were unable to recover it. 00:38:33.149 [2024-10-01 15:56:12.414373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.149 [2024-10-01 15:56:12.414412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.149 qpair failed and we were unable to recover it. 
00:38:33.149 [2024-10-01 15:56:12.414793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.149 [2024-10-01 15:56:12.414832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.149 qpair failed and we were unable to recover it. 00:38:33.149 [2024-10-01 15:56:12.415114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.149 [2024-10-01 15:56:12.415155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.149 qpair failed and we were unable to recover it. 00:38:33.149 [2024-10-01 15:56:12.415575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.149 [2024-10-01 15:56:12.415615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.149 qpair failed and we were unable to recover it. 00:38:33.149 [2024-10-01 15:56:12.416000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.149 [2024-10-01 15:56:12.416040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.149 qpair failed and we were unable to recover it. 00:38:33.149 [2024-10-01 15:56:12.416170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.149 [2024-10-01 15:56:12.416215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.149 qpair failed and we were unable to recover it. 
00:38:33.149 [2024-10-01 15:56:12.416583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.149 [2024-10-01 15:56:12.416622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.149 qpair failed and we were unable to recover it. 00:38:33.150 [2024-10-01 15:56:12.417010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.150 [2024-10-01 15:56:12.417050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.150 qpair failed and we were unable to recover it. 00:38:33.150 [2024-10-01 15:56:12.417430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.150 [2024-10-01 15:56:12.417469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.150 qpair failed and we were unable to recover it. 00:38:33.150 [2024-10-01 15:56:12.417733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.150 [2024-10-01 15:56:12.417773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.150 qpair failed and we were unable to recover it. 00:38:33.150 [2024-10-01 15:56:12.418059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.150 [2024-10-01 15:56:12.418099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.150 qpair failed and we were unable to recover it. 
00:38:33.150 [2024-10-01 15:56:12.418462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.150 [2024-10-01 15:56:12.418500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.150 qpair failed and we were unable to recover it. 00:38:33.150 [2024-10-01 15:56:12.418881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.150 [2024-10-01 15:56:12.418930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.150 qpair failed and we were unable to recover it. 00:38:33.150 [2024-10-01 15:56:12.419354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.150 [2024-10-01 15:56:12.419393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.150 qpair failed and we were unable to recover it. 00:38:33.150 [2024-10-01 15:56:12.419775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.150 [2024-10-01 15:56:12.419813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.150 qpair failed and we were unable to recover it. 00:38:33.150 [2024-10-01 15:56:12.420214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.150 [2024-10-01 15:56:12.420255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.150 qpair failed and we were unable to recover it. 
00:38:33.150 [2024-10-01 15:56:12.420534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.150 [2024-10-01 15:56:12.420575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.150 qpair failed and we were unable to recover it. 00:38:33.150 [2024-10-01 15:56:12.420843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.150 [2024-10-01 15:56:12.420883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.150 qpair failed and we were unable to recover it. 00:38:33.150 [2024-10-01 15:56:12.421125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.150 [2024-10-01 15:56:12.421165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.150 qpair failed and we were unable to recover it. 00:38:33.150 [2024-10-01 15:56:12.421547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.150 [2024-10-01 15:56:12.421586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.150 qpair failed and we were unable to recover it. 00:38:33.150 [2024-10-01 15:56:12.421843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.150 [2024-10-01 15:56:12.421884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.150 qpair failed and we were unable to recover it. 
00:38:33.150 [2024-10-01 15:56:12.422322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.150 [2024-10-01 15:56:12.422361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.150 qpair failed and we were unable to recover it. 00:38:33.150 [2024-10-01 15:56:12.422754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.150 [2024-10-01 15:56:12.422793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.150 qpair failed and we were unable to recover it. 00:38:33.150 [2024-10-01 15:56:12.423193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.150 [2024-10-01 15:56:12.423236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.150 qpair failed and we were unable to recover it. 00:38:33.150 [2024-10-01 15:56:12.423421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.150 [2024-10-01 15:56:12.423459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.150 qpair failed and we were unable to recover it. 00:38:33.150 [2024-10-01 15:56:12.423842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.150 [2024-10-01 15:56:12.423881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.150 qpair failed and we were unable to recover it. 
00:38:33.150 [2024-10-01 15:56:12.424289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.150 [2024-10-01 15:56:12.424328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.150 qpair failed and we were unable to recover it. 00:38:33.150 [2024-10-01 15:56:12.424583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.150 [2024-10-01 15:56:12.424622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.150 qpair failed and we were unable to recover it. 00:38:33.150 [2024-10-01 15:56:12.425004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.150 [2024-10-01 15:56:12.425044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.150 qpair failed and we were unable to recover it. 00:38:33.150 [2024-10-01 15:56:12.425322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.150 [2024-10-01 15:56:12.425371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.150 qpair failed and we were unable to recover it. 00:38:33.150 [2024-10-01 15:56:12.425534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.150 [2024-10-01 15:56:12.425586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.150 qpair failed and we were unable to recover it. 
00:38:33.150 [2024-10-01 15:56:12.425850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.150 [2024-10-01 15:56:12.425889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.150 qpair failed and we were unable to recover it.
[identical connect()/qpair error sequence (posix.c:1055 errno = 111, nvme_tcp.c:2399 tqpair=0x7f23a4000b90, addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it.") repeated continuously from 15:56:12.425 through 15:56:12.469]
00:38:33.153 [2024-10-01 15:56:12.470037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.153 [2024-10-01 15:56:12.470078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.153 qpair failed and we were unable to recover it. 00:38:33.153 [2024-10-01 15:56:12.470458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.153 [2024-10-01 15:56:12.470497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.153 qpair failed and we were unable to recover it. 00:38:33.153 [2024-10-01 15:56:12.470749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.153 [2024-10-01 15:56:12.470790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.153 qpair failed and we were unable to recover it. 00:38:33.153 [2024-10-01 15:56:12.470920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.153 [2024-10-01 15:56:12.470969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.153 qpair failed and we were unable to recover it. 00:38:33.153 [2024-10-01 15:56:12.471380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.153 [2024-10-01 15:56:12.471420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.153 qpair failed and we were unable to recover it. 
00:38:33.153 [2024-10-01 15:56:12.471674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.153 [2024-10-01 15:56:12.471714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.153 qpair failed and we were unable to recover it. 00:38:33.153 [2024-10-01 15:56:12.471990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.153 [2024-10-01 15:56:12.472030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.153 qpair failed and we were unable to recover it. 00:38:33.153 [2024-10-01 15:56:12.472280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.153 [2024-10-01 15:56:12.472320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.153 qpair failed and we were unable to recover it. 00:38:33.153 [2024-10-01 15:56:12.472580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.153 [2024-10-01 15:56:12.472618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.153 qpair failed and we were unable to recover it. 00:38:33.153 [2024-10-01 15:56:12.473011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.153 [2024-10-01 15:56:12.473052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.153 qpair failed and we were unable to recover it. 
00:38:33.153 [2024-10-01 15:56:12.473452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.154 [2024-10-01 15:56:12.473491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.154 qpair failed and we were unable to recover it. 00:38:33.154 [2024-10-01 15:56:12.473877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.154 [2024-10-01 15:56:12.473944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.154 qpair failed and we were unable to recover it. 00:38:33.154 [2024-10-01 15:56:12.474251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.154 [2024-10-01 15:56:12.474291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.154 qpair failed and we were unable to recover it. 00:38:33.154 [2024-10-01 15:56:12.474674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.154 [2024-10-01 15:56:12.474713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.154 qpair failed and we were unable to recover it. 00:38:33.154 [2024-10-01 15:56:12.474970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.154 [2024-10-01 15:56:12.475011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.154 qpair failed and we were unable to recover it. 
00:38:33.154 [2024-10-01 15:56:12.475301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.154 [2024-10-01 15:56:12.475342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.154 qpair failed and we were unable to recover it. 00:38:33.154 [2024-10-01 15:56:12.475691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.154 [2024-10-01 15:56:12.475730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.154 qpair failed and we were unable to recover it. 00:38:33.154 [2024-10-01 15:56:12.476095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.154 [2024-10-01 15:56:12.476135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.154 qpair failed and we were unable to recover it. 00:38:33.154 [2024-10-01 15:56:12.476519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.154 [2024-10-01 15:56:12.476559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.154 qpair failed and we were unable to recover it. 00:38:33.154 [2024-10-01 15:56:12.476780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.154 [2024-10-01 15:56:12.476819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.154 qpair failed and we were unable to recover it. 
00:38:33.154 [2024-10-01 15:56:12.477085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.154 [2024-10-01 15:56:12.477125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.154 qpair failed and we were unable to recover it. 00:38:33.154 [2024-10-01 15:56:12.477503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.154 [2024-10-01 15:56:12.477542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.154 qpair failed and we were unable to recover it. 00:38:33.154 [2024-10-01 15:56:12.477936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.154 [2024-10-01 15:56:12.477976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.154 qpair failed and we were unable to recover it. 00:38:33.154 [2024-10-01 15:56:12.478349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.154 [2024-10-01 15:56:12.478388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.154 qpair failed and we were unable to recover it. 00:38:33.154 [2024-10-01 15:56:12.478642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.154 [2024-10-01 15:56:12.478681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.154 qpair failed and we were unable to recover it. 
00:38:33.154 [2024-10-01 15:56:12.479083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.154 [2024-10-01 15:56:12.479124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.154 qpair failed and we were unable to recover it. 00:38:33.154 [2024-10-01 15:56:12.479521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.154 [2024-10-01 15:56:12.479561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.154 qpair failed and we were unable to recover it. 00:38:33.154 [2024-10-01 15:56:12.479833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.154 [2024-10-01 15:56:12.479872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.154 qpair failed and we were unable to recover it. 00:38:33.154 [2024-10-01 15:56:12.480116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.154 [2024-10-01 15:56:12.480155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.154 qpair failed and we were unable to recover it. 00:38:33.154 [2024-10-01 15:56:12.480513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.154 [2024-10-01 15:56:12.480552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.154 qpair failed and we were unable to recover it. 
00:38:33.154 [2024-10-01 15:56:12.480684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.154 [2024-10-01 15:56:12.480738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.154 qpair failed and we were unable to recover it. 00:38:33.154 [2024-10-01 15:56:12.481105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.154 [2024-10-01 15:56:12.481146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.154 qpair failed and we were unable to recover it. 00:38:33.154 [2024-10-01 15:56:12.481553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.154 [2024-10-01 15:56:12.481594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.154 qpair failed and we were unable to recover it. 00:38:33.154 [2024-10-01 15:56:12.481849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.154 [2024-10-01 15:56:12.481888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.154 qpair failed and we were unable to recover it. 00:38:33.154 [2024-10-01 15:56:12.482297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.154 [2024-10-01 15:56:12.482337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.154 qpair failed and we were unable to recover it. 
00:38:33.154 [2024-10-01 15:56:12.482577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.154 [2024-10-01 15:56:12.482616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.154 qpair failed and we were unable to recover it. 00:38:33.154 [2024-10-01 15:56:12.482974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.154 [2024-10-01 15:56:12.483014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.154 qpair failed and we were unable to recover it. 00:38:33.154 [2024-10-01 15:56:12.483249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.154 [2024-10-01 15:56:12.483288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.154 qpair failed and we were unable to recover it. 00:38:33.154 [2024-10-01 15:56:12.483569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.154 [2024-10-01 15:56:12.483610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.154 qpair failed and we were unable to recover it. 00:38:33.154 [2024-10-01 15:56:12.484019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.154 [2024-10-01 15:56:12.484059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.154 qpair failed and we were unable to recover it. 
00:38:33.154 [2024-10-01 15:56:12.484465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.154 [2024-10-01 15:56:12.484505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.154 qpair failed and we were unable to recover it. 00:38:33.154 [2024-10-01 15:56:12.484885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.154 [2024-10-01 15:56:12.484940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.154 qpair failed and we were unable to recover it. 00:38:33.154 [2024-10-01 15:56:12.485208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.154 [2024-10-01 15:56:12.485248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.154 qpair failed and we were unable to recover it. 00:38:33.154 [2024-10-01 15:56:12.485533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.155 [2024-10-01 15:56:12.485571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.155 qpair failed and we were unable to recover it. 00:38:33.155 [2024-10-01 15:56:12.485948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.155 [2024-10-01 15:56:12.485989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.155 qpair failed and we were unable to recover it. 
00:38:33.155 [2024-10-01 15:56:12.486396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.155 [2024-10-01 15:56:12.486436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.155 qpair failed and we were unable to recover it. 00:38:33.155 [2024-10-01 15:56:12.486694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.155 [2024-10-01 15:56:12.486733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.155 qpair failed and we were unable to recover it. 00:38:33.155 [2024-10-01 15:56:12.487125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.155 [2024-10-01 15:56:12.487166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.155 qpair failed and we were unable to recover it. 00:38:33.155 [2024-10-01 15:56:12.487299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.155 [2024-10-01 15:56:12.487345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.155 qpair failed and we were unable to recover it. 00:38:33.155 [2024-10-01 15:56:12.487717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.155 [2024-10-01 15:56:12.487757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.155 qpair failed and we were unable to recover it. 
00:38:33.155 [2024-10-01 15:56:12.488026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.155 [2024-10-01 15:56:12.488067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.155 qpair failed and we were unable to recover it. 00:38:33.155 [2024-10-01 15:56:12.488554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.155 [2024-10-01 15:56:12.488594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.155 qpair failed and we were unable to recover it. 00:38:33.155 [2024-10-01 15:56:12.489058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.155 [2024-10-01 15:56:12.489100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.155 qpair failed and we were unable to recover it. 00:38:33.155 [2024-10-01 15:56:12.489393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.155 [2024-10-01 15:56:12.489434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.155 qpair failed and we were unable to recover it. 00:38:33.155 [2024-10-01 15:56:12.489823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.155 [2024-10-01 15:56:12.489862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.155 qpair failed and we were unable to recover it. 
00:38:33.155 [2024-10-01 15:56:12.490256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.155 [2024-10-01 15:56:12.490295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.155 qpair failed and we were unable to recover it. 00:38:33.155 [2024-10-01 15:56:12.490688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.155 [2024-10-01 15:56:12.490727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.155 qpair failed and we were unable to recover it. 00:38:33.155 [2024-10-01 15:56:12.491002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.155 [2024-10-01 15:56:12.491042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.155 qpair failed and we were unable to recover it. 00:38:33.155 [2024-10-01 15:56:12.491305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.155 [2024-10-01 15:56:12.491345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.155 qpair failed and we were unable to recover it. 00:38:33.155 [2024-10-01 15:56:12.491712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.155 [2024-10-01 15:56:12.491751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.155 qpair failed and we were unable to recover it. 
00:38:33.155 [2024-10-01 15:56:12.492101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.155 [2024-10-01 15:56:12.492141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.155 qpair failed and we were unable to recover it. 00:38:33.155 [2024-10-01 15:56:12.492373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.155 [2024-10-01 15:56:12.492412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.155 qpair failed and we were unable to recover it. 00:38:33.155 [2024-10-01 15:56:12.492851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.155 [2024-10-01 15:56:12.492890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.155 qpair failed and we were unable to recover it. 00:38:33.155 [2024-10-01 15:56:12.493308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.155 [2024-10-01 15:56:12.493347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.155 qpair failed and we were unable to recover it. 00:38:33.155 [2024-10-01 15:56:12.493742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.155 [2024-10-01 15:56:12.493780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.155 qpair failed and we were unable to recover it. 
00:38:33.155 [2024-10-01 15:56:12.494049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.155 [2024-10-01 15:56:12.494091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.155 qpair failed and we were unable to recover it. 00:38:33.155 [2024-10-01 15:56:12.494350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.155 [2024-10-01 15:56:12.494389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.155 qpair failed and we were unable to recover it. 00:38:33.155 [2024-10-01 15:56:12.494647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.155 [2024-10-01 15:56:12.494686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.155 qpair failed and we were unable to recover it. 00:38:33.155 [2024-10-01 15:56:12.494944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.155 [2024-10-01 15:56:12.494986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.155 qpair failed and we were unable to recover it. 00:38:33.155 [2024-10-01 15:56:12.495267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.155 [2024-10-01 15:56:12.495308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.155 qpair failed and we were unable to recover it. 
00:38:33.155 [2024-10-01 15:56:12.495559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.155 [2024-10-01 15:56:12.495605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.155 qpair failed and we were unable to recover it. 00:38:33.155 [2024-10-01 15:56:12.495985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.155 [2024-10-01 15:56:12.496024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.155 qpair failed and we were unable to recover it. 00:38:33.155 [2024-10-01 15:56:12.496383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.155 [2024-10-01 15:56:12.496424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.155 qpair failed and we were unable to recover it. 00:38:33.155 [2024-10-01 15:56:12.496818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.155 [2024-10-01 15:56:12.496857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.155 qpair failed and we were unable to recover it. 00:38:33.155 [2024-10-01 15:56:12.497255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.155 [2024-10-01 15:56:12.497294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.155 qpair failed and we were unable to recover it. 
00:38:33.155 [2024-10-01 15:56:12.497667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.155 [2024-10-01 15:56:12.497705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.155 qpair failed and we were unable to recover it. 00:38:33.155 [2024-10-01 15:56:12.497974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.155 [2024-10-01 15:56:12.498014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.155 qpair failed and we were unable to recover it. 00:38:33.155 [2024-10-01 15:56:12.498427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.155 [2024-10-01 15:56:12.498465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.155 qpair failed and we were unable to recover it. 00:38:33.155 [2024-10-01 15:56:12.498832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.155 [2024-10-01 15:56:12.498872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.155 qpair failed and we were unable to recover it. 00:38:33.155 [2024-10-01 15:56:12.499263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.155 [2024-10-01 15:56:12.499304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.155 qpair failed and we were unable to recover it. 
00:38:33.155 [2024-10-01 15:56:12.499747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.155 [2024-10-01 15:56:12.499785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.155 qpair failed and we were unable to recover it. 00:38:33.155 [2024-10-01 15:56:12.500216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.155 [2024-10-01 15:56:12.500258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.155 qpair failed and we were unable to recover it. 00:38:33.156 [2024-10-01 15:56:12.500694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.156 [2024-10-01 15:56:12.500734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.156 qpair failed and we were unable to recover it. 00:38:33.156 [2024-10-01 15:56:12.501141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.156 [2024-10-01 15:56:12.501180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.156 qpair failed and we were unable to recover it. 00:38:33.156 [2024-10-01 15:56:12.501572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.156 [2024-10-01 15:56:12.501612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.156 qpair failed and we were unable to recover it. 
00:38:33.156 [2024-10-01 15:56:12.501959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.156 [2024-10-01 15:56:12.501999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.156 qpair failed and we were unable to recover it. 00:38:33.156 [2024-10-01 15:56:12.502391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.156 [2024-10-01 15:56:12.502430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.156 qpair failed and we were unable to recover it. 00:38:33.156 [2024-10-01 15:56:12.502686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.156 [2024-10-01 15:56:12.502727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.156 qpair failed and we were unable to recover it. 00:38:33.156 [2024-10-01 15:56:12.503065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.156 [2024-10-01 15:56:12.503105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.156 qpair failed and we were unable to recover it. 00:38:33.156 [2024-10-01 15:56:12.503459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.156 [2024-10-01 15:56:12.503498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.156 qpair failed and we were unable to recover it. 
00:38:33.156 [2024-10-01 15:56:12.503883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.156 [2024-10-01 15:56:12.503935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.156 qpair failed and we were unable to recover it. 00:38:33.156 [2024-10-01 15:56:12.504356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.156 [2024-10-01 15:56:12.504395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.156 qpair failed and we were unable to recover it. 00:38:33.156 [2024-10-01 15:56:12.504744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.156 [2024-10-01 15:56:12.504783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.156 qpair failed and we were unable to recover it. 00:38:33.156 [2024-10-01 15:56:12.505175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.156 [2024-10-01 15:56:12.505217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.156 qpair failed and we were unable to recover it. 00:38:33.156 [2024-10-01 15:56:12.505602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.156 [2024-10-01 15:56:12.505641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.156 qpair failed and we were unable to recover it. 
00:38:33.156 [2024-10-01 15:56:12.506026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.156 [2024-10-01 15:56:12.506066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.156 qpair failed and we were unable to recover it. 00:38:33.156 [2024-10-01 15:56:12.506308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.156 [2024-10-01 15:56:12.506348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.156 qpair failed and we were unable to recover it. 00:38:33.156 [2024-10-01 15:56:12.506708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.156 [2024-10-01 15:56:12.506746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.156 qpair failed and we were unable to recover it. 00:38:33.156 [2024-10-01 15:56:12.507123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.156 [2024-10-01 15:56:12.507163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.156 qpair failed and we were unable to recover it. 00:38:33.156 [2024-10-01 15:56:12.507413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.156 [2024-10-01 15:56:12.507453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.156 qpair failed and we were unable to recover it. 
00:38:33.156 [2024-10-01 15:56:12.507858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.156 [2024-10-01 15:56:12.507906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.156 qpair failed and we were unable to recover it. 00:38:33.156 [2024-10-01 15:56:12.508289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.156 [2024-10-01 15:56:12.508328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.156 qpair failed and we were unable to recover it. 00:38:33.156 [2024-10-01 15:56:12.508721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.156 [2024-10-01 15:56:12.508761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.156 qpair failed and we were unable to recover it. 00:38:33.156 [2024-10-01 15:56:12.509130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.156 [2024-10-01 15:56:12.509170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.156 qpair failed and we were unable to recover it. 00:38:33.156 [2024-10-01 15:56:12.509562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.156 [2024-10-01 15:56:12.509600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.156 qpair failed and we were unable to recover it. 
00:38:33.156 [2024-10-01 15:56:12.509865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.156 [2024-10-01 15:56:12.509937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.156 qpair failed and we were unable to recover it. 00:38:33.156 [2024-10-01 15:56:12.510323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.156 [2024-10-01 15:56:12.510364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.156 qpair failed and we were unable to recover it. 00:38:33.156 [2024-10-01 15:56:12.510753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.156 [2024-10-01 15:56:12.510795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.156 qpair failed and we were unable to recover it. 00:38:33.156 [2024-10-01 15:56:12.511195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.156 [2024-10-01 15:56:12.511236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.156 qpair failed and we were unable to recover it. 00:38:33.156 [2024-10-01 15:56:12.511454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.156 [2024-10-01 15:56:12.511493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.156 qpair failed and we were unable to recover it. 
00:38:33.156 [2024-10-01 15:56:12.511754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.156 [2024-10-01 15:56:12.511800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.156 qpair failed and we were unable to recover it. 00:38:33.156 [2024-10-01 15:56:12.512054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.156 [2024-10-01 15:56:12.512095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.156 qpair failed and we were unable to recover it. 00:38:33.156 [2024-10-01 15:56:12.512459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.156 [2024-10-01 15:56:12.512499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.156 qpair failed and we were unable to recover it. 00:38:33.156 [2024-10-01 15:56:12.512871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.156 [2024-10-01 15:56:12.512921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.156 qpair failed and we were unable to recover it. 00:38:33.156 [2024-10-01 15:56:12.513311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.156 [2024-10-01 15:56:12.513350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.156 qpair failed and we were unable to recover it. 
00:38:33.156 [2024-10-01 15:56:12.513728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.156 [2024-10-01 15:56:12.513767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.156 qpair failed and we were unable to recover it. 00:38:33.156 [2024-10-01 15:56:12.514021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.156 [2024-10-01 15:56:12.514062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.156 qpair failed and we were unable to recover it. 00:38:33.156 [2024-10-01 15:56:12.514456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.156 [2024-10-01 15:56:12.514494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.156 qpair failed and we were unable to recover it. 00:38:33.156 [2024-10-01 15:56:12.514874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.156 [2024-10-01 15:56:12.514924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.156 qpair failed and we were unable to recover it. 00:38:33.156 [2024-10-01 15:56:12.515159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.156 [2024-10-01 15:56:12.515199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.156 qpair failed and we were unable to recover it. 
00:38:33.156 [2024-10-01 15:56:12.515351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.157 [2024-10-01 15:56:12.515400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.157 qpair failed and we were unable to recover it. 00:38:33.157 [2024-10-01 15:56:12.515833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.157 [2024-10-01 15:56:12.515873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.157 qpair failed and we were unable to recover it. 00:38:33.157 [2024-10-01 15:56:12.516171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.157 [2024-10-01 15:56:12.516212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.157 qpair failed and we were unable to recover it. 00:38:33.157 [2024-10-01 15:56:12.516617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.157 [2024-10-01 15:56:12.516657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.157 qpair failed and we were unable to recover it. 00:38:33.157 [2024-10-01 15:56:12.517042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.157 [2024-10-01 15:56:12.517082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.157 qpair failed and we were unable to recover it. 
00:38:33.157 [2024-10-01 15:56:12.517286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.157 [2024-10-01 15:56:12.517325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.157 qpair failed and we were unable to recover it. 00:38:33.157 [2024-10-01 15:56:12.517583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.157 [2024-10-01 15:56:12.517623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.157 qpair failed and we were unable to recover it. 00:38:33.157 [2024-10-01 15:56:12.517992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.157 [2024-10-01 15:56:12.518033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.157 qpair failed and we were unable to recover it. 00:38:33.157 [2024-10-01 15:56:12.518416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.157 [2024-10-01 15:56:12.518454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.157 qpair failed and we were unable to recover it. 00:38:33.157 [2024-10-01 15:56:12.518831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.157 [2024-10-01 15:56:12.518869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.157 qpair failed and we were unable to recover it. 
00:38:33.157 [2024-10-01 15:56:12.519126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.157 [2024-10-01 15:56:12.519165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.157 qpair failed and we were unable to recover it. 00:38:33.157 [2024-10-01 15:56:12.519512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.157 [2024-10-01 15:56:12.519551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.157 qpair failed and we were unable to recover it. 00:38:33.157 [2024-10-01 15:56:12.519919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.157 [2024-10-01 15:56:12.519958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.157 qpair failed and we were unable to recover it. 00:38:33.157 [2024-10-01 15:56:12.520330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.157 [2024-10-01 15:56:12.520369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.157 qpair failed and we were unable to recover it. 00:38:33.157 [2024-10-01 15:56:12.520752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.157 [2024-10-01 15:56:12.520790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.157 qpair failed and we were unable to recover it. 
00:38:33.157 [2024-10-01 15:56:12.521230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.157 [2024-10-01 15:56:12.521270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.157 qpair failed and we were unable to recover it. 00:38:33.157 [2024-10-01 15:56:12.521652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.157 [2024-10-01 15:56:12.521691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.157 qpair failed and we were unable to recover it. 00:38:33.157 [2024-10-01 15:56:12.521971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.157 [2024-10-01 15:56:12.522012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.157 qpair failed and we were unable to recover it. 00:38:33.157 [2024-10-01 15:56:12.522384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.157 [2024-10-01 15:56:12.522423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.157 qpair failed and we were unable to recover it. 00:38:33.157 [2024-10-01 15:56:12.522677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.157 [2024-10-01 15:56:12.522717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.157 qpair failed and we were unable to recover it. 
00:38:33.157 [2024-10-01 15:56:12.523087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.157 [2024-10-01 15:56:12.523128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.157 qpair failed and we were unable to recover it. 00:38:33.157 [2024-10-01 15:56:12.523476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.157 [2024-10-01 15:56:12.523517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.157 qpair failed and we were unable to recover it. 00:38:33.157 [2024-10-01 15:56:12.523846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.157 [2024-10-01 15:56:12.523884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.157 qpair failed and we were unable to recover it. 00:38:33.157 [2024-10-01 15:56:12.524283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.157 [2024-10-01 15:56:12.524323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.157 qpair failed and we were unable to recover it. 00:38:33.157 [2024-10-01 15:56:12.524772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.157 [2024-10-01 15:56:12.524812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.157 qpair failed and we were unable to recover it. 
00:38:33.157 [2024-10-01 15:56:12.525191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.157 [2024-10-01 15:56:12.525231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.157 qpair failed and we were unable to recover it. 00:38:33.157 [2024-10-01 15:56:12.525661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.157 [2024-10-01 15:56:12.525700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.157 qpair failed and we were unable to recover it. 00:38:33.157 [2024-10-01 15:56:12.526097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.157 [2024-10-01 15:56:12.526138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.157 qpair failed and we were unable to recover it. 00:38:33.157 [2024-10-01 15:56:12.526529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.157 [2024-10-01 15:56:12.526567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.157 qpair failed and we were unable to recover it. 00:38:33.157 [2024-10-01 15:56:12.526931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.157 [2024-10-01 15:56:12.526971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.157 qpair failed and we were unable to recover it. 
00:38:33.157 [2024-10-01 15:56:12.527351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.157 [2024-10-01 15:56:12.527397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.157 qpair failed and we were unable to recover it. 00:38:33.157 [2024-10-01 15:56:12.527781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.157 [2024-10-01 15:56:12.527820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.157 qpair failed and we were unable to recover it. 00:38:33.157 [2024-10-01 15:56:12.528211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.157 [2024-10-01 15:56:12.528251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.157 qpair failed and we were unable to recover it. 00:38:33.157 [2024-10-01 15:56:12.528638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.157 [2024-10-01 15:56:12.528678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.157 qpair failed and we were unable to recover it. 00:38:33.157 [2024-10-01 15:56:12.529066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.157 [2024-10-01 15:56:12.529105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.157 qpair failed and we were unable to recover it. 
00:38:33.157 [2024-10-01 15:56:12.529497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.157 [2024-10-01 15:56:12.529535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.157 qpair failed and we were unable to recover it. 00:38:33.157 [2024-10-01 15:56:12.529776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.157 [2024-10-01 15:56:12.529814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.157 qpair failed and we were unable to recover it. 00:38:33.157 [2024-10-01 15:56:12.530066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.157 [2024-10-01 15:56:12.530106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.157 qpair failed and we were unable to recover it. 00:38:33.157 [2024-10-01 15:56:12.530496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.157 [2024-10-01 15:56:12.530535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.157 qpair failed and we were unable to recover it. 00:38:33.158 [2024-10-01 15:56:12.530794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.158 [2024-10-01 15:56:12.530832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.158 qpair failed and we were unable to recover it. 
00:38:33.158 [2024-10-01 15:56:12.531103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.158 [2024-10-01 15:56:12.531144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.158 qpair failed and we were unable to recover it. 00:38:33.158 [2024-10-01 15:56:12.531387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.158 [2024-10-01 15:56:12.531426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.158 qpair failed and we were unable to recover it. 00:38:33.158 [2024-10-01 15:56:12.531681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.158 [2024-10-01 15:56:12.531720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.158 qpair failed and we were unable to recover it. 00:38:33.158 [2024-10-01 15:56:12.532101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.158 [2024-10-01 15:56:12.532142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.158 qpair failed and we were unable to recover it. 00:38:33.158 [2024-10-01 15:56:12.532508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.158 [2024-10-01 15:56:12.532549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.158 qpair failed and we were unable to recover it. 
00:38:33.158 [2024-10-01 15:56:12.532945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.158 [2024-10-01 15:56:12.532984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.158 qpair failed and we were unable to recover it. 00:38:33.158 [2024-10-01 15:56:12.533227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.158 [2024-10-01 15:56:12.533267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.158 qpair failed and we were unable to recover it. 00:38:33.158 [2024-10-01 15:56:12.533527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.158 [2024-10-01 15:56:12.533567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.158 qpair failed and we were unable to recover it. 00:38:33.158 [2024-10-01 15:56:12.533858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.158 [2024-10-01 15:56:12.533905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.158 qpair failed and we were unable to recover it. 00:38:33.158 [2024-10-01 15:56:12.534166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.158 [2024-10-01 15:56:12.534205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.158 qpair failed and we were unable to recover it. 
00:38:33.158 [2024-10-01 15:56:12.534571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.158 [2024-10-01 15:56:12.534610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.158 qpair failed and we were unable to recover it. 00:38:33.158 [2024-10-01 15:56:12.534997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.158 [2024-10-01 15:56:12.535036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.158 qpair failed and we were unable to recover it. 00:38:33.158 [2024-10-01 15:56:12.535418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.158 [2024-10-01 15:56:12.535456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.158 qpair failed and we were unable to recover it. 00:38:33.158 [2024-10-01 15:56:12.535712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.158 [2024-10-01 15:56:12.535751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.158 qpair failed and we were unable to recover it. 00:38:33.158 [2024-10-01 15:56:12.536124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.158 [2024-10-01 15:56:12.536165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.158 qpair failed and we were unable to recover it. 
00:38:33.160 15:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:38:33.160 15:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:38:33.160 15:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:38:33.160 15:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:33.160 15:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:33.429 [2024-10-01 15:56:12.576987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.429 [2024-10-01 15:56:12.577024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.429 qpair failed and we were unable to recover it. 00:38:33.429 [2024-10-01 15:56:12.577387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.429 [2024-10-01 15:56:12.577423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.429 qpair failed and we were unable to recover it. 00:38:33.429 [2024-10-01 15:56:12.577804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.429 [2024-10-01 15:56:12.577840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.429 qpair failed and we were unable to recover it. 00:38:33.429 [2024-10-01 15:56:12.578114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.429 [2024-10-01 15:56:12.578151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.429 qpair failed and we were unable to recover it. 00:38:33.429 [2024-10-01 15:56:12.578530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.429 [2024-10-01 15:56:12.578565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.429 qpair failed and we were unable to recover it. 
00:38:33.429 [2024-10-01 15:56:12.578824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.429 [2024-10-01 15:56:12.578863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.429 qpair failed and we were unable to recover it. 00:38:33.429 [2024-10-01 15:56:12.579273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.429 [2024-10-01 15:56:12.579311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.429 qpair failed and we were unable to recover it. 00:38:33.429 [2024-10-01 15:56:12.579727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.429 [2024-10-01 15:56:12.579763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.429 qpair failed and we were unable to recover it. 00:38:33.429 [2024-10-01 15:56:12.580159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.429 [2024-10-01 15:56:12.580196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.429 qpair failed and we were unable to recover it. 00:38:33.429 [2024-10-01 15:56:12.580482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.429 [2024-10-01 15:56:12.580519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.429 qpair failed and we were unable to recover it. 
00:38:33.429 [2024-10-01 15:56:12.580743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.429 [2024-10-01 15:56:12.580777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.429 qpair failed and we were unable to recover it. 00:38:33.429 [2024-10-01 15:56:12.581137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.429 [2024-10-01 15:56:12.581174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.429 qpair failed and we were unable to recover it. 00:38:33.429 [2024-10-01 15:56:12.581439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.429 [2024-10-01 15:56:12.581475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.429 qpair failed and we were unable to recover it. 00:38:33.429 [2024-10-01 15:56:12.581872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.429 [2024-10-01 15:56:12.581916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.429 qpair failed and we were unable to recover it. 00:38:33.429 [2024-10-01 15:56:12.582129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.429 [2024-10-01 15:56:12.582166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.429 qpair failed and we were unable to recover it. 
00:38:33.429 [2024-10-01 15:56:12.582564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.429 [2024-10-01 15:56:12.582601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.429 qpair failed and we were unable to recover it. 00:38:33.429 [2024-10-01 15:56:12.582998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.429 [2024-10-01 15:56:12.583034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.429 qpair failed and we were unable to recover it. 00:38:33.429 [2024-10-01 15:56:12.583435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.429 [2024-10-01 15:56:12.583471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.429 qpair failed and we were unable to recover it. 00:38:33.429 [2024-10-01 15:56:12.583688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.429 [2024-10-01 15:56:12.583724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.429 qpair failed and we were unable to recover it. 00:38:33.429 [2024-10-01 15:56:12.584119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.429 [2024-10-01 15:56:12.584156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.429 qpair failed and we were unable to recover it. 
00:38:33.429 [2024-10-01 15:56:12.584524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.429 [2024-10-01 15:56:12.584560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.429 qpair failed and we were unable to recover it. 00:38:33.429 [2024-10-01 15:56:12.584948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.429 [2024-10-01 15:56:12.584984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.429 qpair failed and we were unable to recover it. 00:38:33.429 [2024-10-01 15:56:12.585370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.429 [2024-10-01 15:56:12.585406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.429 qpair failed and we were unable to recover it. 00:38:33.429 [2024-10-01 15:56:12.585652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.429 [2024-10-01 15:56:12.585687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.429 qpair failed and we were unable to recover it. 00:38:33.429 [2024-10-01 15:56:12.586065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.429 [2024-10-01 15:56:12.586101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.430 qpair failed and we were unable to recover it. 
00:38:33.430 [2024-10-01 15:56:12.586326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.430 [2024-10-01 15:56:12.586368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.430 qpair failed and we were unable to recover it. 00:38:33.430 [2024-10-01 15:56:12.586747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.430 [2024-10-01 15:56:12.586782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.430 qpair failed and we were unable to recover it. 00:38:33.430 [2024-10-01 15:56:12.587029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.430 [2024-10-01 15:56:12.587065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.430 qpair failed and we were unable to recover it. 00:38:33.430 [2024-10-01 15:56:12.587285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.430 [2024-10-01 15:56:12.587321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.430 qpair failed and we were unable to recover it. 00:38:33.430 [2024-10-01 15:56:12.587541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.430 [2024-10-01 15:56:12.587576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.430 qpair failed and we were unable to recover it. 
00:38:33.430 [2024-10-01 15:56:12.587954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.430 [2024-10-01 15:56:12.587991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.430 qpair failed and we were unable to recover it. 00:38:33.430 [2024-10-01 15:56:12.588228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.430 [2024-10-01 15:56:12.588263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.430 qpair failed and we were unable to recover it. 00:38:33.430 [2024-10-01 15:56:12.588500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.430 [2024-10-01 15:56:12.588535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.430 qpair failed and we were unable to recover it. 00:38:33.430 [2024-10-01 15:56:12.588791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.430 [2024-10-01 15:56:12.588826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.430 qpair failed and we were unable to recover it. 00:38:33.430 [2024-10-01 15:56:12.589258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.430 [2024-10-01 15:56:12.589296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.430 qpair failed and we were unable to recover it. 
00:38:33.430 [2024-10-01 15:56:12.589688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.430 [2024-10-01 15:56:12.589725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.430 qpair failed and we were unable to recover it. 00:38:33.430 [2024-10-01 15:56:12.590108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.430 [2024-10-01 15:56:12.590145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.430 qpair failed and we were unable to recover it. 00:38:33.430 [2024-10-01 15:56:12.590538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.430 [2024-10-01 15:56:12.590573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.430 qpair failed and we were unable to recover it. 00:38:33.430 [2024-10-01 15:56:12.590843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.430 [2024-10-01 15:56:12.590880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.430 qpair failed and we were unable to recover it. 00:38:33.430 [2024-10-01 15:56:12.591310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.430 [2024-10-01 15:56:12.591346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.430 qpair failed and we were unable to recover it. 
00:38:33.430 [2024-10-01 15:56:12.591460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.430 [2024-10-01 15:56:12.591494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.430 qpair failed and we were unable to recover it. 00:38:33.430 [2024-10-01 15:56:12.591926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.430 [2024-10-01 15:56:12.591964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.430 qpair failed and we were unable to recover it. 00:38:33.430 [2024-10-01 15:56:12.592325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.430 [2024-10-01 15:56:12.592361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.430 qpair failed and we were unable to recover it. 00:38:33.430 [2024-10-01 15:56:12.592740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.430 [2024-10-01 15:56:12.592775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.430 qpair failed and we were unable to recover it. 00:38:33.430 [2024-10-01 15:56:12.593156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.430 [2024-10-01 15:56:12.593192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.430 qpair failed and we were unable to recover it. 
00:38:33.430 [2024-10-01 15:56:12.593540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.430 [2024-10-01 15:56:12.593577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.430 qpair failed and we were unable to recover it. 00:38:33.430 [2024-10-01 15:56:12.593829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.430 [2024-10-01 15:56:12.593864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.430 qpair failed and we were unable to recover it. 00:38:33.430 [2024-10-01 15:56:12.594156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.430 [2024-10-01 15:56:12.594192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.430 qpair failed and we were unable to recover it. 00:38:33.430 [2024-10-01 15:56:12.594538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.430 [2024-10-01 15:56:12.594573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.430 qpair failed and we were unable to recover it. 00:38:33.430 [2024-10-01 15:56:12.594930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.430 [2024-10-01 15:56:12.594967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.430 qpair failed and we were unable to recover it. 
00:38:33.430 [2024-10-01 15:56:12.595352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.430 [2024-10-01 15:56:12.595388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.430 qpair failed and we were unable to recover it. 00:38:33.430 [2024-10-01 15:56:12.595780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.430 [2024-10-01 15:56:12.595815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.430 qpair failed and we were unable to recover it. 00:38:33.430 [2024-10-01 15:56:12.596098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.430 [2024-10-01 15:56:12.596136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.430 qpair failed and we were unable to recover it. 00:38:33.430 [2024-10-01 15:56:12.596520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.430 [2024-10-01 15:56:12.596555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.430 qpair failed and we were unable to recover it. 00:38:33.430 [2024-10-01 15:56:12.596934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.430 [2024-10-01 15:56:12.596970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.430 qpair failed and we were unable to recover it. 
00:38:33.430 [2024-10-01 15:56:12.597206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.430 [2024-10-01 15:56:12.597243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.430 qpair failed and we were unable to recover it. 00:38:33.430 [2024-10-01 15:56:12.597625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.430 [2024-10-01 15:56:12.597660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.430 qpair failed and we were unable to recover it. 00:38:33.430 [2024-10-01 15:56:12.598040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.430 [2024-10-01 15:56:12.598076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.430 qpair failed and we were unable to recover it. 00:38:33.430 [2024-10-01 15:56:12.598330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.430 [2024-10-01 15:56:12.598369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.430 qpair failed and we were unable to recover it. 00:38:33.430 [2024-10-01 15:56:12.598765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.430 [2024-10-01 15:56:12.598802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.430 qpair failed and we were unable to recover it. 
00:38:33.430 [2024-10-01 15:56:12.599069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.430 [2024-10-01 15:56:12.599106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.430 qpair failed and we were unable to recover it. 00:38:33.430 [2024-10-01 15:56:12.599349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.430 [2024-10-01 15:56:12.599384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.430 qpair failed and we were unable to recover it. 00:38:33.430 [2024-10-01 15:56:12.599775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.430 [2024-10-01 15:56:12.599811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.430 qpair failed and we were unable to recover it. 00:38:33.431 [2024-10-01 15:56:12.600209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.431 [2024-10-01 15:56:12.600247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.431 qpair failed and we were unable to recover it. 00:38:33.431 [2024-10-01 15:56:12.600630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.431 [2024-10-01 15:56:12.600666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.431 qpair failed and we were unable to recover it. 
00:38:33.431 [2024-10-01 15:56:12.601044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.431 [2024-10-01 15:56:12.601087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.431 qpair failed and we were unable to recover it. 00:38:33.431 [2024-10-01 15:56:12.601374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.431 [2024-10-01 15:56:12.601410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.431 qpair failed and we were unable to recover it. 00:38:33.431 [2024-10-01 15:56:12.601807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.431 [2024-10-01 15:56:12.601842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.431 qpair failed and we were unable to recover it. 00:38:33.431 [2024-10-01 15:56:12.602220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.431 [2024-10-01 15:56:12.602256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.431 qpair failed and we were unable to recover it. 00:38:33.431 [2024-10-01 15:56:12.602426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.431 [2024-10-01 15:56:12.602464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.431 qpair failed and we were unable to recover it. 
00:38:33.431 [2024-10-01 15:56:12.602580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.431 [2024-10-01 15:56:12.602615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.431 qpair failed and we were unable to recover it. 00:38:33.431 [2024-10-01 15:56:12.603021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.431 [2024-10-01 15:56:12.603059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.431 qpair failed and we were unable to recover it. 00:38:33.431 [2024-10-01 15:56:12.603446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.431 [2024-10-01 15:56:12.603483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.431 qpair failed and we were unable to recover it. 00:38:33.431 [2024-10-01 15:56:12.603860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.431 [2024-10-01 15:56:12.603925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.431 qpair failed and we were unable to recover it. 00:38:33.431 [2024-10-01 15:56:12.604301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.431 [2024-10-01 15:56:12.604339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.431 qpair failed and we were unable to recover it. 
00:38:33.431 [2024-10-01 15:56:12.604723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.431 [2024-10-01 15:56:12.604760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.431 qpair failed and we were unable to recover it. 00:38:33.431 [2024-10-01 15:56:12.605130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.431 [2024-10-01 15:56:12.605167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.431 qpair failed and we were unable to recover it. 00:38:33.431 [2024-10-01 15:56:12.605593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.431 [2024-10-01 15:56:12.605629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.431 qpair failed and we were unable to recover it. 00:38:33.431 [2024-10-01 15:56:12.606008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.431 [2024-10-01 15:56:12.606044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.431 qpair failed and we were unable to recover it. 00:38:33.431 [2024-10-01 15:56:12.606433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.431 [2024-10-01 15:56:12.606469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.431 qpair failed and we were unable to recover it. 
00:38:33.431 [2024-10-01 15:56:12.606853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.431 [2024-10-01 15:56:12.606890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.431 qpair failed and we were unable to recover it. 00:38:33.431 [2024-10-01 15:56:12.607167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.431 [2024-10-01 15:56:12.607203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.431 qpair failed and we were unable to recover it. 00:38:33.431 [2024-10-01 15:56:12.607575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.431 [2024-10-01 15:56:12.607611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.431 qpair failed and we were unable to recover it. 00:38:33.431 [2024-10-01 15:56:12.607992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.431 [2024-10-01 15:56:12.608028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.431 qpair failed and we were unable to recover it. 00:38:33.431 [2024-10-01 15:56:12.608267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.431 [2024-10-01 15:56:12.608303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.431 qpair failed and we were unable to recover it. 
00:38:33.431 15:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:38:33.431 [2024-10-01 15:56:12.608680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.431 [2024-10-01 15:56:12.608716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:33.431 qpair failed and we were unable to recover it.
00:38:33.431 15:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:38:33.431 [2024-10-01 15:56:12.609030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.431 [2024-10-01 15:56:12.609067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:33.431 qpair failed and we were unable to recover it.
00:38:33.431 15:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:33.431 [2024-10-01 15:56:12.609355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.431 [2024-10-01 15:56:12.609392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:33.431 qpair failed and we were unable to recover it.
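The trace above installs a cleanup trap (`trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT`) so diagnostics and teardown run even if the test is interrupted. A minimal, self-contained sketch of that bash pattern follows; `collect_diagnostics` and `teardown_target` are hypothetical stand-ins for `process_shm` and `nvmftestfini`:

```shell
#!/usr/bin/env bash
# Sketch of the cleanup-trap pattern from the trace above.
# collect_diagnostics / teardown_target are hypothetical stand-ins
# for the suite's process_shm / nvmftestfini helpers.
collect_diagnostics() { echo "collecting diagnostics"; }
teardown_target()     { echo "tearing down target"; }

# Run diagnostics, then teardown, on interrupt or normal exit.
# The `|| :` keeps a diagnostics failure from skipping the teardown.
trap 'collect_diagnostics || :; teardown_target' SIGINT SIGTERM EXIT

echo "test body runs here"
```

On any exit path, both functions fire in order; the `|| :` mirrors the real trap's tolerance of a failing first step.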
00:38:33.431 15:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:33.431 [2024-10-01 15:56:12.609795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.431 [2024-10-01 15:56:12.609830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:33.431 qpair failed and we were unable to recover it.
00:38:33.431 [2024-10-01 15:56:12.610177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.431 [2024-10-01 15:56:12.610213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:33.431 qpair failed and we were unable to recover it.
00:38:33.431 [2024-10-01 15:56:12.610593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.431 [2024-10-01 15:56:12.610630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:33.431 qpair failed and we were unable to recover it.
00:38:33.431 [2024-10-01 15:56:12.611000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.431 [2024-10-01 15:56:12.611036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:33.431 qpair failed and we were unable to recover it.
00:38:33.431 [2024-10-01 15:56:12.611409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.431 [2024-10-01 15:56:12.611445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:33.431 qpair failed and we were unable to recover it.
00:38:33.431 [2024-10-01 15:56:12.611838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.431 [2024-10-01 15:56:12.611874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.431 qpair failed and we were unable to recover it. 00:38:33.431 [2024-10-01 15:56:12.612182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.431 [2024-10-01 15:56:12.612219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.431 qpair failed and we were unable to recover it. 00:38:33.431 [2024-10-01 15:56:12.612490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.431 [2024-10-01 15:56:12.612525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.431 qpair failed and we were unable to recover it. 00:38:33.431 [2024-10-01 15:56:12.612882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.431 [2024-10-01 15:56:12.612931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.431 qpair failed and we were unable to recover it. 00:38:33.431 [2024-10-01 15:56:12.613189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.431 [2024-10-01 15:56:12.613224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.431 qpair failed and we were unable to recover it. 
00:38:33.431 [2024-10-01 15:56:12.613624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.431 [2024-10-01 15:56:12.613660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.431 qpair failed and we were unable to recover it. 00:38:33.431 [2024-10-01 15:56:12.613913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.432 [2024-10-01 15:56:12.613949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.432 qpair failed and we were unable to recover it. 00:38:33.432 [2024-10-01 15:56:12.614364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.432 [2024-10-01 15:56:12.614400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.432 qpair failed and we were unable to recover it. 00:38:33.432 [2024-10-01 15:56:12.614574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.432 [2024-10-01 15:56:12.614612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.432 qpair failed and we were unable to recover it. 00:38:33.432 [2024-10-01 15:56:12.614974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.432 [2024-10-01 15:56:12.615011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.432 qpair failed and we were unable to recover it. 
00:38:33.432 [2024-10-01 15:56:12.615397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.432 [2024-10-01 15:56:12.615439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.432 qpair failed and we were unable to recover it. 00:38:33.432 [2024-10-01 15:56:12.615790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.432 [2024-10-01 15:56:12.615825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.432 qpair failed and we were unable to recover it. 00:38:33.432 [2024-10-01 15:56:12.616084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.432 [2024-10-01 15:56:12.616124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.432 qpair failed and we were unable to recover it. 00:38:33.432 [2024-10-01 15:56:12.616368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.432 [2024-10-01 15:56:12.616404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.432 qpair failed and we were unable to recover it. 00:38:33.432 [2024-10-01 15:56:12.616795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.432 [2024-10-01 15:56:12.616830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.432 qpair failed and we were unable to recover it. 
00:38:33.432 [2024-10-01 15:56:12.617223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.432 [2024-10-01 15:56:12.617259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.432 qpair failed and we were unable to recover it. 00:38:33.432 [2024-10-01 15:56:12.617398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.432 [2024-10-01 15:56:12.617432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.432 qpair failed and we were unable to recover it. 00:38:33.432 [2024-10-01 15:56:12.617691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.432 [2024-10-01 15:56:12.617728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.432 qpair failed and we were unable to recover it. 00:38:33.432 [2024-10-01 15:56:12.617961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.432 [2024-10-01 15:56:12.617997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.432 qpair failed and we were unable to recover it. 00:38:33.432 [2024-10-01 15:56:12.618281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.432 [2024-10-01 15:56:12.618316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.432 qpair failed and we were unable to recover it. 
00:38:33.432 [2024-10-01 15:56:12.618729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.432 [2024-10-01 15:56:12.618764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.432 qpair failed and we were unable to recover it. 00:38:33.432 [2024-10-01 15:56:12.619139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.432 [2024-10-01 15:56:12.619176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.432 qpair failed and we were unable to recover it. 00:38:33.432 [2024-10-01 15:56:12.619514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.432 [2024-10-01 15:56:12.619549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.432 qpair failed and we were unable to recover it. 00:38:33.432 [2024-10-01 15:56:12.619944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.432 [2024-10-01 15:56:12.619981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.432 qpair failed and we were unable to recover it. 00:38:33.432 [2024-10-01 15:56:12.620388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.432 [2024-10-01 15:56:12.620425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.432 qpair failed and we were unable to recover it. 
00:38:33.432 [2024-10-01 15:56:12.620679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.432 [2024-10-01 15:56:12.620714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.432 qpair failed and we were unable to recover it. 00:38:33.432 [2024-10-01 15:56:12.620925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.432 [2024-10-01 15:56:12.620961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.432 qpair failed and we were unable to recover it. 00:38:33.432 [2024-10-01 15:56:12.621345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.432 [2024-10-01 15:56:12.621381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.432 qpair failed and we were unable to recover it. 00:38:33.432 [2024-10-01 15:56:12.621599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.432 [2024-10-01 15:56:12.621635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.432 qpair failed and we were unable to recover it. 00:38:33.432 [2024-10-01 15:56:12.621864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.432 [2024-10-01 15:56:12.621911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.432 qpair failed and we were unable to recover it. 
00:38:33.432 [2024-10-01 15:56:12.622301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.432 [2024-10-01 15:56:12.622336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.432 qpair failed and we were unable to recover it. 00:38:33.432 [2024-10-01 15:56:12.622727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.432 [2024-10-01 15:56:12.622762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.432 qpair failed and we were unable to recover it. 00:38:33.432 [2024-10-01 15:56:12.623016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.432 [2024-10-01 15:56:12.623053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.432 qpair failed and we were unable to recover it. 00:38:33.432 [2024-10-01 15:56:12.623464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.432 [2024-10-01 15:56:12.623500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.432 qpair failed and we were unable to recover it. 00:38:33.432 [2024-10-01 15:56:12.623883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.432 [2024-10-01 15:56:12.623929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.432 qpair failed and we were unable to recover it. 
00:38:33.432 [2024-10-01 15:56:12.624315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.432 [2024-10-01 15:56:12.624352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:33.432 qpair failed and we were unable to recover it.
00:38:33.432 [2024-10-01 15:56:12.624768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.432 [2024-10-01 15:56:12.624804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:33.432 qpair failed and we were unable to recover it.
00:38:33.432 [2024-10-01 15:56:12.625199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.432 [2024-10-01 15:56:12.625238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:33.432 qpair failed and we were unable to recover it.
00:38:33.432 Malloc0
00:38:33.432 [2024-10-01 15:56:12.625684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.432 [2024-10-01 15:56:12.625720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:33.432 qpair failed and we were unable to recover it.
00:38:33.432 15:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:33.432 [2024-10-01 15:56:12.626103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.432 [2024-10-01 15:56:12.626139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:33.432 qpair failed and we were unable to recover it.
00:38:33.432 [2024-10-01 15:56:12.626420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.432 [2024-10-01 15:56:12.626454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:33.432 qpair failed and we were unable to recover it.
00:38:33.432 15:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:38:33.432 15:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:33.432 [2024-10-01 15:56:12.626818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.432 [2024-10-01 15:56:12.626853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:33.432 qpair failed and we were unable to recover it.
00:38:33.432 15:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:33.432 [2024-10-01 15:56:12.627109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.432 [2024-10-01 15:56:12.627146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:33.432 qpair failed and we were unable to recover it.
00:38:33.432 [2024-10-01 15:56:12.627497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.433 [2024-10-01 15:56:12.627532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:33.433 qpair failed and we were unable to recover it.
00:38:33.433 [2024-10-01 15:56:12.627698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.433 [2024-10-01 15:56:12.627733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.433 qpair failed and we were unable to recover it. 00:38:33.433 [2024-10-01 15:56:12.627879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.433 [2024-10-01 15:56:12.627932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.433 qpair failed and we were unable to recover it. 00:38:33.433 [2024-10-01 15:56:12.628294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.433 [2024-10-01 15:56:12.628330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.433 qpair failed and we were unable to recover it. 00:38:33.433 [2024-10-01 15:56:12.628718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.433 [2024-10-01 15:56:12.628754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.433 qpair failed and we were unable to recover it. 00:38:33.433 [2024-10-01 15:56:12.629213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.433 [2024-10-01 15:56:12.629250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.433 qpair failed and we were unable to recover it. 
00:38:33.433 [2024-10-01 15:56:12.629638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.433 [2024-10-01 15:56:12.629675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.433 qpair failed and we were unable to recover it. 00:38:33.433 [2024-10-01 15:56:12.629919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.433 [2024-10-01 15:56:12.629956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.433 qpair failed and we were unable to recover it. 00:38:33.433 [2024-10-01 15:56:12.630364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.433 [2024-10-01 15:56:12.630399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.433 qpair failed and we were unable to recover it. 00:38:33.433 [2024-10-01 15:56:12.630779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.433 [2024-10-01 15:56:12.630815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.433 qpair failed and we were unable to recover it. 00:38:33.433 [2024-10-01 15:56:12.631202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.433 [2024-10-01 15:56:12.631239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.433 qpair failed and we were unable to recover it. 
00:38:33.433 [2024-10-01 15:56:12.631631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.433 [2024-10-01 15:56:12.631666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:33.433 qpair failed and we were unable to recover it.
00:38:33.433 [2024-10-01 15:56:12.632071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.433 [2024-10-01 15:56:12.632107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:33.433 qpair failed and we were unable to recover it.
00:38:33.433 [2024-10-01 15:56:12.632485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.433 [2024-10-01 15:56:12.632521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:33.433 qpair failed and we were unable to recover it.
00:38:33.433 [2024-10-01 15:56:12.632690] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:38:33.433 [2024-10-01 15:56:12.632835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.433 [2024-10-01 15:56:12.632877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:33.433 qpair failed and we were unable to recover it.
00:38:33.433 [2024-10-01 15:56:12.633297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.433 [2024-10-01 15:56:12.633335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:33.433 qpair failed and we were unable to recover it.
00:38:33.433 [2024-10-01 15:56:12.633703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.433 [2024-10-01 15:56:12.633739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.433 qpair failed and we were unable to recover it. 00:38:33.433 [2024-10-01 15:56:12.633996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.433 [2024-10-01 15:56:12.634033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.433 qpair failed and we were unable to recover it. 00:38:33.433 [2024-10-01 15:56:12.634392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.433 [2024-10-01 15:56:12.634427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.433 qpair failed and we were unable to recover it. 00:38:33.433 [2024-10-01 15:56:12.634742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.433 [2024-10-01 15:56:12.634779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.433 qpair failed and we were unable to recover it. 00:38:33.433 [2024-10-01 15:56:12.635142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.433 [2024-10-01 15:56:12.635179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.433 qpair failed and we were unable to recover it. 
00:38:33.433 [2024-10-01 15:56:12.635559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.433 [2024-10-01 15:56:12.635595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.433 qpair failed and we were unable to recover it. 00:38:33.433 [2024-10-01 15:56:12.635941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.433 [2024-10-01 15:56:12.635977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.433 qpair failed and we were unable to recover it. 00:38:33.433 [2024-10-01 15:56:12.636107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.433 [2024-10-01 15:56:12.636144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.433 qpair failed and we were unable to recover it. 00:38:33.433 [2024-10-01 15:56:12.636524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.433 [2024-10-01 15:56:12.636559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.433 qpair failed and we were unable to recover it. 00:38:33.433 [2024-10-01 15:56:12.636824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.433 [2024-10-01 15:56:12.636862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.433 qpair failed and we were unable to recover it. 
00:38:33.433 [2024-10-01 15:56:12.637239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.433 [2024-10-01 15:56:12.637275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.433 qpair failed and we were unable to recover it. 00:38:33.433 [2024-10-01 15:56:12.637647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.433 [2024-10-01 15:56:12.637682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.433 qpair failed and we were unable to recover it. 00:38:33.433 [2024-10-01 15:56:12.638066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.433 [2024-10-01 15:56:12.638102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.433 qpair failed and we were unable to recover it. 00:38:33.433 [2024-10-01 15:56:12.638461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.433 [2024-10-01 15:56:12.638497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.433 qpair failed and we were unable to recover it. 00:38:33.433 [2024-10-01 15:56:12.638835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.433 [2024-10-01 15:56:12.638870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.433 qpair failed and we were unable to recover it. 
00:38:33.433 [2024-10-01 15:56:12.639274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.433 [2024-10-01 15:56:12.639311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.433 qpair failed and we were unable to recover it. 00:38:33.433 [2024-10-01 15:56:12.639589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.434 [2024-10-01 15:56:12.639628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.434 qpair failed and we were unable to recover it. 00:38:33.434 [2024-10-01 15:56:12.640026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.434 [2024-10-01 15:56:12.640064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.434 qpair failed and we were unable to recover it. 00:38:33.434 [2024-10-01 15:56:12.640460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.434 [2024-10-01 15:56:12.640497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.434 qpair failed and we were unable to recover it. 00:38:33.434 [2024-10-01 15:56:12.640786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.434 [2024-10-01 15:56:12.640820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.434 qpair failed and we were unable to recover it. 
00:38:33.434 [2024-10-01 15:56:12.641199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.434 [2024-10-01 15:56:12.641235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:33.434 qpair failed and we were unable to recover it.
00:38:33.434 [2024-10-01 15:56:12.641590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.434 [2024-10-01 15:56:12.641625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:33.434 qpair failed and we were unable to recover it.
00:38:33.434 15:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:33.434 [2024-10-01 15:56:12.641877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.434 [2024-10-01 15:56:12.641924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:33.434 qpair failed and we were unable to recover it.
00:38:33.434 [2024-10-01 15:56:12.642087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.434 [2024-10-01 15:56:12.642125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:33.434 qpair failed and we were unable to recover it.
00:38:33.434 15:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:33.434 [2024-10-01 15:56:12.642355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.434 [2024-10-01 15:56:12.642391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.434 qpair failed and we were unable to recover it. 00:38:33.434 15:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:33.434 15:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:33.434 [2024-10-01 15:56:12.642761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.434 [2024-10-01 15:56:12.642796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.434 qpair failed and we were unable to recover it. 00:38:33.434 [2024-10-01 15:56:12.643063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.434 [2024-10-01 15:56:12.643100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.434 qpair failed and we were unable to recover it. 00:38:33.434 [2024-10-01 15:56:12.643450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.434 [2024-10-01 15:56:12.643492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.434 qpair failed and we were unable to recover it. 
00:38:33.434 [2024-10-01 15:56:12.643751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.434 [2024-10-01 15:56:12.643787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.434 qpair failed and we were unable to recover it. 00:38:33.434 [2024-10-01 15:56:12.644011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.434 [2024-10-01 15:56:12.644047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.434 qpair failed and we were unable to recover it. 00:38:33.434 [2024-10-01 15:56:12.644432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.434 [2024-10-01 15:56:12.644467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.434 qpair failed and we were unable to recover it. 00:38:33.434 [2024-10-01 15:56:12.644839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.434 [2024-10-01 15:56:12.644875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.434 qpair failed and we were unable to recover it. 00:38:33.434 [2024-10-01 15:56:12.645266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.434 [2024-10-01 15:56:12.645302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.434 qpair failed and we were unable to recover it. 
00:38:33.434 [2024-10-01 15:56:12.645681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.434 [2024-10-01 15:56:12.645717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.434 qpair failed and we were unable to recover it. 00:38:33.434 [2024-10-01 15:56:12.646118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.434 [2024-10-01 15:56:12.646157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.434 qpair failed and we were unable to recover it. 00:38:33.434 [2024-10-01 15:56:12.646565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.434 [2024-10-01 15:56:12.646601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.434 qpair failed and we were unable to recover it. 00:38:33.434 [2024-10-01 15:56:12.646986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.434 [2024-10-01 15:56:12.647024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.434 qpair failed and we were unable to recover it. 00:38:33.434 [2024-10-01 15:56:12.647384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.434 [2024-10-01 15:56:12.647419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.434 qpair failed and we were unable to recover it. 
00:38:33.434 [2024-10-01 15:56:12.647824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.434 [2024-10-01 15:56:12.647860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.434 qpair failed and we were unable to recover it. 00:38:33.434 [2024-10-01 15:56:12.648262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.434 [2024-10-01 15:56:12.648298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.434 qpair failed and we were unable to recover it. 00:38:33.434 [2024-10-01 15:56:12.648677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.434 [2024-10-01 15:56:12.648712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.434 qpair failed and we were unable to recover it. 00:38:33.434 [2024-10-01 15:56:12.649110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.434 [2024-10-01 15:56:12.649148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.434 qpair failed and we were unable to recover it. 00:38:33.434 [2024-10-01 15:56:12.649407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.434 [2024-10-01 15:56:12.649443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.434 qpair failed and we were unable to recover it. 
00:38:33.434 [2024-10-01 15:56:12.649801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.434 [2024-10-01 15:56:12.649835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.434 qpair failed and we were unable to recover it. 00:38:33.434 [2024-10-01 15:56:12.650202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.434 [2024-10-01 15:56:12.650240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.434 qpair failed and we were unable to recover it. 00:38:33.434 [2024-10-01 15:56:12.650589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.434 [2024-10-01 15:56:12.650627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.434 qpair failed and we were unable to recover it. 00:38:33.434 [2024-10-01 15:56:12.651012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.434 [2024-10-01 15:56:12.651050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.434 qpair failed and we were unable to recover it. 00:38:33.434 [2024-10-01 15:56:12.651441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.434 [2024-10-01 15:56:12.651477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.434 qpair failed and we were unable to recover it. 
00:38:33.434 [2024-10-01 15:56:12.651867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.434 [2024-10-01 15:56:12.651915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.434 qpair failed and we were unable to recover it. 00:38:33.434 [2024-10-01 15:56:12.652098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.434 [2024-10-01 15:56:12.652134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.434 qpair failed and we were unable to recover it. 00:38:33.434 [2024-10-01 15:56:12.652536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.434 [2024-10-01 15:56:12.652572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.434 qpair failed and we were unable to recover it. 00:38:33.434 [2024-10-01 15:56:12.652819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.434 [2024-10-01 15:56:12.652855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.434 qpair failed and we were unable to recover it. 00:38:33.434 [2024-10-01 15:56:12.653120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.434 [2024-10-01 15:56:12.653156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.435 qpair failed and we were unable to recover it. 
00:38:33.435 [2024-10-01 15:56:12.653545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.435 [2024-10-01 15:56:12.653580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.435 qpair failed and we were unable to recover it. 00:38:33.435 15:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:33.435 [2024-10-01 15:56:12.653974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.435 [2024-10-01 15:56:12.654012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.435 qpair failed and we were unable to recover it. 00:38:33.435 15:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:33.435 [2024-10-01 15:56:12.654390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.435 [2024-10-01 15:56:12.654426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.435 qpair failed and we were unable to recover it. 00:38:33.435 15:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:33.435 15:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:33.435 [2024-10-01 15:56:12.654841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.435 [2024-10-01 15:56:12.654877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.435 qpair failed and we were unable to recover it. 
00:38:33.435 [2024-10-01 15:56:12.655355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.435 [2024-10-01 15:56:12.655391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.435 qpair failed and we were unable to recover it. 00:38:33.435 [2024-10-01 15:56:12.655772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.435 [2024-10-01 15:56:12.655808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.435 qpair failed and we were unable to recover it. 00:38:33.435 [2024-10-01 15:56:12.656167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.435 [2024-10-01 15:56:12.656204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.435 qpair failed and we were unable to recover it. 00:38:33.435 [2024-10-01 15:56:12.656574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.435 [2024-10-01 15:56:12.656608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.435 qpair failed and we were unable to recover it. 00:38:33.435 [2024-10-01 15:56:12.656999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.435 [2024-10-01 15:56:12.657037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.435 qpair failed and we were unable to recover it. 
00:38:33.435 [2024-10-01 15:56:12.657289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.435 [2024-10-01 15:56:12.657324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.435 qpair failed and we were unable to recover it. 00:38:33.435 [2024-10-01 15:56:12.657709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.435 [2024-10-01 15:56:12.657744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.435 qpair failed and we were unable to recover it. 00:38:33.435 [2024-10-01 15:56:12.658008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.435 [2024-10-01 15:56:12.658045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.435 qpair failed and we were unable to recover it. 00:38:33.435 [2024-10-01 15:56:12.658428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.435 [2024-10-01 15:56:12.658463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.435 qpair failed and we were unable to recover it. 00:38:33.435 [2024-10-01 15:56:12.658875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.435 [2024-10-01 15:56:12.658919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.435 qpair failed and we were unable to recover it. 
00:38:33.435 [2024-10-01 15:56:12.659180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.435 [2024-10-01 15:56:12.659216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.435 qpair failed and we were unable to recover it. 00:38:33.435 [2024-10-01 15:56:12.659340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.435 [2024-10-01 15:56:12.659374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.435 qpair failed and we were unable to recover it. 00:38:33.435 [2024-10-01 15:56:12.659750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.435 [2024-10-01 15:56:12.659786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.435 qpair failed and we were unable to recover it. 00:38:33.435 [2024-10-01 15:56:12.659953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.435 [2024-10-01 15:56:12.659988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.435 qpair failed and we were unable to recover it. 00:38:33.435 [2024-10-01 15:56:12.660429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.435 [2024-10-01 15:56:12.660465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.435 qpair failed and we were unable to recover it. 
00:38:33.435 [2024-10-01 15:56:12.660849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.435 [2024-10-01 15:56:12.660884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.435 qpair failed and we were unable to recover it. 00:38:33.435 [2024-10-01 15:56:12.661279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.435 [2024-10-01 15:56:12.661315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.435 qpair failed and we were unable to recover it. 00:38:33.435 [2024-10-01 15:56:12.661677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.435 [2024-10-01 15:56:12.661713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.435 qpair failed and we were unable to recover it. 00:38:33.435 [2024-10-01 15:56:12.661987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.435 [2024-10-01 15:56:12.662024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.435 qpair failed and we were unable to recover it. 00:38:33.435 [2024-10-01 15:56:12.662260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.435 [2024-10-01 15:56:12.662295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.435 qpair failed and we were unable to recover it. 
00:38:33.435 [2024-10-01 15:56:12.662671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.435 [2024-10-01 15:56:12.662708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.435 qpair failed and we were unable to recover it. 00:38:33.435 [2024-10-01 15:56:12.663058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.435 [2024-10-01 15:56:12.663095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.435 qpair failed and we were unable to recover it. 00:38:33.435 [2024-10-01 15:56:12.663483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.435 [2024-10-01 15:56:12.663519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.435 qpair failed and we were unable to recover it. 00:38:33.435 [2024-10-01 15:56:12.663770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.435 [2024-10-01 15:56:12.663808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.435 qpair failed and we were unable to recover it. 00:38:33.435 [2024-10-01 15:56:12.664086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.435 [2024-10-01 15:56:12.664127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.435 qpair failed and we were unable to recover it. 
00:38:33.435 [2024-10-01 15:56:12.664350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.435 [2024-10-01 15:56:12.664387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.435 qpair failed and we were unable to recover it. 00:38:33.435 [2024-10-01 15:56:12.664650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.435 [2024-10-01 15:56:12.664687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.435 qpair failed and we were unable to recover it. 00:38:33.435 [2024-10-01 15:56:12.665071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.435 [2024-10-01 15:56:12.665109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.435 qpair failed and we were unable to recover it. 00:38:33.435 [2024-10-01 15:56:12.665493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.435 [2024-10-01 15:56:12.665529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.435 qpair failed and we were unable to recover it. 00:38:33.435 15:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:33.435 [2024-10-01 15:56:12.665795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.435 [2024-10-01 15:56:12.665835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.435 qpair failed and we were unable to recover it. 
00:38:33.435 [2024-10-01 15:56:12.666118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.435 [2024-10-01 15:56:12.666158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.435 qpair failed and we were unable to recover it. 00:38:33.435 15:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:33.436 15:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:33.436 [2024-10-01 15:56:12.666536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.436 [2024-10-01 15:56:12.666572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.436 qpair failed and we were unable to recover it. 00:38:33.436 15:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:33.436 [2024-10-01 15:56:12.666934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.436 [2024-10-01 15:56:12.666973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.436 qpair failed and we were unable to recover it. 00:38:33.436 [2024-10-01 15:56:12.667344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.436 [2024-10-01 15:56:12.667389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.436 qpair failed and we were unable to recover it. 
00:38:33.436 [2024-10-01 15:56:12.667635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.436 [2024-10-01 15:56:12.667670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.436 qpair failed and we were unable to recover it. 00:38:33.436 [2024-10-01 15:56:12.667936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.436 [2024-10-01 15:56:12.667973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.436 qpair failed and we were unable to recover it. 00:38:33.436 [2024-10-01 15:56:12.668390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.436 [2024-10-01 15:56:12.668426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.436 qpair failed and we were unable to recover it. 00:38:33.436 [2024-10-01 15:56:12.668643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.436 [2024-10-01 15:56:12.668679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.436 qpair failed and we were unable to recover it. 00:38:33.436 [2024-10-01 15:56:12.669066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.436 [2024-10-01 15:56:12.669103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.436 qpair failed and we were unable to recover it. 
00:38:33.436 [2024-10-01 15:56:12.669524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.436 [2024-10-01 15:56:12.669561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.436 qpair failed and we were unable to recover it. 00:38:33.436 [2024-10-01 15:56:12.669823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.436 [2024-10-01 15:56:12.669859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.436 qpair failed and we were unable to recover it. 00:38:33.436 [2024-10-01 15:56:12.670280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.436 [2024-10-01 15:56:12.670317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.436 qpair failed and we were unable to recover it. 00:38:33.436 [2024-10-01 15:56:12.670443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.436 [2024-10-01 15:56:12.670477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.436 qpair failed and we were unable to recover it. 00:38:33.436 [2024-10-01 15:56:12.670723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.436 [2024-10-01 15:56:12.670759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.436 qpair failed and we were unable to recover it. 
00:38:33.436 [2024-10-01 15:56:12.671001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.436 [2024-10-01 15:56:12.671038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.436 qpair failed and we were unable to recover it. 00:38:33.436 [2024-10-01 15:56:12.671394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.436 [2024-10-01 15:56:12.671430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.436 qpair failed and we were unable to recover it. 00:38:33.436 [2024-10-01 15:56:12.671789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.436 [2024-10-01 15:56:12.671825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.436 qpair failed and we were unable to recover it. 00:38:33.436 [2024-10-01 15:56:12.672085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.436 [2024-10-01 15:56:12.672122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.436 qpair failed and we were unable to recover it. 00:38:33.436 [2024-10-01 15:56:12.672485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.436 [2024-10-01 15:56:12.672521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420 00:38:33.436 qpair failed and we were unable to recover it. 
00:38:33.436 [2024-10-01 15:56:12.672910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.436 [2024-10-01 15:56:12.672947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23a4000b90 with addr=10.0.0.2, port=4420
00:38:33.436 qpair failed and we were unable to recover it.
00:38:33.436 [2024-10-01 15:56:12.672994] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:38:33.436 15:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:33.436 15:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:38:33.436 15:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:33.436 15:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:33.436 [2024-10-01 15:56:12.683798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.436 [2024-10-01 15:56:12.683920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.436 [2024-10-01 15:56:12.683959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.436 [2024-10-01 15:56:12.683985] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.436 [2024-10-01 15:56:12.684013] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:33.436 [2024-10-01 15:56:12.684073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:33.436 qpair failed and we were unable to recover it.
00:38:33.436 15:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:33.436 15:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3410920
00:38:33.436 [2024-10-01 15:56:12.693620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.436 [2024-10-01 15:56:12.693706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.436 [2024-10-01 15:56:12.693730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.436 [2024-10-01 15:56:12.693747] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.436 [2024-10-01 15:56:12.693765] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:33.436 [2024-10-01 15:56:12.693800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:33.436 qpair failed and we were unable to recover it.
00:38:33.436 [2024-10-01 15:56:12.703580] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.436 [2024-10-01 15:56:12.703654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.436 [2024-10-01 15:56:12.703684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.436 [2024-10-01 15:56:12.703701] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.436 [2024-10-01 15:56:12.703716] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:33.436 [2024-10-01 15:56:12.703751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:33.436 qpair failed and we were unable to recover it.
00:38:33.436 [2024-10-01 15:56:12.713497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.436 [2024-10-01 15:56:12.713564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.436 [2024-10-01 15:56:12.713581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.436 [2024-10-01 15:56:12.713593] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.436 [2024-10-01 15:56:12.713602] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:33.436 [2024-10-01 15:56:12.713624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:33.436 qpair failed and we were unable to recover it.
00:38:33.436 [2024-10-01 15:56:12.723582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.436 [2024-10-01 15:56:12.723659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.436 [2024-10-01 15:56:12.723684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.436 [2024-10-01 15:56:12.723696] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.436 [2024-10-01 15:56:12.723707] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:33.436 [2024-10-01 15:56:12.723732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:33.436 qpair failed and we were unable to recover it.
00:38:33.436 [2024-10-01 15:56:12.733493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.436 [2024-10-01 15:56:12.733555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.437 [2024-10-01 15:56:12.733572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.437 [2024-10-01 15:56:12.733579] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.437 [2024-10-01 15:56:12.733585] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:33.437 [2024-10-01 15:56:12.733600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:33.437 qpair failed and we were unable to recover it.
00:38:33.437 [2024-10-01 15:56:12.743585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.437 [2024-10-01 15:56:12.743681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.437 [2024-10-01 15:56:12.743695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.437 [2024-10-01 15:56:12.743702] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.437 [2024-10-01 15:56:12.743708] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:33.437 [2024-10-01 15:56:12.743727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:33.437 qpair failed and we were unable to recover it.
00:38:33.437 [2024-10-01 15:56:12.753667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.437 [2024-10-01 15:56:12.753718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.437 [2024-10-01 15:56:12.753732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.437 [2024-10-01 15:56:12.753738] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.437 [2024-10-01 15:56:12.753745] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:33.437 [2024-10-01 15:56:12.753759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:33.437 qpair failed and we were unable to recover it.
00:38:33.437 [2024-10-01 15:56:12.763724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.437 [2024-10-01 15:56:12.763783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.437 [2024-10-01 15:56:12.763796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.437 [2024-10-01 15:56:12.763803] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.437 [2024-10-01 15:56:12.763809] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:33.437 [2024-10-01 15:56:12.763823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:33.437 qpair failed and we were unable to recover it.
00:38:33.437 [2024-10-01 15:56:12.773758] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.437 [2024-10-01 15:56:12.773851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.437 [2024-10-01 15:56:12.773865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.437 [2024-10-01 15:56:12.773872] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.437 [2024-10-01 15:56:12.773878] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:33.437 [2024-10-01 15:56:12.773892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:33.437 qpair failed and we were unable to recover it.
00:38:33.437 [2024-10-01 15:56:12.783727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.437 [2024-10-01 15:56:12.783786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.437 [2024-10-01 15:56:12.783799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.437 [2024-10-01 15:56:12.783806] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.437 [2024-10-01 15:56:12.783812] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:33.437 [2024-10-01 15:56:12.783827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:33.437 qpair failed and we were unable to recover it.
00:38:33.437 [2024-10-01 15:56:12.793748] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.437 [2024-10-01 15:56:12.793801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.437 [2024-10-01 15:56:12.793819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.437 [2024-10-01 15:56:12.793826] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.437 [2024-10-01 15:56:12.793832] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:33.437 [2024-10-01 15:56:12.793846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:33.437 qpair failed and we were unable to recover it.
00:38:33.437 [2024-10-01 15:56:12.803810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.437 [2024-10-01 15:56:12.803866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.437 [2024-10-01 15:56:12.803880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.437 [2024-10-01 15:56:12.803887] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.437 [2024-10-01 15:56:12.803897] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:33.437 [2024-10-01 15:56:12.803912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:33.437 qpair failed and we were unable to recover it.
00:38:33.437 [2024-10-01 15:56:12.813747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.437 [2024-10-01 15:56:12.813806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.437 [2024-10-01 15:56:12.813819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.437 [2024-10-01 15:56:12.813826] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.437 [2024-10-01 15:56:12.813832] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:33.437 [2024-10-01 15:56:12.813846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:33.437 qpair failed and we were unable to recover it.
00:38:33.437 [2024-10-01 15:56:12.823916] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.437 [2024-10-01 15:56:12.823973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.437 [2024-10-01 15:56:12.823986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.437 [2024-10-01 15:56:12.823993] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.437 [2024-10-01 15:56:12.823999] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:33.437 [2024-10-01 15:56:12.824013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:33.437 qpair failed and we were unable to recover it.
00:38:33.437 [2024-10-01 15:56:12.833858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.437 [2024-10-01 15:56:12.833912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.437 [2024-10-01 15:56:12.833926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.437 [2024-10-01 15:56:12.833933] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.437 [2024-10-01 15:56:12.833939] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:33.437 [2024-10-01 15:56:12.833957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:33.437 qpair failed and we were unable to recover it.
00:38:33.437 [2024-10-01 15:56:12.843913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.437 [2024-10-01 15:56:12.843989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.438 [2024-10-01 15:56:12.844002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.438 [2024-10-01 15:56:12.844009] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.438 [2024-10-01 15:56:12.844015] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:33.438 [2024-10-01 15:56:12.844029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:33.438 qpair failed and we were unable to recover it.
00:38:33.438 [2024-10-01 15:56:12.853941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.438 [2024-10-01 15:56:12.853997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.438 [2024-10-01 15:56:12.854011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.438 [2024-10-01 15:56:12.854017] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.438 [2024-10-01 15:56:12.854024] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:33.438 [2024-10-01 15:56:12.854038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:33.438 qpair failed and we were unable to recover it.
00:38:33.438 [2024-10-01 15:56:12.863986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.438 [2024-10-01 15:56:12.864045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.438 [2024-10-01 15:56:12.864059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.438 [2024-10-01 15:56:12.864065] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.438 [2024-10-01 15:56:12.864072] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:33.438 [2024-10-01 15:56:12.864085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:33.438 qpair failed and we were unable to recover it.
00:38:33.438 [2024-10-01 15:56:12.873973] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.438 [2024-10-01 15:56:12.874037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.438 [2024-10-01 15:56:12.874051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.438 [2024-10-01 15:56:12.874058] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.438 [2024-10-01 15:56:12.874064] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:33.438 [2024-10-01 15:56:12.874078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:33.438 qpair failed and we were unable to recover it.
00:38:33.700 [2024-10-01 15:56:12.884044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.700 [2024-10-01 15:56:12.884101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.700 [2024-10-01 15:56:12.884115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.700 [2024-10-01 15:56:12.884122] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.700 [2024-10-01 15:56:12.884128] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:33.700 [2024-10-01 15:56:12.884142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:33.700 qpair failed and we were unable to recover it.
00:38:33.700 [2024-10-01 15:56:12.894033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.700 [2024-10-01 15:56:12.894122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.700 [2024-10-01 15:56:12.894135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.700 [2024-10-01 15:56:12.894142] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.700 [2024-10-01 15:56:12.894148] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:33.700 [2024-10-01 15:56:12.894162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:33.700 qpair failed and we were unable to recover it.
00:38:33.700 [2024-10-01 15:56:12.904125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.700 [2024-10-01 15:56:12.904181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.700 [2024-10-01 15:56:12.904194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.700 [2024-10-01 15:56:12.904201] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.700 [2024-10-01 15:56:12.904208] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:33.700 [2024-10-01 15:56:12.904221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:33.700 qpair failed and we were unable to recover it.
00:38:33.700 [2024-10-01 15:56:12.914036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.700 [2024-10-01 15:56:12.914125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.700 [2024-10-01 15:56:12.914138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.700 [2024-10-01 15:56:12.914145] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.700 [2024-10-01 15:56:12.914151] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:33.700 [2024-10-01 15:56:12.914165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:33.700 qpair failed and we were unable to recover it.
00:38:33.700 [2024-10-01 15:56:12.924188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.700 [2024-10-01 15:56:12.924245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.701 [2024-10-01 15:56:12.924258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.701 [2024-10-01 15:56:12.924265] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.701 [2024-10-01 15:56:12.924274] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:33.701 [2024-10-01 15:56:12.924289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:33.701 qpair failed and we were unable to recover it.
00:38:33.701 [2024-10-01 15:56:12.934032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.701 [2024-10-01 15:56:12.934085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.701 [2024-10-01 15:56:12.934098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.701 [2024-10-01 15:56:12.934104] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.701 [2024-10-01 15:56:12.934111] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:33.701 [2024-10-01 15:56:12.934125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:33.701 qpair failed and we were unable to recover it.
00:38:33.701 [2024-10-01 15:56:12.944175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.701 [2024-10-01 15:56:12.944231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.701 [2024-10-01 15:56:12.944244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.701 [2024-10-01 15:56:12.944251] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.701 [2024-10-01 15:56:12.944257] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:33.701 [2024-10-01 15:56:12.944271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:33.701 qpair failed and we were unable to recover it.
00:38:33.701 [2024-10-01 15:56:12.954221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.701 [2024-10-01 15:56:12.954306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.701 [2024-10-01 15:56:12.954319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.701 [2024-10-01 15:56:12.954326] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.701 [2024-10-01 15:56:12.954332] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:33.701 [2024-10-01 15:56:12.954346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:33.701 qpair failed and we were unable to recover it.
00:38:33.701 [2024-10-01 15:56:12.964263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.701 [2024-10-01 15:56:12.964318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.701 [2024-10-01 15:56:12.964331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.701 [2024-10-01 15:56:12.964337] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.701 [2024-10-01 15:56:12.964343] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:33.701 [2024-10-01 15:56:12.964357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:33.701 qpair failed and we were unable to recover it.
00:38:33.701 [2024-10-01 15:56:12.974268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.701 [2024-10-01 15:56:12.974326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.701 [2024-10-01 15:56:12.974339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.701 [2024-10-01 15:56:12.974346] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.701 [2024-10-01 15:56:12.974352] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:33.701 [2024-10-01 15:56:12.974366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:33.701 qpair failed and we were unable to recover it.
00:38:33.701 [2024-10-01 15:56:12.984300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.701 [2024-10-01 15:56:12.984348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.701 [2024-10-01 15:56:12.984361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.701 [2024-10-01 15:56:12.984368] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.701 [2024-10-01 15:56:12.984374] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:33.701 [2024-10-01 15:56:12.984388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:33.701 qpair failed and we were unable to recover it.
00:38:33.701 [2024-10-01 15:56:12.994328] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.701 [2024-10-01 15:56:12.994384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.701 [2024-10-01 15:56:12.994397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.701 [2024-10-01 15:56:12.994404] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.701 [2024-10-01 15:56:12.994410] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:33.701 [2024-10-01 15:56:12.994424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:33.701 qpair failed and we were unable to recover it.
00:38:33.701 [2024-10-01 15:56:13.004254] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.701 [2024-10-01 15:56:13.004321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.701 [2024-10-01 15:56:13.004334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.701 [2024-10-01 15:56:13.004341] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.701 [2024-10-01 15:56:13.004347] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:33.701 [2024-10-01 15:56:13.004361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:33.701 qpair failed and we were unable to recover it.
00:38:33.701 [2024-10-01 15:56:13.014391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.701 [2024-10-01 15:56:13.014444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.701 [2024-10-01 15:56:13.014458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.701 [2024-10-01 15:56:13.014469] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.701 [2024-10-01 15:56:13.014475] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:33.701 [2024-10-01 15:56:13.014490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:33.701 qpair failed and we were unable to recover it.
00:38:33.701 [2024-10-01 15:56:13.024411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.701 [2024-10-01 15:56:13.024507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.701 [2024-10-01 15:56:13.024520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.701 [2024-10-01 15:56:13.024527] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.701 [2024-10-01 15:56:13.024533] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:33.701 [2024-10-01 15:56:13.024547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:33.701 qpair failed and we were unable to recover it.
00:38:33.701 [2024-10-01 15:56:13.034447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.701 [2024-10-01 15:56:13.034538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.701 [2024-10-01 15:56:13.034552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.701 [2024-10-01 15:56:13.034559] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.701 [2024-10-01 15:56:13.034565] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:33.701 [2024-10-01 15:56:13.034579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:33.701 qpair failed and we were unable to recover it. 
00:38:33.701 [2024-10-01 15:56:13.044487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.701 [2024-10-01 15:56:13.044543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.701 [2024-10-01 15:56:13.044556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.701 [2024-10-01 15:56:13.044563] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.701 [2024-10-01 15:56:13.044569] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:33.701 [2024-10-01 15:56:13.044583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:33.701 qpair failed and we were unable to recover it. 
00:38:33.701 [2024-10-01 15:56:13.054485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.702 [2024-10-01 15:56:13.054539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.702 [2024-10-01 15:56:13.054553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.702 [2024-10-01 15:56:13.054559] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.702 [2024-10-01 15:56:13.054565] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:33.702 [2024-10-01 15:56:13.054579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:33.702 qpair failed and we were unable to recover it. 
00:38:33.702 [2024-10-01 15:56:13.064534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.702 [2024-10-01 15:56:13.064587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.702 [2024-10-01 15:56:13.064601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.702 [2024-10-01 15:56:13.064608] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.702 [2024-10-01 15:56:13.064615] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:33.702 [2024-10-01 15:56:13.064628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:33.702 qpair failed and we were unable to recover it. 
00:38:33.702 [2024-10-01 15:56:13.074564] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.702 [2024-10-01 15:56:13.074621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.702 [2024-10-01 15:56:13.074639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.702 [2024-10-01 15:56:13.074646] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.702 [2024-10-01 15:56:13.074652] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:33.702 [2024-10-01 15:56:13.074668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:33.702 qpair failed and we were unable to recover it. 
00:38:33.702 [2024-10-01 15:56:13.084596] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.702 [2024-10-01 15:56:13.084669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.702 [2024-10-01 15:56:13.084684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.702 [2024-10-01 15:56:13.084691] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.702 [2024-10-01 15:56:13.084697] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:33.702 [2024-10-01 15:56:13.084713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:33.702 qpair failed and we were unable to recover it. 
00:38:33.702 [2024-10-01 15:56:13.094614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.702 [2024-10-01 15:56:13.094671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.702 [2024-10-01 15:56:13.094698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.702 [2024-10-01 15:56:13.094707] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.702 [2024-10-01 15:56:13.094714] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:33.702 [2024-10-01 15:56:13.094734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:33.702 qpair failed and we were unable to recover it. 
00:38:33.702 [2024-10-01 15:56:13.104645] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.702 [2024-10-01 15:56:13.104707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.702 [2024-10-01 15:56:13.104735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.702 [2024-10-01 15:56:13.104748] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.702 [2024-10-01 15:56:13.104755] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:33.702 [2024-10-01 15:56:13.104777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:33.702 qpair failed and we were unable to recover it. 
00:38:33.702 [2024-10-01 15:56:13.114681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.702 [2024-10-01 15:56:13.114743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.702 [2024-10-01 15:56:13.114760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.702 [2024-10-01 15:56:13.114768] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.702 [2024-10-01 15:56:13.114774] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:33.702 [2024-10-01 15:56:13.114791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:33.702 qpair failed and we were unable to recover it. 
00:38:33.702 [2024-10-01 15:56:13.124715] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.702 [2024-10-01 15:56:13.124781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.702 [2024-10-01 15:56:13.124797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.702 [2024-10-01 15:56:13.124804] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.702 [2024-10-01 15:56:13.124810] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:33.702 [2024-10-01 15:56:13.124826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:33.702 qpair failed and we were unable to recover it. 
00:38:33.702 [2024-10-01 15:56:13.134723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.702 [2024-10-01 15:56:13.134775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.702 [2024-10-01 15:56:13.134791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.702 [2024-10-01 15:56:13.134798] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.702 [2024-10-01 15:56:13.134804] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:33.702 [2024-10-01 15:56:13.134819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:33.702 qpair failed and we were unable to recover it. 
00:38:33.702 [2024-10-01 15:56:13.144759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.702 [2024-10-01 15:56:13.144819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.702 [2024-10-01 15:56:13.144834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.702 [2024-10-01 15:56:13.144841] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.702 [2024-10-01 15:56:13.144848] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:33.702 [2024-10-01 15:56:13.144864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:33.702 qpair failed and we were unable to recover it. 
00:38:33.966 [2024-10-01 15:56:13.154818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.966 [2024-10-01 15:56:13.154886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.966 [2024-10-01 15:56:13.154907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.966 [2024-10-01 15:56:13.154914] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.966 [2024-10-01 15:56:13.154920] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:33.966 [2024-10-01 15:56:13.154936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:33.966 qpair failed and we were unable to recover it. 
00:38:33.966 [2024-10-01 15:56:13.164925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.966 [2024-10-01 15:56:13.164996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.966 [2024-10-01 15:56:13.165013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.966 [2024-10-01 15:56:13.165020] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.966 [2024-10-01 15:56:13.165027] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:33.966 [2024-10-01 15:56:13.165043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:33.966 qpair failed and we were unable to recover it. 
00:38:33.966 [2024-10-01 15:56:13.174854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.966 [2024-10-01 15:56:13.174918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.966 [2024-10-01 15:56:13.174937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.966 [2024-10-01 15:56:13.174944] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.966 [2024-10-01 15:56:13.174951] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:33.966 [2024-10-01 15:56:13.174968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:33.966 qpair failed and we were unable to recover it. 
00:38:33.966 [2024-10-01 15:56:13.184917] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.966 [2024-10-01 15:56:13.185014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.966 [2024-10-01 15:56:13.185033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.966 [2024-10-01 15:56:13.185040] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.966 [2024-10-01 15:56:13.185047] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:33.966 [2024-10-01 15:56:13.185065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:33.966 qpair failed and we were unable to recover it. 
00:38:33.966 [2024-10-01 15:56:13.194973] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.966 [2024-10-01 15:56:13.195044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.966 [2024-10-01 15:56:13.195067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.966 [2024-10-01 15:56:13.195074] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.966 [2024-10-01 15:56:13.195081] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:33.966 [2024-10-01 15:56:13.195098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:33.966 qpair failed and we were unable to recover it. 
00:38:33.966 [2024-10-01 15:56:13.205027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.966 [2024-10-01 15:56:13.205104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.966 [2024-10-01 15:56:13.205123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.966 [2024-10-01 15:56:13.205130] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.966 [2024-10-01 15:56:13.205136] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:33.966 [2024-10-01 15:56:13.205153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:33.966 qpair failed and we were unable to recover it. 
00:38:33.966 [2024-10-01 15:56:13.215007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.966 [2024-10-01 15:56:13.215078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.966 [2024-10-01 15:56:13.215095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.966 [2024-10-01 15:56:13.215102] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.966 [2024-10-01 15:56:13.215108] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:33.966 [2024-10-01 15:56:13.215124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:33.966 qpair failed and we were unable to recover it. 
00:38:33.966 [2024-10-01 15:56:13.224969] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.966 [2024-10-01 15:56:13.225057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.966 [2024-10-01 15:56:13.225074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.966 [2024-10-01 15:56:13.225082] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.966 [2024-10-01 15:56:13.225088] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:33.966 [2024-10-01 15:56:13.225105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:33.966 qpair failed and we were unable to recover it. 
00:38:33.966 [2024-10-01 15:56:13.235083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.966 [2024-10-01 15:56:13.235153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.966 [2024-10-01 15:56:13.235170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.966 [2024-10-01 15:56:13.235178] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.966 [2024-10-01 15:56:13.235184] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:33.966 [2024-10-01 15:56:13.235206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:33.966 qpair failed and we were unable to recover it. 
00:38:33.966 [2024-10-01 15:56:13.245044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.966 [2024-10-01 15:56:13.245120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.966 [2024-10-01 15:56:13.245137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.966 [2024-10-01 15:56:13.245145] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.966 [2024-10-01 15:56:13.245151] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:33.966 [2024-10-01 15:56:13.245167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:33.966 qpair failed and we were unable to recover it. 
00:38:33.966 [2024-10-01 15:56:13.255148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.966 [2024-10-01 15:56:13.255211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.966 [2024-10-01 15:56:13.255228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.966 [2024-10-01 15:56:13.255236] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.966 [2024-10-01 15:56:13.255242] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:33.966 [2024-10-01 15:56:13.255258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:33.966 qpair failed and we were unable to recover it. 
00:38:33.966 [2024-10-01 15:56:13.265164] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.966 [2024-10-01 15:56:13.265228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.966 [2024-10-01 15:56:13.265246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.966 [2024-10-01 15:56:13.265253] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.967 [2024-10-01 15:56:13.265259] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:33.967 [2024-10-01 15:56:13.265276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:33.967 qpair failed and we were unable to recover it. 
00:38:33.967 [2024-10-01 15:56:13.275171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.967 [2024-10-01 15:56:13.275233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.967 [2024-10-01 15:56:13.275251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.967 [2024-10-01 15:56:13.275259] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.967 [2024-10-01 15:56:13.275265] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:33.967 [2024-10-01 15:56:13.275281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:33.967 qpair failed and we were unable to recover it. 
00:38:33.967 [2024-10-01 15:56:13.285149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.967 [2024-10-01 15:56:13.285214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.967 [2024-10-01 15:56:13.285241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.967 [2024-10-01 15:56:13.285248] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.967 [2024-10-01 15:56:13.285254] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:33.967 [2024-10-01 15:56:13.285273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:33.967 qpair failed and we were unable to recover it. 
00:38:33.967 [2024-10-01 15:56:13.295283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.967 [2024-10-01 15:56:13.295360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.967 [2024-10-01 15:56:13.295381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.967 [2024-10-01 15:56:13.295388] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.967 [2024-10-01 15:56:13.295395] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:33.967 [2024-10-01 15:56:13.295412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:33.967 qpair failed and we were unable to recover it. 
00:38:33.967 [2024-10-01 15:56:13.305170] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.967 [2024-10-01 15:56:13.305235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.967 [2024-10-01 15:56:13.305255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.967 [2024-10-01 15:56:13.305262] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.967 [2024-10-01 15:56:13.305268] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:33.967 [2024-10-01 15:56:13.305286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:33.967 qpair failed and we were unable to recover it. 
00:38:33.967 [2024-10-01 15:56:13.315331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.967 [2024-10-01 15:56:13.315437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.967 [2024-10-01 15:56:13.315454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.967 [2024-10-01 15:56:13.315461] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.967 [2024-10-01 15:56:13.315467] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:33.967 [2024-10-01 15:56:13.315483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:33.967 qpair failed and we were unable to recover it. 
00:38:33.967 [2024-10-01 15:56:13.325383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.967 [2024-10-01 15:56:13.325460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.967 [2024-10-01 15:56:13.325478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.967 [2024-10-01 15:56:13.325485] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.967 [2024-10-01 15:56:13.325491] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:33.967 [2024-10-01 15:56:13.325513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:33.967 qpair failed and we were unable to recover it. 
00:38:33.967 [2024-10-01 15:56:13.335260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.967 [2024-10-01 15:56:13.335322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.967 [2024-10-01 15:56:13.335341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.967 [2024-10-01 15:56:13.335348] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.967 [2024-10-01 15:56:13.335354] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:33.967 [2024-10-01 15:56:13.335370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:33.967 qpair failed and we were unable to recover it. 
00:38:33.967 [2024-10-01 15:56:13.345413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.967 [2024-10-01 15:56:13.345478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.967 [2024-10-01 15:56:13.345495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.967 [2024-10-01 15:56:13.345502] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.967 [2024-10-01 15:56:13.345509] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:33.967 [2024-10-01 15:56:13.345525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:33.967 qpair failed and we were unable to recover it. 
00:38:33.967 [2024-10-01 15:56:13.355414] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.967 [2024-10-01 15:56:13.355485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.967 [2024-10-01 15:56:13.355502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.967 [2024-10-01 15:56:13.355510] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.967 [2024-10-01 15:56:13.355516] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:33.967 [2024-10-01 15:56:13.355532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:33.967 qpair failed and we were unable to recover it. 
00:38:33.967 [2024-10-01 15:56:13.365503] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.967 [2024-10-01 15:56:13.365571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.967 [2024-10-01 15:56:13.365589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.967 [2024-10-01 15:56:13.365596] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.967 [2024-10-01 15:56:13.365602] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:33.967 [2024-10-01 15:56:13.365619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:33.967 qpair failed and we were unable to recover it. 
00:38:33.967 [2024-10-01 15:56:13.375381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.967 [2024-10-01 15:56:13.375445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.967 [2024-10-01 15:56:13.375471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.967 [2024-10-01 15:56:13.375479] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.967 [2024-10-01 15:56:13.375485] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:33.967 [2024-10-01 15:56:13.375508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:33.967 qpair failed and we were unable to recover it. 
00:38:33.967 [2024-10-01 15:56:13.385536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.967 [2024-10-01 15:56:13.385604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.967 [2024-10-01 15:56:13.385622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.967 [2024-10-01 15:56:13.385629] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.967 [2024-10-01 15:56:13.385635] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:33.967 [2024-10-01 15:56:13.385651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:33.967 qpair failed and we were unable to recover it. 
00:38:33.967 [2024-10-01 15:56:13.395573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.967 [2024-10-01 15:56:13.395637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.967 [2024-10-01 15:56:13.395655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.967 [2024-10-01 15:56:13.395662] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.967 [2024-10-01 15:56:13.395668] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:33.967 [2024-10-01 15:56:13.395684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:33.967 qpair failed and we were unable to recover it. 
00:38:33.967 [2024-10-01 15:56:13.405691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.968 [2024-10-01 15:56:13.405768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.968 [2024-10-01 15:56:13.405804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.968 [2024-10-01 15:56:13.405814] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.968 [2024-10-01 15:56:13.405821] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:33.968 [2024-10-01 15:56:13.405847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:33.968 qpair failed and we were unable to recover it. 
00:38:33.968 [2024-10-01 15:56:13.415517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.968 [2024-10-01 15:56:13.415588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.968 [2024-10-01 15:56:13.415608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.968 [2024-10-01 15:56:13.415616] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.968 [2024-10-01 15:56:13.415631] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:33.968 [2024-10-01 15:56:13.415649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:33.968 qpair failed and we were unable to recover it. 
00:38:34.231 [2024-10-01 15:56:13.425661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.231 [2024-10-01 15:56:13.425732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.231 [2024-10-01 15:56:13.425751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.231 [2024-10-01 15:56:13.425758] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.231 [2024-10-01 15:56:13.425765] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.231 [2024-10-01 15:56:13.425781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.231 qpair failed and we were unable to recover it. 
00:38:34.231 [2024-10-01 15:56:13.435718] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.231 [2024-10-01 15:56:13.435788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.231 [2024-10-01 15:56:13.435807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.231 [2024-10-01 15:56:13.435814] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.231 [2024-10-01 15:56:13.435820] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.231 [2024-10-01 15:56:13.435838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.231 qpair failed and we were unable to recover it. 
00:38:34.231 [2024-10-01 15:56:13.445769] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.231 [2024-10-01 15:56:13.445840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.231 [2024-10-01 15:56:13.445858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.231 [2024-10-01 15:56:13.445865] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.231 [2024-10-01 15:56:13.445871] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.231 [2024-10-01 15:56:13.445888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.231 qpair failed and we were unable to recover it. 
00:38:34.231 [2024-10-01 15:56:13.455788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.231 [2024-10-01 15:56:13.455846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.231 [2024-10-01 15:56:13.455864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.231 [2024-10-01 15:56:13.455871] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.231 [2024-10-01 15:56:13.455877] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.231 [2024-10-01 15:56:13.455899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.231 qpair failed and we were unable to recover it. 
00:38:34.231 [2024-10-01 15:56:13.465782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.231 [2024-10-01 15:56:13.465845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.231 [2024-10-01 15:56:13.465864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.231 [2024-10-01 15:56:13.465872] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.231 [2024-10-01 15:56:13.465878] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.231 [2024-10-01 15:56:13.465899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.231 qpair failed and we were unable to recover it. 
00:38:34.231 [2024-10-01 15:56:13.475891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.231 [2024-10-01 15:56:13.475982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.231 [2024-10-01 15:56:13.476003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.231 [2024-10-01 15:56:13.476010] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.231 [2024-10-01 15:56:13.476017] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.231 [2024-10-01 15:56:13.476036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.231 qpair failed and we were unable to recover it. 
00:38:34.231 [2024-10-01 15:56:13.485884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.231 [2024-10-01 15:56:13.486002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.231 [2024-10-01 15:56:13.486020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.231 [2024-10-01 15:56:13.486027] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.231 [2024-10-01 15:56:13.486033] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.231 [2024-10-01 15:56:13.486050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.231 qpair failed and we were unable to recover it. 
00:38:34.231 [2024-10-01 15:56:13.495901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.231 [2024-10-01 15:56:13.495976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.231 [2024-10-01 15:56:13.495994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.231 [2024-10-01 15:56:13.496001] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.232 [2024-10-01 15:56:13.496008] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.232 [2024-10-01 15:56:13.496024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.232 qpair failed and we were unable to recover it. 
00:38:34.232 [2024-10-01 15:56:13.505927] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.232 [2024-10-01 15:56:13.505992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.232 [2024-10-01 15:56:13.506022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.232 [2024-10-01 15:56:13.506030] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.232 [2024-10-01 15:56:13.506050] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.232 [2024-10-01 15:56:13.506066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.232 qpair failed and we were unable to recover it. 
00:38:34.232 [2024-10-01 15:56:13.515959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.232 [2024-10-01 15:56:13.516024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.232 [2024-10-01 15:56:13.516041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.232 [2024-10-01 15:56:13.516049] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.232 [2024-10-01 15:56:13.516055] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.232 [2024-10-01 15:56:13.516071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.232 qpair failed and we were unable to recover it. 
00:38:34.232 [2024-10-01 15:56:13.526017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.232 [2024-10-01 15:56:13.526086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.232 [2024-10-01 15:56:13.526104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.232 [2024-10-01 15:56:13.526112] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.232 [2024-10-01 15:56:13.526118] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.232 [2024-10-01 15:56:13.526135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.232 qpair failed and we were unable to recover it. 
00:38:34.232 [2024-10-01 15:56:13.535969] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.232 [2024-10-01 15:56:13.536034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.232 [2024-10-01 15:56:13.536052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.232 [2024-10-01 15:56:13.536060] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.232 [2024-10-01 15:56:13.536066] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.232 [2024-10-01 15:56:13.536083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.232 qpair failed and we were unable to recover it. 
00:38:34.232 [2024-10-01 15:56:13.545891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.232 [2024-10-01 15:56:13.545963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.232 [2024-10-01 15:56:13.545981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.232 [2024-10-01 15:56:13.545988] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.232 [2024-10-01 15:56:13.545994] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.232 [2024-10-01 15:56:13.546011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.232 qpair failed and we were unable to recover it. 
00:38:34.232 [2024-10-01 15:56:13.556053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.232 [2024-10-01 15:56:13.556120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.232 [2024-10-01 15:56:13.556138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.232 [2024-10-01 15:56:13.556146] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.232 [2024-10-01 15:56:13.556152] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.232 [2024-10-01 15:56:13.556169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.232 qpair failed and we were unable to recover it. 
00:38:34.232 [2024-10-01 15:56:13.566166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.232 [2024-10-01 15:56:13.566271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.232 [2024-10-01 15:56:13.566289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.232 [2024-10-01 15:56:13.566296] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.232 [2024-10-01 15:56:13.566302] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.232 [2024-10-01 15:56:13.566319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.232 qpair failed and we were unable to recover it. 
00:38:34.232 [2024-10-01 15:56:13.575998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.232 [2024-10-01 15:56:13.576071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.232 [2024-10-01 15:56:13.576089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.232 [2024-10-01 15:56:13.576096] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.232 [2024-10-01 15:56:13.576103] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.232 [2024-10-01 15:56:13.576119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.232 qpair failed and we were unable to recover it. 
00:38:34.232 [2024-10-01 15:56:13.586153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.232 [2024-10-01 15:56:13.586230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.232 [2024-10-01 15:56:13.586247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.232 [2024-10-01 15:56:13.586255] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.232 [2024-10-01 15:56:13.586261] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.232 [2024-10-01 15:56:13.586277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.232 qpair failed and we were unable to recover it. 
00:38:34.232 [2024-10-01 15:56:13.596203] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.232 [2024-10-01 15:56:13.596299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.232 [2024-10-01 15:56:13.596317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.232 [2024-10-01 15:56:13.596330] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.232 [2024-10-01 15:56:13.596338] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.232 [2024-10-01 15:56:13.596354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.232 qpair failed and we were unable to recover it. 
00:38:34.232 [2024-10-01 15:56:13.606243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.232 [2024-10-01 15:56:13.606363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.232 [2024-10-01 15:56:13.606383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.232 [2024-10-01 15:56:13.606390] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.232 [2024-10-01 15:56:13.606397] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.232 [2024-10-01 15:56:13.606413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.232 qpair failed and we were unable to recover it. 
00:38:34.232 [2024-10-01 15:56:13.616264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.232 [2024-10-01 15:56:13.616337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.232 [2024-10-01 15:56:13.616354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.232 [2024-10-01 15:56:13.616361] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.232 [2024-10-01 15:56:13.616368] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.232 [2024-10-01 15:56:13.616385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.232 qpair failed and we were unable to recover it. 
00:38:34.232 [2024-10-01 15:56:13.626245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.232 [2024-10-01 15:56:13.626326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.232 [2024-10-01 15:56:13.626380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.232 [2024-10-01 15:56:13.626393] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.232 [2024-10-01 15:56:13.626400] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.232 [2024-10-01 15:56:13.626431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.232 qpair failed and we were unable to recover it. 
00:38:34.232 [2024-10-01 15:56:13.636341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.233 [2024-10-01 15:56:13.636418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.233 [2024-10-01 15:56:13.636438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.233 [2024-10-01 15:56:13.636446] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.233 [2024-10-01 15:56:13.636453] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.233 [2024-10-01 15:56:13.636471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.233 qpair failed and we were unable to recover it. 
00:38:34.233 [2024-10-01 15:56:13.646380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.233 [2024-10-01 15:56:13.646458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.233 [2024-10-01 15:56:13.646477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.233 [2024-10-01 15:56:13.646484] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.233 [2024-10-01 15:56:13.646491] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.233 [2024-10-01 15:56:13.646508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.233 qpair failed and we were unable to recover it. 
00:38:34.233 [2024-10-01 15:56:13.656368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.233 [2024-10-01 15:56:13.656461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.233 [2024-10-01 15:56:13.656480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.233 [2024-10-01 15:56:13.656487] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.233 [2024-10-01 15:56:13.656493] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.233 [2024-10-01 15:56:13.656510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.233 qpair failed and we were unable to recover it. 
00:38:34.233 [2024-10-01 15:56:13.666248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.233 [2024-10-01 15:56:13.666313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.233 [2024-10-01 15:56:13.666331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.233 [2024-10-01 15:56:13.666338] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.233 [2024-10-01 15:56:13.666344] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.233 [2024-10-01 15:56:13.666360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.233 qpair failed and we were unable to recover it. 
00:38:34.233 [2024-10-01 15:56:13.676429] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.233 [2024-10-01 15:56:13.676512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.233 [2024-10-01 15:56:13.676531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.233 [2024-10-01 15:56:13.676538] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.233 [2024-10-01 15:56:13.676544] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.233 [2024-10-01 15:56:13.676561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.233 qpair failed and we were unable to recover it. 
00:38:34.497 [2024-10-01 15:56:13.686496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.497 [2024-10-01 15:56:13.686566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.497 [2024-10-01 15:56:13.686591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.497 [2024-10-01 15:56:13.686598] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.497 [2024-10-01 15:56:13.686605] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.497 [2024-10-01 15:56:13.686621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.497 qpair failed and we were unable to recover it. 
00:38:34.497 [2024-10-01 15:56:13.696498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.497 [2024-10-01 15:56:13.696565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.497 [2024-10-01 15:56:13.696584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.497 [2024-10-01 15:56:13.696591] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.497 [2024-10-01 15:56:13.696597] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.497 [2024-10-01 15:56:13.696614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.497 qpair failed and we were unable to recover it. 
00:38:34.497 [2024-10-01 15:56:13.706476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.497 [2024-10-01 15:56:13.706542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.497 [2024-10-01 15:56:13.706562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.497 [2024-10-01 15:56:13.706570] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.497 [2024-10-01 15:56:13.706578] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.497 [2024-10-01 15:56:13.706594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.497 qpair failed and we were unable to recover it. 
00:38:34.497 [2024-10-01 15:56:13.716421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.497 [2024-10-01 15:56:13.716531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.497 [2024-10-01 15:56:13.716550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.497 [2024-10-01 15:56:13.716557] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.497 [2024-10-01 15:56:13.716564] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.497 [2024-10-01 15:56:13.716581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.497 qpair failed and we were unable to recover it. 
00:38:34.497 [2024-10-01 15:56:13.726597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.497 [2024-10-01 15:56:13.726661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.497 [2024-10-01 15:56:13.726679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.497 [2024-10-01 15:56:13.726686] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.497 [2024-10-01 15:56:13.726693] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.497 [2024-10-01 15:56:13.726710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.497 qpair failed and we were unable to recover it. 
00:38:34.497 [2024-10-01 15:56:13.736596] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.497 [2024-10-01 15:56:13.736654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.497 [2024-10-01 15:56:13.736672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.497 [2024-10-01 15:56:13.736679] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.497 [2024-10-01 15:56:13.736686] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.497 [2024-10-01 15:56:13.736702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.497 qpair failed and we were unable to recover it. 
00:38:34.497 [2024-10-01 15:56:13.746637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.497 [2024-10-01 15:56:13.746699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.497 [2024-10-01 15:56:13.746719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.497 [2024-10-01 15:56:13.746730] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.497 [2024-10-01 15:56:13.746737] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.497 [2024-10-01 15:56:13.746754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.497 qpair failed and we were unable to recover it. 
00:38:34.497 [2024-10-01 15:56:13.756703] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.497 [2024-10-01 15:56:13.756806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.497 [2024-10-01 15:56:13.756826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.497 [2024-10-01 15:56:13.756834] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.497 [2024-10-01 15:56:13.756843] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.497 [2024-10-01 15:56:13.756860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.497 qpair failed and we were unable to recover it. 
00:38:34.497 [2024-10-01 15:56:13.766708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.497 [2024-10-01 15:56:13.766791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.497 [2024-10-01 15:56:13.766811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.498 [2024-10-01 15:56:13.766819] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.498 [2024-10-01 15:56:13.766826] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.498 [2024-10-01 15:56:13.766843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.498 qpair failed and we were unable to recover it. 
00:38:34.498 [2024-10-01 15:56:13.776743] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.498 [2024-10-01 15:56:13.776859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.498 [2024-10-01 15:56:13.776884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.498 [2024-10-01 15:56:13.776897] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.498 [2024-10-01 15:56:13.776905] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.498 [2024-10-01 15:56:13.776923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.498 qpair failed and we were unable to recover it. 
00:38:34.498 [2024-10-01 15:56:13.786740] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.498 [2024-10-01 15:56:13.786806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.498 [2024-10-01 15:56:13.786824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.498 [2024-10-01 15:56:13.786831] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.498 [2024-10-01 15:56:13.786840] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.498 [2024-10-01 15:56:13.786857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.498 qpair failed and we were unable to recover it. 
00:38:34.498 [2024-10-01 15:56:13.796770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.498 [2024-10-01 15:56:13.796833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.498 [2024-10-01 15:56:13.796851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.498 [2024-10-01 15:56:13.796858] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.498 [2024-10-01 15:56:13.796865] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.498 [2024-10-01 15:56:13.796882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.498 qpair failed and we were unable to recover it. 
00:38:34.498 [2024-10-01 15:56:13.806871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.498 [2024-10-01 15:56:13.806955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.498 [2024-10-01 15:56:13.806974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.498 [2024-10-01 15:56:13.806981] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.498 [2024-10-01 15:56:13.806987] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.498 [2024-10-01 15:56:13.807004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.498 qpair failed and we were unable to recover it. 
00:38:34.498 [2024-10-01 15:56:13.816831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.498 [2024-10-01 15:56:13.816903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.498 [2024-10-01 15:56:13.816922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.498 [2024-10-01 15:56:13.816929] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.498 [2024-10-01 15:56:13.816935] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.498 [2024-10-01 15:56:13.816958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.498 qpair failed and we were unable to recover it. 
00:38:34.498 [2024-10-01 15:56:13.826858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.498 [2024-10-01 15:56:13.826920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.498 [2024-10-01 15:56:13.826939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.498 [2024-10-01 15:56:13.826946] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.498 [2024-10-01 15:56:13.826952] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.498 [2024-10-01 15:56:13.826969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.498 qpair failed and we were unable to recover it. 
00:38:34.498 [2024-10-01 15:56:13.836921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.498 [2024-10-01 15:56:13.836989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.498 [2024-10-01 15:56:13.837009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.498 [2024-10-01 15:56:13.837019] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.498 [2024-10-01 15:56:13.837027] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.498 [2024-10-01 15:56:13.837046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.498 qpair failed and we were unable to recover it. 
00:38:34.498 [2024-10-01 15:56:13.846991] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.498 [2024-10-01 15:56:13.847070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.498 [2024-10-01 15:56:13.847089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.498 [2024-10-01 15:56:13.847096] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.498 [2024-10-01 15:56:13.847103] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.498 [2024-10-01 15:56:13.847118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.498 qpair failed and we were unable to recover it. 
00:38:34.498 [2024-10-01 15:56:13.856936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.498 [2024-10-01 15:56:13.857043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.498 [2024-10-01 15:56:13.857060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.498 [2024-10-01 15:56:13.857067] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.498 [2024-10-01 15:56:13.857074] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.498 [2024-10-01 15:56:13.857090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.498 qpair failed and we were unable to recover it. 
00:38:34.498 [2024-10-01 15:56:13.867008] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.498 [2024-10-01 15:56:13.867106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.498 [2024-10-01 15:56:13.867128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.498 [2024-10-01 15:56:13.867135] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.498 [2024-10-01 15:56:13.867142] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.498 [2024-10-01 15:56:13.867158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.498 qpair failed and we were unable to recover it. 
00:38:34.498 [2024-10-01 15:56:13.877014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.498 [2024-10-01 15:56:13.877080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.498 [2024-10-01 15:56:13.877098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.498 [2024-10-01 15:56:13.877105] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.498 [2024-10-01 15:56:13.877111] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.498 [2024-10-01 15:56:13.877128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.498 qpair failed and we were unable to recover it. 
00:38:34.498 [2024-10-01 15:56:13.887103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.498 [2024-10-01 15:56:13.887179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.498 [2024-10-01 15:56:13.887197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.498 [2024-10-01 15:56:13.887204] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.498 [2024-10-01 15:56:13.887210] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.498 [2024-10-01 15:56:13.887226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.498 qpair failed and we were unable to recover it. 
00:38:34.498 [2024-10-01 15:56:13.897072] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.498 [2024-10-01 15:56:13.897134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.498 [2024-10-01 15:56:13.897152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.498 [2024-10-01 15:56:13.897158] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.498 [2024-10-01 15:56:13.897165] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.498 [2024-10-01 15:56:13.897181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.498 qpair failed and we were unable to recover it. 
00:38:34.498 [2024-10-01 15:56:13.907140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.499 [2024-10-01 15:56:13.907214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.499 [2024-10-01 15:56:13.907232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.499 [2024-10-01 15:56:13.907240] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.499 [2024-10-01 15:56:13.907252] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.499 [2024-10-01 15:56:13.907268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.499 qpair failed and we were unable to recover it. 
00:38:34.499 [2024-10-01 15:56:13.917178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.499 [2024-10-01 15:56:13.917245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.499 [2024-10-01 15:56:13.917264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.499 [2024-10-01 15:56:13.917271] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.499 [2024-10-01 15:56:13.917277] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.499 [2024-10-01 15:56:13.917293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.499 qpair failed and we were unable to recover it. 
00:38:34.499 [2024-10-01 15:56:13.927235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.499 [2024-10-01 15:56:13.927301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.499 [2024-10-01 15:56:13.927318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.499 [2024-10-01 15:56:13.927325] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.499 [2024-10-01 15:56:13.927332] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.499 [2024-10-01 15:56:13.927349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.499 qpair failed and we were unable to recover it. 
00:38:34.499 [2024-10-01 15:56:13.937213] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.499 [2024-10-01 15:56:13.937277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.499 [2024-10-01 15:56:13.937296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.499 [2024-10-01 15:56:13.937303] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.499 [2024-10-01 15:56:13.937309] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.499 [2024-10-01 15:56:13.937325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.499 qpair failed and we were unable to recover it. 
00:38:34.499 [2024-10-01 15:56:13.947232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.499 [2024-10-01 15:56:13.947295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.499 [2024-10-01 15:56:13.947313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.499 [2024-10-01 15:56:13.947320] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.499 [2024-10-01 15:56:13.947327] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.499 [2024-10-01 15:56:13.947343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.499 qpair failed and we were unable to recover it. 
00:38:34.762 [2024-10-01 15:56:13.957311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.762 [2024-10-01 15:56:13.957435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.762 [2024-10-01 15:56:13.957455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.762 [2024-10-01 15:56:13.957462] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.762 [2024-10-01 15:56:13.957469] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.762 [2024-10-01 15:56:13.957486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.762 qpair failed and we were unable to recover it. 
00:38:34.762 [2024-10-01 15:56:13.967361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.762 [2024-10-01 15:56:13.967435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.762 [2024-10-01 15:56:13.967453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.762 [2024-10-01 15:56:13.967460] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.762 [2024-10-01 15:56:13.967466] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.762 [2024-10-01 15:56:13.967483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.762 qpair failed and we were unable to recover it. 
00:38:34.762 [2024-10-01 15:56:13.977339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.762 [2024-10-01 15:56:13.977403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.762 [2024-10-01 15:56:13.977424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.762 [2024-10-01 15:56:13.977431] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.763 [2024-10-01 15:56:13.977437] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.763 [2024-10-01 15:56:13.977455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.763 qpair failed and we were unable to recover it. 
00:38:34.763 [2024-10-01 15:56:13.987245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.763 [2024-10-01 15:56:13.987308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.763 [2024-10-01 15:56:13.987327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.763 [2024-10-01 15:56:13.987335] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.763 [2024-10-01 15:56:13.987341] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.763 [2024-10-01 15:56:13.987358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.763 qpair failed and we were unable to recover it. 
00:38:34.763 [2024-10-01 15:56:13.997410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.763 [2024-10-01 15:56:13.997478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.763 [2024-10-01 15:56:13.997497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.763 [2024-10-01 15:56:13.997504] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.763 [2024-10-01 15:56:13.997516] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.763 [2024-10-01 15:56:13.997532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.763 qpair failed and we were unable to recover it. 
00:38:34.763 [2024-10-01 15:56:14.007444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.763 [2024-10-01 15:56:14.007509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.763 [2024-10-01 15:56:14.007528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.763 [2024-10-01 15:56:14.007535] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.763 [2024-10-01 15:56:14.007541] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.763 [2024-10-01 15:56:14.007558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.763 qpair failed and we were unable to recover it. 
00:38:34.763 [2024-10-01 15:56:14.017423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.763 [2024-10-01 15:56:14.017484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.763 [2024-10-01 15:56:14.017503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.763 [2024-10-01 15:56:14.017510] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.763 [2024-10-01 15:56:14.017516] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.763 [2024-10-01 15:56:14.017533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.763 qpair failed and we were unable to recover it. 
00:38:34.763 [2024-10-01 15:56:14.027537] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.763 [2024-10-01 15:56:14.027632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.763 [2024-10-01 15:56:14.027651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.763 [2024-10-01 15:56:14.027658] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.763 [2024-10-01 15:56:14.027664] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.763 [2024-10-01 15:56:14.027681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.763 qpair failed and we were unable to recover it. 
00:38:34.763 [2024-10-01 15:56:14.037539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.763 [2024-10-01 15:56:14.037610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.763 [2024-10-01 15:56:14.037648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.763 [2024-10-01 15:56:14.037657] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.763 [2024-10-01 15:56:14.037664] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.763 [2024-10-01 15:56:14.037688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.763 qpair failed and we were unable to recover it. 
00:38:34.763 [2024-10-01 15:56:14.047488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.763 [2024-10-01 15:56:14.047571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.763 [2024-10-01 15:56:14.047608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.763 [2024-10-01 15:56:14.047617] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.763 [2024-10-01 15:56:14.047624] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.763 [2024-10-01 15:56:14.047649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.763 qpair failed and we were unable to recover it. 
00:38:34.763 [2024-10-01 15:56:14.057588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.763 [2024-10-01 15:56:14.057657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.763 [2024-10-01 15:56:14.057695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.763 [2024-10-01 15:56:14.057705] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.763 [2024-10-01 15:56:14.057713] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.763 [2024-10-01 15:56:14.057738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.763 qpair failed and we were unable to recover it. 
00:38:34.763 [2024-10-01 15:56:14.067605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.763 [2024-10-01 15:56:14.067675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.763 [2024-10-01 15:56:14.067695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.763 [2024-10-01 15:56:14.067702] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.763 [2024-10-01 15:56:14.067709] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.763 [2024-10-01 15:56:14.067728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.763 qpair failed and we were unable to recover it. 
00:38:34.763 [2024-10-01 15:56:14.077539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.763 [2024-10-01 15:56:14.077627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.763 [2024-10-01 15:56:14.077647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.763 [2024-10-01 15:56:14.077654] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.763 [2024-10-01 15:56:14.077660] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.763 [2024-10-01 15:56:14.077679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.763 qpair failed and we were unable to recover it. 
00:38:34.763 [2024-10-01 15:56:14.087710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.763 [2024-10-01 15:56:14.087786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.763 [2024-10-01 15:56:14.087804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.763 [2024-10-01 15:56:14.087818] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.763 [2024-10-01 15:56:14.087826] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.763 [2024-10-01 15:56:14.087843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.763 qpair failed and we were unable to recover it. 
00:38:34.763 [2024-10-01 15:56:14.097695] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.763 [2024-10-01 15:56:14.097753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.763 [2024-10-01 15:56:14.097772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.764 [2024-10-01 15:56:14.097780] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.764 [2024-10-01 15:56:14.097786] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.764 [2024-10-01 15:56:14.097803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.764 qpair failed and we were unable to recover it. 
00:38:34.764 [2024-10-01 15:56:14.107730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.764 [2024-10-01 15:56:14.107797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.764 [2024-10-01 15:56:14.107816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.764 [2024-10-01 15:56:14.107823] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.764 [2024-10-01 15:56:14.107829] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.764 [2024-10-01 15:56:14.107846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.764 qpair failed and we were unable to recover it. 
00:38:34.764 [2024-10-01 15:56:14.117766] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.764 [2024-10-01 15:56:14.117831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.764 [2024-10-01 15:56:14.117849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.764 [2024-10-01 15:56:14.117856] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.764 [2024-10-01 15:56:14.117862] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.764 [2024-10-01 15:56:14.117879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.764 qpair failed and we were unable to recover it. 
00:38:34.764 [2024-10-01 15:56:14.127852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.764 [2024-10-01 15:56:14.127919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.764 [2024-10-01 15:56:14.127938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.764 [2024-10-01 15:56:14.127946] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.764 [2024-10-01 15:56:14.127952] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.764 [2024-10-01 15:56:14.127969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.764 qpair failed and we were unable to recover it. 
00:38:34.764 [2024-10-01 15:56:14.137689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.764 [2024-10-01 15:56:14.137750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.764 [2024-10-01 15:56:14.137769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.764 [2024-10-01 15:56:14.137776] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.764 [2024-10-01 15:56:14.137783] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.764 [2024-10-01 15:56:14.137799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.764 qpair failed and we were unable to recover it. 
00:38:34.764 [2024-10-01 15:56:14.147732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.764 [2024-10-01 15:56:14.147795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.764 [2024-10-01 15:56:14.147813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.764 [2024-10-01 15:56:14.147820] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.764 [2024-10-01 15:56:14.147826] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.764 [2024-10-01 15:56:14.147844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.764 qpair failed and we were unable to recover it. 
00:38:34.764 [2024-10-01 15:56:14.157886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.764 [2024-10-01 15:56:14.157958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.764 [2024-10-01 15:56:14.157977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.764 [2024-10-01 15:56:14.157985] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.764 [2024-10-01 15:56:14.157992] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.764 [2024-10-01 15:56:14.158009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.764 qpair failed and we were unable to recover it. 
00:38:34.764 [2024-10-01 15:56:14.167956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.764 [2024-10-01 15:56:14.168029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.764 [2024-10-01 15:56:14.168047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.764 [2024-10-01 15:56:14.168054] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.764 [2024-10-01 15:56:14.168061] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.764 [2024-10-01 15:56:14.168077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.764 qpair failed and we were unable to recover it. 
00:38:34.764 [2024-10-01 15:56:14.177815] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.764 [2024-10-01 15:56:14.177878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.764 [2024-10-01 15:56:14.177903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.764 [2024-10-01 15:56:14.177917] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.764 [2024-10-01 15:56:14.177923] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.764 [2024-10-01 15:56:14.177940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.764 qpair failed and we were unable to recover it. 
00:38:34.764 [2024-10-01 15:56:14.187970] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.764 [2024-10-01 15:56:14.188033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.764 [2024-10-01 15:56:14.188051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.764 [2024-10-01 15:56:14.188058] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.764 [2024-10-01 15:56:14.188064] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.764 [2024-10-01 15:56:14.188081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.764 qpair failed and we were unable to recover it. 
00:38:34.764 [2024-10-01 15:56:14.197994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.764 [2024-10-01 15:56:14.198057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.764 [2024-10-01 15:56:14.198074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.764 [2024-10-01 15:56:14.198081] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.764 [2024-10-01 15:56:14.198088] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.764 [2024-10-01 15:56:14.198103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.764 qpair failed and we were unable to recover it. 
00:38:34.764 [2024-10-01 15:56:14.208132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.764 [2024-10-01 15:56:14.208238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.764 [2024-10-01 15:56:14.208256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.764 [2024-10-01 15:56:14.208263] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.764 [2024-10-01 15:56:14.208269] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:34.764 [2024-10-01 15:56:14.208286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:34.764 qpair failed and we were unable to recover it. 
00:38:35.027 [2024-10-01 15:56:14.218050] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.027 [2024-10-01 15:56:14.218134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.027 [2024-10-01 15:56:14.218151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.027 [2024-10-01 15:56:14.218157] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.027 [2024-10-01 15:56:14.218164] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:35.027 [2024-10-01 15:56:14.218180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:35.027 qpair failed and we were unable to recover it. 
00:38:35.027 [2024-10-01 15:56:14.228076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.027 [2024-10-01 15:56:14.228135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.027 [2024-10-01 15:56:14.228150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.027 [2024-10-01 15:56:14.228157] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.027 [2024-10-01 15:56:14.228163] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:35.027 [2024-10-01 15:56:14.228179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:35.027 qpair failed and we were unable to recover it. 
00:38:35.027 [2024-10-01 15:56:14.238089] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.027 [2024-10-01 15:56:14.238146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.027 [2024-10-01 15:56:14.238161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.027 [2024-10-01 15:56:14.238167] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.027 [2024-10-01 15:56:14.238174] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:35.027 [2024-10-01 15:56:14.238188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:35.027 qpair failed and we were unable to recover it. 
00:38:35.027 [2024-10-01 15:56:14.248126] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.027 [2024-10-01 15:56:14.248179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.027 [2024-10-01 15:56:14.248194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.027 [2024-10-01 15:56:14.248201] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.027 [2024-10-01 15:56:14.248207] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:35.027 [2024-10-01 15:56:14.248221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:35.027 qpair failed and we were unable to recover it. 
00:38:35.027 [2024-10-01 15:56:14.258137] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.027 [2024-10-01 15:56:14.258238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.027 [2024-10-01 15:56:14.258253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.027 [2024-10-01 15:56:14.258259] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.027 [2024-10-01 15:56:14.258265] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:35.027 [2024-10-01 15:56:14.258279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:35.027 qpair failed and we were unable to recover it. 
00:38:35.027 [2024-10-01 15:56:14.268177] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.027 [2024-10-01 15:56:14.268235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.027 [2024-10-01 15:56:14.268255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.027 [2024-10-01 15:56:14.268262] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.027 [2024-10-01 15:56:14.268268] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:35.027 [2024-10-01 15:56:14.268286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:35.027 qpair failed and we were unable to recover it. 
00:38:35.027 [2024-10-01 15:56:14.278240] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.027 [2024-10-01 15:56:14.278309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.027 [2024-10-01 15:56:14.278325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.027 [2024-10-01 15:56:14.278331] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.027 [2024-10-01 15:56:14.278338] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:35.028 [2024-10-01 15:56:14.278352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:35.028 qpair failed and we were unable to recover it. 
00:38:35.028 [2024-10-01 15:56:14.288219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.028 [2024-10-01 15:56:14.288271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.028 [2024-10-01 15:56:14.288285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.028 [2024-10-01 15:56:14.288292] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.028 [2024-10-01 15:56:14.288299] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:35.028 [2024-10-01 15:56:14.288313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:35.028 qpair failed and we were unable to recover it. 
00:38:35.028 [2024-10-01 15:56:14.298095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.028 [2024-10-01 15:56:14.298147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.028 [2024-10-01 15:56:14.298161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.028 [2024-10-01 15:56:14.298168] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.028 [2024-10-01 15:56:14.298174] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:35.028 [2024-10-01 15:56:14.298188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:35.028 qpair failed and we were unable to recover it. 
00:38:35.028 [2024-10-01 15:56:14.308230] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.028 [2024-10-01 15:56:14.308294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.028 [2024-10-01 15:56:14.308308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.028 [2024-10-01 15:56:14.308315] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.028 [2024-10-01 15:56:14.308321] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:35.028 [2024-10-01 15:56:14.308339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:35.028 qpair failed and we were unable to recover it. 
00:38:35.028 [2024-10-01 15:56:14.318112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.028 [2024-10-01 15:56:14.318158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.028 [2024-10-01 15:56:14.318171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.028 [2024-10-01 15:56:14.318178] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.028 [2024-10-01 15:56:14.318184] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:35.028 [2024-10-01 15:56:14.318198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:35.028 qpair failed and we were unable to recover it. 
00:38:35.028 [2024-10-01 15:56:14.328298] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.028 [2024-10-01 15:56:14.328346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.028 [2024-10-01 15:56:14.328359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.028 [2024-10-01 15:56:14.328366] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.028 [2024-10-01 15:56:14.328372] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:35.028 [2024-10-01 15:56:14.328386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:35.028 qpair failed and we were unable to recover it. 
00:38:35.028 [2024-10-01 15:56:14.338156] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.028 [2024-10-01 15:56:14.338202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.028 [2024-10-01 15:56:14.338215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.028 [2024-10-01 15:56:14.338222] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.028 [2024-10-01 15:56:14.338228] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:35.028 [2024-10-01 15:56:14.338242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:35.028 qpair failed and we were unable to recover it. 
00:38:35.028 [2024-10-01 15:56:14.348356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.028 [2024-10-01 15:56:14.348406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.028 [2024-10-01 15:56:14.348420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.028 [2024-10-01 15:56:14.348426] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.028 [2024-10-01 15:56:14.348432] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:35.028 [2024-10-01 15:56:14.348446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:35.028 qpair failed and we were unable to recover it. 
00:38:35.028 [2024-10-01 15:56:14.358313] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.028 [2024-10-01 15:56:14.358361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.028 [2024-10-01 15:56:14.358382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.028 [2024-10-01 15:56:14.358389] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.028 [2024-10-01 15:56:14.358395] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:35.028 [2024-10-01 15:56:14.358409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:35.028 qpair failed and we were unable to recover it. 
00:38:35.028 [2024-10-01 15:56:14.368410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.028 [2024-10-01 15:56:14.368460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.028 [2024-10-01 15:56:14.368473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.028 [2024-10-01 15:56:14.368480] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.028 [2024-10-01 15:56:14.368486] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:35.028 [2024-10-01 15:56:14.368500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:35.028 qpair failed and we were unable to recover it. 
00:38:35.028 [2024-10-01 15:56:14.378254] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.028 [2024-10-01 15:56:14.378299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.028 [2024-10-01 15:56:14.378312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.028 [2024-10-01 15:56:14.378319] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.028 [2024-10-01 15:56:14.378326] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:35.028 [2024-10-01 15:56:14.378339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:35.028 qpair failed and we were unable to recover it. 
00:38:35.028 [2024-10-01 15:56:14.388344] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.028 [2024-10-01 15:56:14.388395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.028 [2024-10-01 15:56:14.388409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.028 [2024-10-01 15:56:14.388415] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.028 [2024-10-01 15:56:14.388421] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:35.028 [2024-10-01 15:56:14.388435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:35.028 qpair failed and we were unable to recover it. 
00:38:35.028 [2024-10-01 15:56:14.398439] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.028 [2024-10-01 15:56:14.398486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.028 [2024-10-01 15:56:14.398499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.028 [2024-10-01 15:56:14.398506] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.029 [2024-10-01 15:56:14.398512] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:35.029 [2024-10-01 15:56:14.398529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:35.029 qpair failed and we were unable to recover it. 
00:38:35.029 [2024-10-01 15:56:14.408397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.029 [2024-10-01 15:56:14.408444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.029 [2024-10-01 15:56:14.408457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.029 [2024-10-01 15:56:14.408464] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.029 [2024-10-01 15:56:14.408470] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:35.029 [2024-10-01 15:56:14.408484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:35.029 qpair failed and we were unable to recover it. 
00:38:35.029 [2024-10-01 15:56:14.418486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.029 [2024-10-01 15:56:14.418530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.029 [2024-10-01 15:56:14.418543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.029 [2024-10-01 15:56:14.418550] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.029 [2024-10-01 15:56:14.418556] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:35.029 [2024-10-01 15:56:14.418570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:35.029 qpair failed and we were unable to recover it. 
00:38:35.029 [2024-10-01 15:56:14.428382] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.029 [2024-10-01 15:56:14.428422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.029 [2024-10-01 15:56:14.428436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.029 [2024-10-01 15:56:14.428443] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.029 [2024-10-01 15:56:14.428449] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:35.029 [2024-10-01 15:56:14.428463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:35.029 qpair failed and we were unable to recover it. 
00:38:35.029 [2024-10-01 15:56:14.438414] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.029 [2024-10-01 15:56:14.438459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.029 [2024-10-01 15:56:14.438472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.029 [2024-10-01 15:56:14.438479] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.029 [2024-10-01 15:56:14.438485] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:35.029 [2024-10-01 15:56:14.438499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:35.029 qpair failed and we were unable to recover it. 
00:38:35.029 [2024-10-01 15:56:14.448570] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.029 [2024-10-01 15:56:14.448617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.029 [2024-10-01 15:56:14.448630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.029 [2024-10-01 15:56:14.448637] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.029 [2024-10-01 15:56:14.448643] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:35.029 [2024-10-01 15:56:14.448656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:35.029 qpair failed and we were unable to recover it. 
00:38:35.029 [2024-10-01 15:56:14.458562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.029 [2024-10-01 15:56:14.458608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.029 [2024-10-01 15:56:14.458621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.029 [2024-10-01 15:56:14.458628] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.029 [2024-10-01 15:56:14.458635] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:35.029 [2024-10-01 15:56:14.458648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:35.029 qpair failed and we were unable to recover it. 
00:38:35.029 [2024-10-01 15:56:14.468607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.029 [2024-10-01 15:56:14.468651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.029 [2024-10-01 15:56:14.468664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.029 [2024-10-01 15:56:14.468671] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.029 [2024-10-01 15:56:14.468678] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:35.029 [2024-10-01 15:56:14.468691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:35.029 qpair failed and we were unable to recover it. 
00:38:35.029 [2024-10-01 15:56:14.478676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.029 [2024-10-01 15:56:14.478720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.029 [2024-10-01 15:56:14.478734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.029 [2024-10-01 15:56:14.478741] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.029 [2024-10-01 15:56:14.478747] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:35.029 [2024-10-01 15:56:14.478761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:35.029 qpair failed and we were unable to recover it. 
00:38:35.293 [2024-10-01 15:56:14.488685] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.293 [2024-10-01 15:56:14.488731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.293 [2024-10-01 15:56:14.488745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.293 [2024-10-01 15:56:14.488751] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.293 [2024-10-01 15:56:14.488761] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:35.293 [2024-10-01 15:56:14.488775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:35.293 qpair failed and we were unable to recover it. 
00:38:35.293 [2024-10-01 15:56:14.498675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.293 [2024-10-01 15:56:14.498717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.293 [2024-10-01 15:56:14.498730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.293 [2024-10-01 15:56:14.498737] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.293 [2024-10-01 15:56:14.498743] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:35.293 [2024-10-01 15:56:14.498757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:35.293 qpair failed and we were unable to recover it. 
00:38:35.293 [2024-10-01 15:56:14.508720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.293 [2024-10-01 15:56:14.508767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.293 [2024-10-01 15:56:14.508781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.293 [2024-10-01 15:56:14.508787] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.293 [2024-10-01 15:56:14.508794] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:35.293 [2024-10-01 15:56:14.508807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:35.293 qpair failed and we were unable to recover it. 
00:38:35.293 [2024-10-01 15:56:14.518759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.293 [2024-10-01 15:56:14.518804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.293 [2024-10-01 15:56:14.518817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.293 [2024-10-01 15:56:14.518824] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.293 [2024-10-01 15:56:14.518830] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:35.293 [2024-10-01 15:56:14.518844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:35.293 qpair failed and we were unable to recover it. 
00:38:35.293 [2024-10-01 15:56:14.528800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.293 [2024-10-01 15:56:14.528852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.293 [2024-10-01 15:56:14.528865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.293 [2024-10-01 15:56:14.528872] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.293 [2024-10-01 15:56:14.528878] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:35.293 [2024-10-01 15:56:14.528892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:35.293 qpair failed and we were unable to recover it. 
00:38:35.293 [2024-10-01 15:56:14.538681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.293 [2024-10-01 15:56:14.538732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.293 [2024-10-01 15:56:14.538746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.293 [2024-10-01 15:56:14.538752] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.293 [2024-10-01 15:56:14.538758] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:35.293 [2024-10-01 15:56:14.538772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:35.293 qpair failed and we were unable to recover it. 
00:38:35.293 [2024-10-01 15:56:14.548818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.293 [2024-10-01 15:56:14.548863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.293 [2024-10-01 15:56:14.548877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.293 [2024-10-01 15:56:14.548884] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.293 [2024-10-01 15:56:14.548890] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:35.293 [2024-10-01 15:56:14.548908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:35.293 qpair failed and we were unable to recover it. 
00:38:35.293 [2024-10-01 15:56:14.558874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.293 [2024-10-01 15:56:14.558925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.293 [2024-10-01 15:56:14.558938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.293 [2024-10-01 15:56:14.558945] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.293 [2024-10-01 15:56:14.558951] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:35.293 [2024-10-01 15:56:14.558965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:35.293 qpair failed and we were unable to recover it. 
00:38:35.293 [2024-10-01 15:56:14.568917] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.293 [2024-10-01 15:56:14.568968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.293 [2024-10-01 15:56:14.568981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.293 [2024-10-01 15:56:14.568987] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.293 [2024-10-01 15:56:14.568994] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:35.293 [2024-10-01 15:56:14.569007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:35.293 qpair failed and we were unable to recover it. 
00:38:35.294 [2024-10-01 15:56:14.578782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.294 [2024-10-01 15:56:14.578823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.294 [2024-10-01 15:56:14.578836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.294 [2024-10-01 15:56:14.578847] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.294 [2024-10-01 15:56:14.578853] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:35.294 [2024-10-01 15:56:14.578867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:35.294 qpair failed and we were unable to recover it. 
00:38:35.294 [2024-10-01 15:56:14.588811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.294 [2024-10-01 15:56:14.588907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.294 [2024-10-01 15:56:14.588920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.294 [2024-10-01 15:56:14.588927] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.294 [2024-10-01 15:56:14.588933] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:35.294 [2024-10-01 15:56:14.588947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:35.294 qpair failed and we were unable to recover it. 
00:38:35.294 [2024-10-01 15:56:14.598975] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.294 [2024-10-01 15:56:14.599034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.294 [2024-10-01 15:56:14.599047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.294 [2024-10-01 15:56:14.599054] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.294 [2024-10-01 15:56:14.599060] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:35.294 [2024-10-01 15:56:14.599074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:35.294 qpair failed and we were unable to recover it. 
00:38:35.294 [2024-10-01 15:56:14.609007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.294 [2024-10-01 15:56:14.609059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.294 [2024-10-01 15:56:14.609072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.294 [2024-10-01 15:56:14.609079] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.294 [2024-10-01 15:56:14.609085] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:35.294 [2024-10-01 15:56:14.609099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:35.294 qpair failed and we were unable to recover it. 
00:38:35.294 [2024-10-01 15:56:14.619015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.294 [2024-10-01 15:56:14.619058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.294 [2024-10-01 15:56:14.619071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.294 [2024-10-01 15:56:14.619078] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.294 [2024-10-01 15:56:14.619084] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.294 [2024-10-01 15:56:14.619098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.294 qpair failed and we were unable to recover it.
00:38:35.294 [2024-10-01 15:56:14.629056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.294 [2024-10-01 15:56:14.629102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.294 [2024-10-01 15:56:14.629115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.294 [2024-10-01 15:56:14.629122] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.294 [2024-10-01 15:56:14.629128] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.294 [2024-10-01 15:56:14.629142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.294 qpair failed and we were unable to recover it.
00:38:35.294 [2024-10-01 15:56:14.639092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.294 [2024-10-01 15:56:14.639137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.294 [2024-10-01 15:56:14.639150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.294 [2024-10-01 15:56:14.639156] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.294 [2024-10-01 15:56:14.639163] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.294 [2024-10-01 15:56:14.639177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.294 qpair failed and we were unable to recover it.
00:38:35.294 [2024-10-01 15:56:14.649131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.294 [2024-10-01 15:56:14.649212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.294 [2024-10-01 15:56:14.649227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.294 [2024-10-01 15:56:14.649233] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.294 [2024-10-01 15:56:14.649240] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.294 [2024-10-01 15:56:14.649258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.294 qpair failed and we were unable to recover it.
00:38:35.294 [2024-10-01 15:56:14.659140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.294 [2024-10-01 15:56:14.659179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.294 [2024-10-01 15:56:14.659193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.294 [2024-10-01 15:56:14.659200] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.294 [2024-10-01 15:56:14.659206] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.294 [2024-10-01 15:56:14.659220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.294 qpair failed and we were unable to recover it.
00:38:35.294 [2024-10-01 15:56:14.669137] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.294 [2024-10-01 15:56:14.669181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.294 [2024-10-01 15:56:14.669194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.294 [2024-10-01 15:56:14.669204] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.294 [2024-10-01 15:56:14.669210] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.294 [2024-10-01 15:56:14.669225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.294 qpair failed and we were unable to recover it.
00:38:35.294 [2024-10-01 15:56:14.679181] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.294 [2024-10-01 15:56:14.679241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.294 [2024-10-01 15:56:14.679254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.294 [2024-10-01 15:56:14.679261] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.294 [2024-10-01 15:56:14.679267] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.294 [2024-10-01 15:56:14.679280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.294 qpair failed and we were unable to recover it.
00:38:35.294 [2024-10-01 15:56:14.689138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.294 [2024-10-01 15:56:14.689204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.294 [2024-10-01 15:56:14.689217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.294 [2024-10-01 15:56:14.689224] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.294 [2024-10-01 15:56:14.689230] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.295 [2024-10-01 15:56:14.689244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.295 qpair failed and we were unable to recover it.
00:38:35.295 [2024-10-01 15:56:14.699224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.295 [2024-10-01 15:56:14.699267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.295 [2024-10-01 15:56:14.699280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.295 [2024-10-01 15:56:14.699287] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.295 [2024-10-01 15:56:14.699293] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.295 [2024-10-01 15:56:14.699307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.295 qpair failed and we were unable to recover it.
00:38:35.295 [2024-10-01 15:56:14.709273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.295 [2024-10-01 15:56:14.709318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.295 [2024-10-01 15:56:14.709331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.295 [2024-10-01 15:56:14.709338] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.295 [2024-10-01 15:56:14.709344] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.295 [2024-10-01 15:56:14.709358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.295 qpair failed and we were unable to recover it.
00:38:35.295 [2024-10-01 15:56:14.719295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.295 [2024-10-01 15:56:14.719346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.295 [2024-10-01 15:56:14.719360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.295 [2024-10-01 15:56:14.719366] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.295 [2024-10-01 15:56:14.719372] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.295 [2024-10-01 15:56:14.719386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.295 qpair failed and we were unable to recover it.
00:38:35.295 [2024-10-01 15:56:14.729208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.295 [2024-10-01 15:56:14.729254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.295 [2024-10-01 15:56:14.729268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.295 [2024-10-01 15:56:14.729274] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.295 [2024-10-01 15:56:14.729281] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.295 [2024-10-01 15:56:14.729294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.295 qpair failed and we were unable to recover it.
00:38:35.295 [2024-10-01 15:56:14.739351] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.295 [2024-10-01 15:56:14.739393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.295 [2024-10-01 15:56:14.739406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.295 [2024-10-01 15:56:14.739413] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.295 [2024-10-01 15:56:14.739419] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.295 [2024-10-01 15:56:14.739433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.295 qpair failed and we were unable to recover it.
00:38:35.558 [2024-10-01 15:56:14.749249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.558 [2024-10-01 15:56:14.749291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.558 [2024-10-01 15:56:14.749304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.558 [2024-10-01 15:56:14.749311] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.558 [2024-10-01 15:56:14.749317] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.558 [2024-10-01 15:56:14.749331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.558 qpair failed and we were unable to recover it.
00:38:35.558 [2024-10-01 15:56:14.759405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.558 [2024-10-01 15:56:14.759457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.558 [2024-10-01 15:56:14.759480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.558 [2024-10-01 15:56:14.759488] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.558 [2024-10-01 15:56:14.759495] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.558 [2024-10-01 15:56:14.759512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.558 qpair failed and we were unable to recover it.
00:38:35.558 [2024-10-01 15:56:14.769350] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.558 [2024-10-01 15:56:14.769412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.558 [2024-10-01 15:56:14.769427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.558 [2024-10-01 15:56:14.769434] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.558 [2024-10-01 15:56:14.769442] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.558 [2024-10-01 15:56:14.769460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.558 qpair failed and we were unable to recover it.
00:38:35.558 [2024-10-01 15:56:14.779504] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.558 [2024-10-01 15:56:14.779558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.558 [2024-10-01 15:56:14.779573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.558 [2024-10-01 15:56:14.779579] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.558 [2024-10-01 15:56:14.779585] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.558 [2024-10-01 15:56:14.779600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.558 qpair failed and we were unable to recover it.
00:38:35.558 [2024-10-01 15:56:14.789490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.558 [2024-10-01 15:56:14.789534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.558 [2024-10-01 15:56:14.789547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.558 [2024-10-01 15:56:14.789553] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.558 [2024-10-01 15:56:14.789560] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.558 [2024-10-01 15:56:14.789574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.558 qpair failed and we were unable to recover it.
00:38:35.558 [2024-10-01 15:56:14.799509] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.558 [2024-10-01 15:56:14.799564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.558 [2024-10-01 15:56:14.799577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.558 [2024-10-01 15:56:14.799584] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.558 [2024-10-01 15:56:14.799590] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.558 [2024-10-01 15:56:14.799608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.558 qpair failed and we were unable to recover it.
00:38:35.558 [2024-10-01 15:56:14.809572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.558 [2024-10-01 15:56:14.809623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.558 [2024-10-01 15:56:14.809636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.558 [2024-10-01 15:56:14.809643] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.558 [2024-10-01 15:56:14.809649] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.558 [2024-10-01 15:56:14.809663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.558 qpair failed and we were unable to recover it.
00:38:35.558 [2024-10-01 15:56:14.819565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.558 [2024-10-01 15:56:14.819615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.558 [2024-10-01 15:56:14.819629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.558 [2024-10-01 15:56:14.819635] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.558 [2024-10-01 15:56:14.819641] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.558 [2024-10-01 15:56:14.819655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.558 qpair failed and we were unable to recover it.
00:38:35.558 [2024-10-01 15:56:14.829600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.558 [2024-10-01 15:56:14.829647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.558 [2024-10-01 15:56:14.829661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.558 [2024-10-01 15:56:14.829667] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.558 [2024-10-01 15:56:14.829674] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.558 [2024-10-01 15:56:14.829688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.558 qpair failed and we were unable to recover it.
00:38:35.558 [2024-10-01 15:56:14.839582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.558 [2024-10-01 15:56:14.839633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.559 [2024-10-01 15:56:14.839647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.559 [2024-10-01 15:56:14.839654] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.559 [2024-10-01 15:56:14.839660] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.559 [2024-10-01 15:56:14.839674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.559 qpair failed and we were unable to recover it.
00:38:35.559 [2024-10-01 15:56:14.849653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.559 [2024-10-01 15:56:14.849700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.559 [2024-10-01 15:56:14.849717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.559 [2024-10-01 15:56:14.849724] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.559 [2024-10-01 15:56:14.849730] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.559 [2024-10-01 15:56:14.849745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.559 qpair failed and we were unable to recover it.
00:38:35.559 [2024-10-01 15:56:14.859722] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.559 [2024-10-01 15:56:14.859803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.559 [2024-10-01 15:56:14.859817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.559 [2024-10-01 15:56:14.859824] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.559 [2024-10-01 15:56:14.859830] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.559 [2024-10-01 15:56:14.859844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.559 qpair failed and we were unable to recover it.
00:38:35.559 [2024-10-01 15:56:14.869699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.559 [2024-10-01 15:56:14.869742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.559 [2024-10-01 15:56:14.869756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.559 [2024-10-01 15:56:14.869762] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.559 [2024-10-01 15:56:14.869769] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.559 [2024-10-01 15:56:14.869783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.559 qpair failed and we were unable to recover it.
00:38:35.559 [2024-10-01 15:56:14.879737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.559 [2024-10-01 15:56:14.879781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.559 [2024-10-01 15:56:14.879795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.559 [2024-10-01 15:56:14.879801] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.559 [2024-10-01 15:56:14.879807] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.559 [2024-10-01 15:56:14.879821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.559 qpair failed and we were unable to recover it.
00:38:35.559 [2024-10-01 15:56:14.889644] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.559 [2024-10-01 15:56:14.889693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.559 [2024-10-01 15:56:14.889706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.559 [2024-10-01 15:56:14.889713] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.559 [2024-10-01 15:56:14.889719] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.559 [2024-10-01 15:56:14.889736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.559 qpair failed and we were unable to recover it.
00:38:35.559 [2024-10-01 15:56:14.899777] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.559 [2024-10-01 15:56:14.899818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.559 [2024-10-01 15:56:14.899832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.559 [2024-10-01 15:56:14.899839] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.559 [2024-10-01 15:56:14.899845] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.559 [2024-10-01 15:56:14.899858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.559 qpair failed and we were unable to recover it.
00:38:35.559 [2024-10-01 15:56:14.909869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.559 [2024-10-01 15:56:14.909931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.559 [2024-10-01 15:56:14.909944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.559 [2024-10-01 15:56:14.909952] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.559 [2024-10-01 15:56:14.909958] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.559 [2024-10-01 15:56:14.909972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.559 qpair failed and we were unable to recover it.
00:38:35.559 [2024-10-01 15:56:14.919875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.559 [2024-10-01 15:56:14.919927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.559 [2024-10-01 15:56:14.919940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.559 [2024-10-01 15:56:14.919947] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.559 [2024-10-01 15:56:14.919954] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.559 [2024-10-01 15:56:14.919967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.559 qpair failed and we were unable to recover it.
00:38:35.559 [2024-10-01 15:56:14.929913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.559 [2024-10-01 15:56:14.930007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.559 [2024-10-01 15:56:14.930020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.559 [2024-10-01 15:56:14.930027] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.559 [2024-10-01 15:56:14.930033] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.559 [2024-10-01 15:56:14.930047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.559 qpair failed and we were unable to recover it.
00:38:35.559 [2024-10-01 15:56:14.939916] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.559 [2024-10-01 15:56:14.939991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.559 [2024-10-01 15:56:14.940008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.559 [2024-10-01 15:56:14.940015] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.559 [2024-10-01 15:56:14.940021] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:35.559 [2024-10-01 15:56:14.940035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:35.559 qpair failed and we were unable to recover it. 
00:38:35.559 [2024-10-01 15:56:14.949779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.559 [2024-10-01 15:56:14.949824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.559 [2024-10-01 15:56:14.949836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.559 [2024-10-01 15:56:14.949843] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.559 [2024-10-01 15:56:14.949849] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:35.559 [2024-10-01 15:56:14.949863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:35.559 qpair failed and we were unable to recover it. 
00:38:35.559 [2024-10-01 15:56:14.959929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.559 [2024-10-01 15:56:14.959976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.559 [2024-10-01 15:56:14.959990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.559 [2024-10-01 15:56:14.959996] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.559 [2024-10-01 15:56:14.960003] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:35.559 [2024-10-01 15:56:14.960016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:35.559 qpair failed and we were unable to recover it. 
00:38:35.559 [2024-10-01 15:56:14.970013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.559 [2024-10-01 15:56:14.970063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.559 [2024-10-01 15:56:14.970076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.559 [2024-10-01 15:56:14.970083] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.559 [2024-10-01 15:56:14.970089] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:35.560 [2024-10-01 15:56:14.970103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:35.560 qpair failed and we were unable to recover it. 
00:38:35.560 [2024-10-01 15:56:14.979993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.560 [2024-10-01 15:56:14.980040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.560 [2024-10-01 15:56:14.980053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.560 [2024-10-01 15:56:14.980060] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.560 [2024-10-01 15:56:14.980070] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.560 [2024-10-01 15:56:14.980084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.560 qpair failed and we were unable to recover it.
00:38:35.560 [2024-10-01 15:56:14.990026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.560 [2024-10-01 15:56:14.990067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.560 [2024-10-01 15:56:14.990080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.560 [2024-10-01 15:56:14.990086] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.560 [2024-10-01 15:56:14.990093] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.560 [2024-10-01 15:56:14.990107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.560 qpair failed and we were unable to recover it.
00:38:35.560 [2024-10-01 15:56:15.000082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.560 [2024-10-01 15:56:15.000128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.560 [2024-10-01 15:56:15.000141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.560 [2024-10-01 15:56:15.000148] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.560 [2024-10-01 15:56:15.000154] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.560 [2024-10-01 15:56:15.000168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.560 qpair failed and we were unable to recover it.
00:38:35.560 [2024-10-01 15:56:15.010080] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.560 [2024-10-01 15:56:15.010130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.560 [2024-10-01 15:56:15.010148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.560 [2024-10-01 15:56:15.010155] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.560 [2024-10-01 15:56:15.010161] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.560 [2024-10-01 15:56:15.010178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.560 qpair failed and we were unable to recover it.
00:38:35.822 [2024-10-01 15:56:15.020098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.822 [2024-10-01 15:56:15.020143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.823 [2024-10-01 15:56:15.020157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.823 [2024-10-01 15:56:15.020164] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.823 [2024-10-01 15:56:15.020170] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.823 [2024-10-01 15:56:15.020185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.823 qpair failed and we were unable to recover it.
00:38:35.823 [2024-10-01 15:56:15.030003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.823 [2024-10-01 15:56:15.030050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.823 [2024-10-01 15:56:15.030064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.823 [2024-10-01 15:56:15.030071] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.823 [2024-10-01 15:56:15.030077] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.823 [2024-10-01 15:56:15.030092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.823 qpair failed and we were unable to recover it.
00:38:35.823 [2024-10-01 15:56:15.040165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.823 [2024-10-01 15:56:15.040210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.823 [2024-10-01 15:56:15.040224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.823 [2024-10-01 15:56:15.040230] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.823 [2024-10-01 15:56:15.040237] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.823 [2024-10-01 15:56:15.040250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.823 qpair failed and we were unable to recover it.
00:38:35.823 [2024-10-01 15:56:15.050200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.823 [2024-10-01 15:56:15.050246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.823 [2024-10-01 15:56:15.050260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.823 [2024-10-01 15:56:15.050266] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.823 [2024-10-01 15:56:15.050273] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.823 [2024-10-01 15:56:15.050286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.823 qpair failed and we were unable to recover it.
00:38:35.823 [2024-10-01 15:56:15.060224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.823 [2024-10-01 15:56:15.060269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.823 [2024-10-01 15:56:15.060282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.823 [2024-10-01 15:56:15.060289] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.823 [2024-10-01 15:56:15.060295] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.823 [2024-10-01 15:56:15.060309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.823 qpair failed and we were unable to recover it.
00:38:35.823 [2024-10-01 15:56:15.070241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.823 [2024-10-01 15:56:15.070293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.823 [2024-10-01 15:56:15.070307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.823 [2024-10-01 15:56:15.070313] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.823 [2024-10-01 15:56:15.070324] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.823 [2024-10-01 15:56:15.070338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.823 qpair failed and we were unable to recover it.
00:38:35.823 [2024-10-01 15:56:15.080271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.823 [2024-10-01 15:56:15.080315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.823 [2024-10-01 15:56:15.080329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.823 [2024-10-01 15:56:15.080336] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.823 [2024-10-01 15:56:15.080342] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.823 [2024-10-01 15:56:15.080356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.823 qpair failed and we were unable to recover it.
00:38:35.823 [2024-10-01 15:56:15.090172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.823 [2024-10-01 15:56:15.090218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.823 [2024-10-01 15:56:15.090231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.823 [2024-10-01 15:56:15.090238] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.823 [2024-10-01 15:56:15.090244] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.823 [2024-10-01 15:56:15.090258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.823 qpair failed and we were unable to recover it.
00:38:35.823 [2024-10-01 15:56:15.100313] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.823 [2024-10-01 15:56:15.100354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.823 [2024-10-01 15:56:15.100367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.823 [2024-10-01 15:56:15.100373] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.823 [2024-10-01 15:56:15.100380] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.823 [2024-10-01 15:56:15.100393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.823 qpair failed and we were unable to recover it.
00:38:35.823 [2024-10-01 15:56:15.110362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.823 [2024-10-01 15:56:15.110407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.823 [2024-10-01 15:56:15.110420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.823 [2024-10-01 15:56:15.110427] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.823 [2024-10-01 15:56:15.110433] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.823 [2024-10-01 15:56:15.110447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.823 qpair failed and we were unable to recover it.
00:38:35.823 [2024-10-01 15:56:15.120365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.823 [2024-10-01 15:56:15.120407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.823 [2024-10-01 15:56:15.120421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.823 [2024-10-01 15:56:15.120428] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.823 [2024-10-01 15:56:15.120434] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.823 [2024-10-01 15:56:15.120447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.823 qpair failed and we were unable to recover it.
00:38:35.823 [2024-10-01 15:56:15.130383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.823 [2024-10-01 15:56:15.130431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.823 [2024-10-01 15:56:15.130444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.823 [2024-10-01 15:56:15.130450] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.823 [2024-10-01 15:56:15.130457] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.823 [2024-10-01 15:56:15.130470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.823 qpair failed and we were unable to recover it.
00:38:35.823 [2024-10-01 15:56:15.140304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.823 [2024-10-01 15:56:15.140348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.824 [2024-10-01 15:56:15.140361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.824 [2024-10-01 15:56:15.140368] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.824 [2024-10-01 15:56:15.140375] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.824 [2024-10-01 15:56:15.140388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.824 qpair failed and we were unable to recover it.
00:38:35.824 [2024-10-01 15:56:15.150463] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.824 [2024-10-01 15:56:15.150510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.824 [2024-10-01 15:56:15.150523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.824 [2024-10-01 15:56:15.150530] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.824 [2024-10-01 15:56:15.150536] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.824 [2024-10-01 15:56:15.150550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.824 qpair failed and we were unable to recover it.
00:38:35.824 [2024-10-01 15:56:15.160487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.824 [2024-10-01 15:56:15.160532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.824 [2024-10-01 15:56:15.160546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.824 [2024-10-01 15:56:15.160561] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.824 [2024-10-01 15:56:15.160567] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.824 [2024-10-01 15:56:15.160582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.824 qpair failed and we were unable to recover it.
00:38:35.824 [2024-10-01 15:56:15.170541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.824 [2024-10-01 15:56:15.170611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.824 [2024-10-01 15:56:15.170625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.824 [2024-10-01 15:56:15.170632] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.824 [2024-10-01 15:56:15.170638] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.824 [2024-10-01 15:56:15.170652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.824 qpair failed and we were unable to recover it.
00:38:35.824 [2024-10-01 15:56:15.180526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.824 [2024-10-01 15:56:15.180574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.824 [2024-10-01 15:56:15.180587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.824 [2024-10-01 15:56:15.180594] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.824 [2024-10-01 15:56:15.180600] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.824 [2024-10-01 15:56:15.180614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.824 qpair failed and we were unable to recover it.
00:38:35.824 [2024-10-01 15:56:15.190441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.824 [2024-10-01 15:56:15.190485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.824 [2024-10-01 15:56:15.190498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.824 [2024-10-01 15:56:15.190504] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.824 [2024-10-01 15:56:15.190511] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.824 [2024-10-01 15:56:15.190525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.824 qpair failed and we were unable to recover it.
00:38:35.824 [2024-10-01 15:56:15.200602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.824 [2024-10-01 15:56:15.200647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.824 [2024-10-01 15:56:15.200661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.824 [2024-10-01 15:56:15.200667] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.824 [2024-10-01 15:56:15.200674] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.824 [2024-10-01 15:56:15.200687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.824 qpair failed and we were unable to recover it.
00:38:35.824 [2024-10-01 15:56:15.210628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.824 [2024-10-01 15:56:15.210680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.824 [2024-10-01 15:56:15.210694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.824 [2024-10-01 15:56:15.210700] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.824 [2024-10-01 15:56:15.210707] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.824 [2024-10-01 15:56:15.210720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.824 qpair failed and we were unable to recover it.
00:38:35.824 [2024-10-01 15:56:15.220680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.824 [2024-10-01 15:56:15.220725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.824 [2024-10-01 15:56:15.220738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.824 [2024-10-01 15:56:15.220745] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.824 [2024-10-01 15:56:15.220751] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.824 [2024-10-01 15:56:15.220765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.824 qpair failed and we were unable to recover it.
00:38:35.824 [2024-10-01 15:56:15.230551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.824 [2024-10-01 15:56:15.230594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.824 [2024-10-01 15:56:15.230607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.824 [2024-10-01 15:56:15.230614] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.824 [2024-10-01 15:56:15.230620] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.824 [2024-10-01 15:56:15.230634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.824 qpair failed and we were unable to recover it.
00:38:35.824 [2024-10-01 15:56:15.240718] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.824 [2024-10-01 15:56:15.240766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.824 [2024-10-01 15:56:15.240780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.824 [2024-10-01 15:56:15.240786] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.824 [2024-10-01 15:56:15.240793] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.824 [2024-10-01 15:56:15.240806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.824 qpair failed and we were unable to recover it.
00:38:35.824 [2024-10-01 15:56:15.250749] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.824 [2024-10-01 15:56:15.250794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.824 [2024-10-01 15:56:15.250810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.824 [2024-10-01 15:56:15.250817] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.825 [2024-10-01 15:56:15.250823] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.825 [2024-10-01 15:56:15.250837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.825 qpair failed and we were unable to recover it.
00:38:35.825 [2024-10-01 15:56:15.260761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.825 [2024-10-01 15:56:15.260844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.825 [2024-10-01 15:56:15.260862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.825 [2024-10-01 15:56:15.260869] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.825 [2024-10-01 15:56:15.260875] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.825 [2024-10-01 15:56:15.260891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.825 qpair failed and we were unable to recover it.
00:38:35.825 [2024-10-01 15:56:15.270782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.825 [2024-10-01 15:56:15.270831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.825 [2024-10-01 15:56:15.270845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.825 [2024-10-01 15:56:15.270852] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.825 [2024-10-01 15:56:15.270858] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:35.825 [2024-10-01 15:56:15.270872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:35.825 qpair failed and we were unable to recover it.
00:38:36.088 [2024-10-01 15:56:15.280809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.088 [2024-10-01 15:56:15.280905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.088 [2024-10-01 15:56:15.280920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.088 [2024-10-01 15:56:15.280927] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.088 [2024-10-01 15:56:15.280933] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.088 [2024-10-01 15:56:15.280948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.088 qpair failed and we were unable to recover it. 
00:38:36.088 [2024-10-01 15:56:15.290849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.088 [2024-10-01 15:56:15.290896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.088 [2024-10-01 15:56:15.290910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.088 [2024-10-01 15:56:15.290917] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.088 [2024-10-01 15:56:15.290924] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.088 [2024-10-01 15:56:15.290938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.088 qpair failed and we were unable to recover it. 
00:38:36.088 [2024-10-01 15:56:15.300840] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.088 [2024-10-01 15:56:15.300885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.088 [2024-10-01 15:56:15.300901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.088 [2024-10-01 15:56:15.300908] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.088 [2024-10-01 15:56:15.300915] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.088 [2024-10-01 15:56:15.300929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.088 qpair failed and we were unable to recover it. 
00:38:36.088 [2024-10-01 15:56:15.310891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.088 [2024-10-01 15:56:15.310934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.088 [2024-10-01 15:56:15.310948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.088 [2024-10-01 15:56:15.310954] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.088 [2024-10-01 15:56:15.310960] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.088 [2024-10-01 15:56:15.310974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.088 qpair failed and we were unable to recover it. 
00:38:36.088 [2024-10-01 15:56:15.320872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.088 [2024-10-01 15:56:15.320917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.088 [2024-10-01 15:56:15.320930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.088 [2024-10-01 15:56:15.320937] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.088 [2024-10-01 15:56:15.320943] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.088 [2024-10-01 15:56:15.320957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.088 qpair failed and we were unable to recover it. 
00:38:36.088 [2024-10-01 15:56:15.330961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.088 [2024-10-01 15:56:15.331013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.088 [2024-10-01 15:56:15.331027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.088 [2024-10-01 15:56:15.331033] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.088 [2024-10-01 15:56:15.331040] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.088 [2024-10-01 15:56:15.331054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.088 qpair failed and we were unable to recover it. 
00:38:36.088 [2024-10-01 15:56:15.340962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.088 [2024-10-01 15:56:15.341016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.088 [2024-10-01 15:56:15.341033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.088 [2024-10-01 15:56:15.341040] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.088 [2024-10-01 15:56:15.341046] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.088 [2024-10-01 15:56:15.341060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.088 qpair failed and we were unable to recover it. 
00:38:36.088 [2024-10-01 15:56:15.350956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.088 [2024-10-01 15:56:15.350999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.088 [2024-10-01 15:56:15.351012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.088 [2024-10-01 15:56:15.351019] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.088 [2024-10-01 15:56:15.351025] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.089 [2024-10-01 15:56:15.351039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.089 qpair failed and we were unable to recover it. 
00:38:36.089 [2024-10-01 15:56:15.361028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.089 [2024-10-01 15:56:15.361076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.089 [2024-10-01 15:56:15.361089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.089 [2024-10-01 15:56:15.361096] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.089 [2024-10-01 15:56:15.361102] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.089 [2024-10-01 15:56:15.361116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.089 qpair failed and we were unable to recover it. 
00:38:36.089 [2024-10-01 15:56:15.370936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.089 [2024-10-01 15:56:15.370998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.089 [2024-10-01 15:56:15.371011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.089 [2024-10-01 15:56:15.371017] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.089 [2024-10-01 15:56:15.371024] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.089 [2024-10-01 15:56:15.371038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.089 qpair failed and we were unable to recover it. 
00:38:36.089 [2024-10-01 15:56:15.381080] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.089 [2024-10-01 15:56:15.381126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.089 [2024-10-01 15:56:15.381139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.089 [2024-10-01 15:56:15.381146] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.089 [2024-10-01 15:56:15.381153] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.089 [2024-10-01 15:56:15.381170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.089 qpair failed and we were unable to recover it. 
00:38:36.089 [2024-10-01 15:56:15.391117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.089 [2024-10-01 15:56:15.391162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.089 [2024-10-01 15:56:15.391175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.089 [2024-10-01 15:56:15.391182] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.089 [2024-10-01 15:56:15.391188] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.089 [2024-10-01 15:56:15.391202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.089 qpair failed and we were unable to recover it. 
00:38:36.089 [2024-10-01 15:56:15.401140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.089 [2024-10-01 15:56:15.401212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.089 [2024-10-01 15:56:15.401225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.089 [2024-10-01 15:56:15.401231] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.089 [2024-10-01 15:56:15.401238] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.089 [2024-10-01 15:56:15.401251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.089 qpair failed and we were unable to recover it. 
00:38:36.089 [2024-10-01 15:56:15.411186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.089 [2024-10-01 15:56:15.411235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.089 [2024-10-01 15:56:15.411248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.089 [2024-10-01 15:56:15.411255] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.089 [2024-10-01 15:56:15.411261] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.089 [2024-10-01 15:56:15.411275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.089 qpair failed and we were unable to recover it. 
00:38:36.089 [2024-10-01 15:56:15.421180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.089 [2024-10-01 15:56:15.421225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.089 [2024-10-01 15:56:15.421239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.089 [2024-10-01 15:56:15.421246] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.089 [2024-10-01 15:56:15.421252] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.089 [2024-10-01 15:56:15.421265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.089 qpair failed and we were unable to recover it. 
00:38:36.089 [2024-10-01 15:56:15.431076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.089 [2024-10-01 15:56:15.431123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.089 [2024-10-01 15:56:15.431139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.089 [2024-10-01 15:56:15.431146] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.089 [2024-10-01 15:56:15.431152] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.089 [2024-10-01 15:56:15.431166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.089 qpair failed and we were unable to recover it. 
00:38:36.089 [2024-10-01 15:56:15.441250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.089 [2024-10-01 15:56:15.441294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.089 [2024-10-01 15:56:15.441307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.089 [2024-10-01 15:56:15.441314] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.089 [2024-10-01 15:56:15.441321] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.089 [2024-10-01 15:56:15.441334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.089 qpair failed and we were unable to recover it. 
00:38:36.089 [2024-10-01 15:56:15.451246] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.089 [2024-10-01 15:56:15.451292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.089 [2024-10-01 15:56:15.451305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.089 [2024-10-01 15:56:15.451312] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.089 [2024-10-01 15:56:15.451318] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.089 [2024-10-01 15:56:15.451332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.089 qpair failed and we were unable to recover it. 
00:38:36.089 [2024-10-01 15:56:15.461302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.089 [2024-10-01 15:56:15.461349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.089 [2024-10-01 15:56:15.461362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.089 [2024-10-01 15:56:15.461368] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.089 [2024-10-01 15:56:15.461375] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.089 [2024-10-01 15:56:15.461388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.089 qpair failed and we were unable to recover it. 
00:38:36.089 [2024-10-01 15:56:15.471291] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.089 [2024-10-01 15:56:15.471334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.089 [2024-10-01 15:56:15.471347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.089 [2024-10-01 15:56:15.471353] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.089 [2024-10-01 15:56:15.471363] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.089 [2024-10-01 15:56:15.471377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.089 qpair failed and we were unable to recover it. 
00:38:36.089 [2024-10-01 15:56:15.481236] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.089 [2024-10-01 15:56:15.481287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.089 [2024-10-01 15:56:15.481303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.089 [2024-10-01 15:56:15.481310] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.089 [2024-10-01 15:56:15.481316] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.089 [2024-10-01 15:56:15.481331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.089 qpair failed and we were unable to recover it. 
00:38:36.089 [2024-10-01 15:56:15.491382] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.089 [2024-10-01 15:56:15.491428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.090 [2024-10-01 15:56:15.491443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.090 [2024-10-01 15:56:15.491450] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.090 [2024-10-01 15:56:15.491456] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.090 [2024-10-01 15:56:15.491470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.090 qpair failed and we were unable to recover it. 
00:38:36.090 [2024-10-01 15:56:15.501389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.090 [2024-10-01 15:56:15.501438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.090 [2024-10-01 15:56:15.501451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.090 [2024-10-01 15:56:15.501458] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.090 [2024-10-01 15:56:15.501464] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.090 [2024-10-01 15:56:15.501478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.090 qpair failed and we were unable to recover it. 
00:38:36.090 [2024-10-01 15:56:15.511404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.090 [2024-10-01 15:56:15.511499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.090 [2024-10-01 15:56:15.511516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.090 [2024-10-01 15:56:15.511524] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.090 [2024-10-01 15:56:15.511531] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.090 [2024-10-01 15:56:15.511547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.090 qpair failed and we were unable to recover it. 
00:38:36.090 [2024-10-01 15:56:15.521329] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.090 [2024-10-01 15:56:15.521386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.090 [2024-10-01 15:56:15.521400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.090 [2024-10-01 15:56:15.521407] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.090 [2024-10-01 15:56:15.521413] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.090 [2024-10-01 15:56:15.521427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.090 qpair failed and we were unable to recover it. 
00:38:36.090 [2024-10-01 15:56:15.531490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.090 [2024-10-01 15:56:15.531539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.090 [2024-10-01 15:56:15.531552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.090 [2024-10-01 15:56:15.531558] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.090 [2024-10-01 15:56:15.531565] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.090 [2024-10-01 15:56:15.531578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.090 qpair failed and we were unable to recover it. 
00:38:36.353 [2024-10-01 15:56:15.541508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.353 [2024-10-01 15:56:15.541600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.353 [2024-10-01 15:56:15.541613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.353 [2024-10-01 15:56:15.541620] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.353 [2024-10-01 15:56:15.541627] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.353 [2024-10-01 15:56:15.541640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.353 qpair failed and we were unable to recover it. 
00:38:36.354 [2024-10-01 15:56:15.551550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.354 [2024-10-01 15:56:15.551593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.354 [2024-10-01 15:56:15.551607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.354 [2024-10-01 15:56:15.551614] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.354 [2024-10-01 15:56:15.551620] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.354 [2024-10-01 15:56:15.551634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.354 qpair failed and we were unable to recover it. 
00:38:36.354 [2024-10-01 15:56:15.561569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.354 [2024-10-01 15:56:15.561653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.354 [2024-10-01 15:56:15.561666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.354 [2024-10-01 15:56:15.561673] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.354 [2024-10-01 15:56:15.561683] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.354 [2024-10-01 15:56:15.561697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.354 qpair failed and we were unable to recover it. 
00:38:36.354 [2024-10-01 15:56:15.571568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.354 [2024-10-01 15:56:15.571624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.354 [2024-10-01 15:56:15.571637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.354 [2024-10-01 15:56:15.571644] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.354 [2024-10-01 15:56:15.571651] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.354 [2024-10-01 15:56:15.571664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.354 qpair failed and we were unable to recover it. 
00:38:36.354 [2024-10-01 15:56:15.581486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.354 [2024-10-01 15:56:15.581530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.354 [2024-10-01 15:56:15.581544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.354 [2024-10-01 15:56:15.581550] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.354 [2024-10-01 15:56:15.581557] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.354 [2024-10-01 15:56:15.581571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.354 qpair failed and we were unable to recover it. 
00:38:36.354 [2024-10-01 15:56:15.591544] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.354 [2024-10-01 15:56:15.591593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.354 [2024-10-01 15:56:15.591606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.354 [2024-10-01 15:56:15.591613] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.354 [2024-10-01 15:56:15.591619] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.354 [2024-10-01 15:56:15.591633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.354 qpair failed and we were unable to recover it. 
00:38:36.354 [2024-10-01 15:56:15.601550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.354 [2024-10-01 15:56:15.601598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.354 [2024-10-01 15:56:15.601612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.354 [2024-10-01 15:56:15.601619] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.354 [2024-10-01 15:56:15.601625] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.354 [2024-10-01 15:56:15.601638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.354 qpair failed and we were unable to recover it. 
00:38:36.354 [2024-10-01 15:56:15.611676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.354 [2024-10-01 15:56:15.611726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.354 [2024-10-01 15:56:15.611739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.354 [2024-10-01 15:56:15.611746] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.354 [2024-10-01 15:56:15.611753] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.354 [2024-10-01 15:56:15.611767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.354 qpair failed and we were unable to recover it. 
00:38:36.354 [2024-10-01 15:56:15.621707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.354 [2024-10-01 15:56:15.621749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.354 [2024-10-01 15:56:15.621762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.354 [2024-10-01 15:56:15.621769] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.354 [2024-10-01 15:56:15.621775] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.354 [2024-10-01 15:56:15.621789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.354 qpair failed and we were unable to recover it. 
00:38:36.354 [2024-10-01 15:56:15.631747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.354 [2024-10-01 15:56:15.631799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.354 [2024-10-01 15:56:15.631812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.354 [2024-10-01 15:56:15.631819] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.354 [2024-10-01 15:56:15.631825] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.354 [2024-10-01 15:56:15.631839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.354 qpair failed and we were unable to recover it. 
00:38:36.354 [2024-10-01 15:56:15.641795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.354 [2024-10-01 15:56:15.641841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.354 [2024-10-01 15:56:15.641855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.354 [2024-10-01 15:56:15.641862] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.354 [2024-10-01 15:56:15.641868] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.354 [2024-10-01 15:56:15.641881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.354 qpair failed and we were unable to recover it. 
00:38:36.354 [2024-10-01 15:56:15.651820] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.354 [2024-10-01 15:56:15.651864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.354 [2024-10-01 15:56:15.651878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.354 [2024-10-01 15:56:15.651889] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.354 [2024-10-01 15:56:15.651898] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.354 [2024-10-01 15:56:15.651912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.354 qpair failed and we were unable to recover it. 
00:38:36.354 [2024-10-01 15:56:15.661835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.354 [2024-10-01 15:56:15.661880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.354 [2024-10-01 15:56:15.661897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.355 [2024-10-01 15:56:15.661904] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.355 [2024-10-01 15:56:15.661910] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.355 [2024-10-01 15:56:15.661925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.355 qpair failed and we were unable to recover it. 
00:38:36.355 [2024-10-01 15:56:15.671871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.355 [2024-10-01 15:56:15.671961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.355 [2024-10-01 15:56:15.671975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.355 [2024-10-01 15:56:15.671982] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.355 [2024-10-01 15:56:15.671988] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.355 [2024-10-01 15:56:15.672003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.355 qpair failed and we were unable to recover it. 
00:38:36.355 [2024-10-01 15:56:15.681933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.355 [2024-10-01 15:56:15.682032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.355 [2024-10-01 15:56:15.682058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.355 [2024-10-01 15:56:15.682068] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.355 [2024-10-01 15:56:15.682074] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.355 [2024-10-01 15:56:15.682098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.355 qpair failed and we were unable to recover it. 
00:38:36.355 [2024-10-01 15:56:15.691845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.355 [2024-10-01 15:56:15.691896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.355 [2024-10-01 15:56:15.691910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.355 [2024-10-01 15:56:15.691917] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.355 [2024-10-01 15:56:15.691923] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.355 [2024-10-01 15:56:15.691938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.355 qpair failed and we were unable to recover it. 
00:38:36.355 [2024-10-01 15:56:15.701813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.355 [2024-10-01 15:56:15.701874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.355 [2024-10-01 15:56:15.701887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.355 [2024-10-01 15:56:15.701897] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.355 [2024-10-01 15:56:15.701904] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.355 [2024-10-01 15:56:15.701918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.355 qpair failed and we were unable to recover it. 
00:38:36.355 [2024-10-01 15:56:15.711966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.355 [2024-10-01 15:56:15.712015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.355 [2024-10-01 15:56:15.712029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.355 [2024-10-01 15:56:15.712035] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.355 [2024-10-01 15:56:15.712041] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.355 [2024-10-01 15:56:15.712055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.355 qpair failed and we were unable to recover it. 
00:38:36.355 [2024-10-01 15:56:15.722001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.355 [2024-10-01 15:56:15.722045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.355 [2024-10-01 15:56:15.722059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.355 [2024-10-01 15:56:15.722065] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.355 [2024-10-01 15:56:15.722072] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.355 [2024-10-01 15:56:15.722086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.355 qpair failed and we were unable to recover it. 
00:38:36.355 [2024-10-01 15:56:15.732044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.355 [2024-10-01 15:56:15.732092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.355 [2024-10-01 15:56:15.732105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.355 [2024-10-01 15:56:15.732112] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.355 [2024-10-01 15:56:15.732118] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.355 [2024-10-01 15:56:15.732132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.355 qpair failed and we were unable to recover it. 
00:38:36.355 [2024-10-01 15:56:15.742036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.355 [2024-10-01 15:56:15.742078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.355 [2024-10-01 15:56:15.742092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.355 [2024-10-01 15:56:15.742102] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.355 [2024-10-01 15:56:15.742108] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.355 [2024-10-01 15:56:15.742122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.355 qpair failed and we were unable to recover it. 
00:38:36.355 [2024-10-01 15:56:15.752052] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.355 [2024-10-01 15:56:15.752091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.355 [2024-10-01 15:56:15.752105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.355 [2024-10-01 15:56:15.752111] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.355 [2024-10-01 15:56:15.752117] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.355 [2024-10-01 15:56:15.752131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.355 qpair failed and we were unable to recover it. 
00:38:36.355 [2024-10-01 15:56:15.762115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.355 [2024-10-01 15:56:15.762184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.355 [2024-10-01 15:56:15.762201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.355 [2024-10-01 15:56:15.762208] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.355 [2024-10-01 15:56:15.762215] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.355 [2024-10-01 15:56:15.762230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.355 qpair failed and we were unable to recover it. 
00:38:36.355 [2024-10-01 15:56:15.772161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.355 [2024-10-01 15:56:15.772207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.355 [2024-10-01 15:56:15.772221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.355 [2024-10-01 15:56:15.772228] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.355 [2024-10-01 15:56:15.772234] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.355 [2024-10-01 15:56:15.772248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.355 qpair failed and we were unable to recover it. 
00:38:36.355 [2024-10-01 15:56:15.782188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.355 [2024-10-01 15:56:15.782237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.355 [2024-10-01 15:56:15.782251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.355 [2024-10-01 15:56:15.782258] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.355 [2024-10-01 15:56:15.782264] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.355 [2024-10-01 15:56:15.782278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.355 qpair failed and we were unable to recover it. 
00:38:36.355 [2024-10-01 15:56:15.792195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.355 [2024-10-01 15:56:15.792239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.355 [2024-10-01 15:56:15.792253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.355 [2024-10-01 15:56:15.792259] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.355 [2024-10-01 15:56:15.792266] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.356 [2024-10-01 15:56:15.792280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.356 qpair failed and we were unable to recover it. 
00:38:36.356 [2024-10-01 15:56:15.802232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.356 [2024-10-01 15:56:15.802275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.356 [2024-10-01 15:56:15.802289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.356 [2024-10-01 15:56:15.802296] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.356 [2024-10-01 15:56:15.802302] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.356 [2024-10-01 15:56:15.802316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.356 qpair failed and we were unable to recover it. 
00:38:36.618 [2024-10-01 15:56:15.812250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.618 [2024-10-01 15:56:15.812300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.618 [2024-10-01 15:56:15.812313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.618 [2024-10-01 15:56:15.812319] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.618 [2024-10-01 15:56:15.812326] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.618 [2024-10-01 15:56:15.812340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.618 qpair failed and we were unable to recover it. 
00:38:36.618 [2024-10-01 15:56:15.822262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.618 [2024-10-01 15:56:15.822318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.618 [2024-10-01 15:56:15.822331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.618 [2024-10-01 15:56:15.822338] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.618 [2024-10-01 15:56:15.822344] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.618 [2024-10-01 15:56:15.822358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.618 qpair failed and we were unable to recover it. 
00:38:36.618 [2024-10-01 15:56:15.832157] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.618 [2024-10-01 15:56:15.832201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.618 [2024-10-01 15:56:15.832218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.618 [2024-10-01 15:56:15.832224] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.618 [2024-10-01 15:56:15.832231] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.618 [2024-10-01 15:56:15.832245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.618 qpair failed and we were unable to recover it. 
00:38:36.618 [2024-10-01 15:56:15.842192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.618 [2024-10-01 15:56:15.842238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.618 [2024-10-01 15:56:15.842252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.618 [2024-10-01 15:56:15.842258] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.618 [2024-10-01 15:56:15.842265] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.618 [2024-10-01 15:56:15.842278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.618 qpair failed and we were unable to recover it. 
00:38:36.618 [2024-10-01 15:56:15.852352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.618 [2024-10-01 15:56:15.852397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.618 [2024-10-01 15:56:15.852409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.618 [2024-10-01 15:56:15.852416] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.618 [2024-10-01 15:56:15.852422] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.618 [2024-10-01 15:56:15.852436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.618 qpair failed and we were unable to recover it. 
00:38:36.618 [2024-10-01 15:56:15.862368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.618 [2024-10-01 15:56:15.862410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.618 [2024-10-01 15:56:15.862424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.618 [2024-10-01 15:56:15.862431] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.618 [2024-10-01 15:56:15.862437] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.618 [2024-10-01 15:56:15.862450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.618 qpair failed and we were unable to recover it. 
00:38:36.618 [2024-10-01 15:56:15.872395] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.618 [2024-10-01 15:56:15.872434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.618 [2024-10-01 15:56:15.872448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.618 [2024-10-01 15:56:15.872455] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.618 [2024-10-01 15:56:15.872461] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.618 [2024-10-01 15:56:15.872482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.618 qpair failed and we were unable to recover it. 
00:38:36.618 [2024-10-01 15:56:15.882388] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.618 [2024-10-01 15:56:15.882436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.618 [2024-10-01 15:56:15.882450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.618 [2024-10-01 15:56:15.882456] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.618 [2024-10-01 15:56:15.882463] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.618 [2024-10-01 15:56:15.882476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.618 qpair failed and we were unable to recover it. 
00:38:36.618 [2024-10-01 15:56:15.892467] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.618 [2024-10-01 15:56:15.892515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.618 [2024-10-01 15:56:15.892528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.618 [2024-10-01 15:56:15.892535] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.618 [2024-10-01 15:56:15.892541] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.618 [2024-10-01 15:56:15.892556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.618 qpair failed and we were unable to recover it. 
00:38:36.618 [2024-10-01 15:56:15.902343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.618 [2024-10-01 15:56:15.902390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.618 [2024-10-01 15:56:15.902403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.618 [2024-10-01 15:56:15.902410] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.618 [2024-10-01 15:56:15.902416] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.618 [2024-10-01 15:56:15.902430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.618 qpair failed and we were unable to recover it. 
00:38:36.619 [2024-10-01 15:56:15.912494] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.619 [2024-10-01 15:56:15.912553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.619 [2024-10-01 15:56:15.912567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.619 [2024-10-01 15:56:15.912574] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.619 [2024-10-01 15:56:15.912580] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.619 [2024-10-01 15:56:15.912596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.619 qpair failed and we were unable to recover it. 
00:38:36.619 [2024-10-01 15:56:15.922422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.619 [2024-10-01 15:56:15.922468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.619 [2024-10-01 15:56:15.922486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.619 [2024-10-01 15:56:15.922494] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.619 [2024-10-01 15:56:15.922500] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.619 [2024-10-01 15:56:15.922515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.619 qpair failed and we were unable to recover it. 
00:38:36.619 [2024-10-01 15:56:15.932560] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.619 [2024-10-01 15:56:15.932642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.619 [2024-10-01 15:56:15.932656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.619 [2024-10-01 15:56:15.932663] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.619 [2024-10-01 15:56:15.932669] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.619 [2024-10-01 15:56:15.932684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.619 qpair failed and we were unable to recover it. 
00:38:36.619 [2024-10-01 15:56:15.942545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.619 [2024-10-01 15:56:15.942590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.619 [2024-10-01 15:56:15.942604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.619 [2024-10-01 15:56:15.942611] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.619 [2024-10-01 15:56:15.942617] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.619 [2024-10-01 15:56:15.942631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.619 qpair failed and we were unable to recover it. 
00:38:36.619 [2024-10-01 15:56:15.952613] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.619 [2024-10-01 15:56:15.952691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.619 [2024-10-01 15:56:15.952704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.619 [2024-10-01 15:56:15.952712] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.619 [2024-10-01 15:56:15.952718] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.619 [2024-10-01 15:56:15.952732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.619 qpair failed and we were unable to recover it. 
00:38:36.619 [2024-10-01 15:56:15.962508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.619 [2024-10-01 15:56:15.962553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.619 [2024-10-01 15:56:15.962566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.619 [2024-10-01 15:56:15.962573] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.619 [2024-10-01 15:56:15.962579] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.619 [2024-10-01 15:56:15.962596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.619 qpair failed and we were unable to recover it. 
00:38:36.619 [2024-10-01 15:56:15.972711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.619 [2024-10-01 15:56:15.972758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.619 [2024-10-01 15:56:15.972776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.619 [2024-10-01 15:56:15.972783] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.619 [2024-10-01 15:56:15.972789] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.619 [2024-10-01 15:56:15.972804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.619 qpair failed and we were unable to recover it. 
00:38:36.619 [2024-10-01 15:56:15.982551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.619 [2024-10-01 15:56:15.982598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.619 [2024-10-01 15:56:15.982612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.619 [2024-10-01 15:56:15.982619] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.619 [2024-10-01 15:56:15.982626] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.619 [2024-10-01 15:56:15.982639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.619 qpair failed and we were unable to recover it. 
00:38:36.619 [2024-10-01 15:56:15.992723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.619 [2024-10-01 15:56:15.992769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.619 [2024-10-01 15:56:15.992782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.619 [2024-10-01 15:56:15.992789] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.619 [2024-10-01 15:56:15.992795] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.619 [2024-10-01 15:56:15.992809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.619 qpair failed and we were unable to recover it. 
00:38:36.619 [2024-10-01 15:56:16.002737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.619 [2024-10-01 15:56:16.002783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.619 [2024-10-01 15:56:16.002797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.619 [2024-10-01 15:56:16.002804] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.619 [2024-10-01 15:56:16.002810] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.619 [2024-10-01 15:56:16.002824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.619 qpair failed and we were unable to recover it. 
00:38:36.619 [2024-10-01 15:56:16.012782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.619 [2024-10-01 15:56:16.012847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.619 [2024-10-01 15:56:16.012861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.619 [2024-10-01 15:56:16.012868] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.619 [2024-10-01 15:56:16.012874] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.619 [2024-10-01 15:56:16.012888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.619 qpair failed and we were unable to recover it. 
00:38:36.619 [2024-10-01 15:56:16.022813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.619 [2024-10-01 15:56:16.022869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.619 [2024-10-01 15:56:16.022883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.619 [2024-10-01 15:56:16.022889] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.619 [2024-10-01 15:56:16.022900] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.619 [2024-10-01 15:56:16.022914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.619 qpair failed and we were unable to recover it. 
00:38:36.619 [2024-10-01 15:56:16.032835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.619 [2024-10-01 15:56:16.032878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.619 [2024-10-01 15:56:16.032891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.619 [2024-10-01 15:56:16.032903] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.619 [2024-10-01 15:56:16.032909] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.619 [2024-10-01 15:56:16.032923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.619 qpair failed and we were unable to recover it. 
00:38:36.619 [2024-10-01 15:56:16.042865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.619 [2024-10-01 15:56:16.042942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.619 [2024-10-01 15:56:16.042956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.619 [2024-10-01 15:56:16.042963] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.620 [2024-10-01 15:56:16.042969] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.620 [2024-10-01 15:56:16.042983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.620 qpair failed and we were unable to recover it. 
00:38:36.620 [2024-10-01 15:56:16.052762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.620 [2024-10-01 15:56:16.052810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.620 [2024-10-01 15:56:16.052824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.620 [2024-10-01 15:56:16.052830] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.620 [2024-10-01 15:56:16.052840] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.620 [2024-10-01 15:56:16.052854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.620 qpair failed and we were unable to recover it. 
00:38:36.620 [2024-10-01 15:56:16.062912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.620 [2024-10-01 15:56:16.062957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.620 [2024-10-01 15:56:16.062971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.620 [2024-10-01 15:56:16.062977] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.620 [2024-10-01 15:56:16.062984] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.620 [2024-10-01 15:56:16.062998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.620 qpair failed and we were unable to recover it. 
00:38:36.882 [2024-10-01 15:56:16.072944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.882 [2024-10-01 15:56:16.072988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.882 [2024-10-01 15:56:16.073001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.882 [2024-10-01 15:56:16.073009] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.882 [2024-10-01 15:56:16.073015] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.882 [2024-10-01 15:56:16.073029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.882 qpair failed and we were unable to recover it. 
00:38:36.882 [2024-10-01 15:56:16.082959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.882 [2024-10-01 15:56:16.083003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.882 [2024-10-01 15:56:16.083017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.882 [2024-10-01 15:56:16.083024] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.882 [2024-10-01 15:56:16.083030] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.882 [2024-10-01 15:56:16.083044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.882 qpair failed and we were unable to recover it. 
00:38:36.882 [2024-10-01 15:56:16.092869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.882 [2024-10-01 15:56:16.092922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.882 [2024-10-01 15:56:16.092937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.882 [2024-10-01 15:56:16.092943] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.882 [2024-10-01 15:56:16.092950] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.882 [2024-10-01 15:56:16.092967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.882 qpair failed and we were unable to recover it. 
00:38:36.882 [2024-10-01 15:56:16.103035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.882 [2024-10-01 15:56:16.103130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.883 [2024-10-01 15:56:16.103144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.883 [2024-10-01 15:56:16.103151] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.883 [2024-10-01 15:56:16.103157] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.883 [2024-10-01 15:56:16.103171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.883 qpair failed and we were unable to recover it. 
00:38:36.883 [2024-10-01 15:56:16.113057] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.883 [2024-10-01 15:56:16.113101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.883 [2024-10-01 15:56:16.113115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.883 [2024-10-01 15:56:16.113121] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.883 [2024-10-01 15:56:16.113128] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.883 [2024-10-01 15:56:16.113142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.883 qpair failed and we were unable to recover it. 
00:38:36.883 [2024-10-01 15:56:16.123060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.883 [2024-10-01 15:56:16.123104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.883 [2024-10-01 15:56:16.123118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.883 [2024-10-01 15:56:16.123125] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.883 [2024-10-01 15:56:16.123131] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.883 [2024-10-01 15:56:16.123145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.883 qpair failed and we were unable to recover it. 
00:38:36.883 [2024-10-01 15:56:16.133020] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.883 [2024-10-01 15:56:16.133080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.883 [2024-10-01 15:56:16.133093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.883 [2024-10-01 15:56:16.133100] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.883 [2024-10-01 15:56:16.133106] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.883 [2024-10-01 15:56:16.133120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.883 qpair failed and we were unable to recover it. 
00:38:36.883 [2024-10-01 15:56:16.143110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.883 [2024-10-01 15:56:16.143155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.883 [2024-10-01 15:56:16.143168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.883 [2024-10-01 15:56:16.143178] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.883 [2024-10-01 15:56:16.143184] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.883 [2024-10-01 15:56:16.143198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.883 qpair failed and we were unable to recover it. 
00:38:36.883 [2024-10-01 15:56:16.153123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.883 [2024-10-01 15:56:16.153178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.883 [2024-10-01 15:56:16.153191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.883 [2024-10-01 15:56:16.153198] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.883 [2024-10-01 15:56:16.153204] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.883 [2024-10-01 15:56:16.153218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.883 qpair failed and we were unable to recover it. 
00:38:36.883 [2024-10-01 15:56:16.163183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.883 [2024-10-01 15:56:16.163229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.883 [2024-10-01 15:56:16.163242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.883 [2024-10-01 15:56:16.163249] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.883 [2024-10-01 15:56:16.163255] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.883 [2024-10-01 15:56:16.163268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.883 qpair failed and we were unable to recover it. 
00:38:36.883 [2024-10-01 15:56:16.173221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.883 [2024-10-01 15:56:16.173269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.883 [2024-10-01 15:56:16.173282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.883 [2024-10-01 15:56:16.173288] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.883 [2024-10-01 15:56:16.173295] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.883 [2024-10-01 15:56:16.173308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.883 qpair failed and we were unable to recover it. 
00:38:36.883 [2024-10-01 15:56:16.183223] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.883 [2024-10-01 15:56:16.183290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.883 [2024-10-01 15:56:16.183303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.883 [2024-10-01 15:56:16.183309] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.883 [2024-10-01 15:56:16.183315] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.883 [2024-10-01 15:56:16.183329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.883 qpair failed and we were unable to recover it. 
00:38:36.883 [2024-10-01 15:56:16.193269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.883 [2024-10-01 15:56:16.193313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.883 [2024-10-01 15:56:16.193327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.883 [2024-10-01 15:56:16.193334] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.883 [2024-10-01 15:56:16.193340] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:36.883 [2024-10-01 15:56:16.193354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.883 qpair failed and we were unable to recover it. 
00:38:36.883 [2024-10-01 15:56:16.203289] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.883 [2024-10-01 15:56:16.203332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.883 [2024-10-01 15:56:16.203346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.883 [2024-10-01 15:56:16.203352] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.883 [2024-10-01 15:56:16.203358] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:36.883 [2024-10-01 15:56:16.203372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:36.883 qpair failed and we were unable to recover it.
00:38:36.883 [2024-10-01 15:56:16.213336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.883 [2024-10-01 15:56:16.213386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.883 [2024-10-01 15:56:16.213399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.883 [2024-10-01 15:56:16.213406] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.883 [2024-10-01 15:56:16.213412] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:36.883 [2024-10-01 15:56:16.213426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:36.883 qpair failed and we were unable to recover it.
00:38:36.883 [2024-10-01 15:56:16.223341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.883 [2024-10-01 15:56:16.223386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.883 [2024-10-01 15:56:16.223399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.883 [2024-10-01 15:56:16.223406] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.883 [2024-10-01 15:56:16.223412] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:36.883 [2024-10-01 15:56:16.223426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:36.883 qpair failed and we were unable to recover it.
00:38:36.883 [2024-10-01 15:56:16.233375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.883 [2024-10-01 15:56:16.233421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.883 [2024-10-01 15:56:16.233434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.883 [2024-10-01 15:56:16.233444] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.883 [2024-10-01 15:56:16.233451] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:36.883 [2024-10-01 15:56:16.233465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:36.883 qpair failed and we were unable to recover it.
00:38:36.884 [2024-10-01 15:56:16.243391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.884 [2024-10-01 15:56:16.243442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.884 [2024-10-01 15:56:16.243455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.884 [2024-10-01 15:56:16.243462] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.884 [2024-10-01 15:56:16.243468] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:36.884 [2024-10-01 15:56:16.243481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:36.884 qpair failed and we were unable to recover it.
00:38:36.884 [2024-10-01 15:56:16.253434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.884 [2024-10-01 15:56:16.253482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.884 [2024-10-01 15:56:16.253496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.884 [2024-10-01 15:56:16.253502] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.884 [2024-10-01 15:56:16.253509] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:36.884 [2024-10-01 15:56:16.253522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:36.884 qpair failed and we were unable to recover it.
00:38:36.884 [2024-10-01 15:56:16.263461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.884 [2024-10-01 15:56:16.263508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.884 [2024-10-01 15:56:16.263521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.884 [2024-10-01 15:56:16.263528] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.884 [2024-10-01 15:56:16.263534] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:36.884 [2024-10-01 15:56:16.263548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:36.884 qpair failed and we were unable to recover it.
00:38:36.884 [2024-10-01 15:56:16.273346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.884 [2024-10-01 15:56:16.273388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.884 [2024-10-01 15:56:16.273403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.884 [2024-10-01 15:56:16.273409] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.884 [2024-10-01 15:56:16.273416] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:36.884 [2024-10-01 15:56:16.273430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:36.884 qpair failed and we were unable to recover it.
00:38:36.884 [2024-10-01 15:56:16.283509] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.884 [2024-10-01 15:56:16.283552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.884 [2024-10-01 15:56:16.283566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.884 [2024-10-01 15:56:16.283573] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.884 [2024-10-01 15:56:16.283579] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:36.884 [2024-10-01 15:56:16.283593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:36.884 qpair failed and we were unable to recover it.
00:38:36.884 [2024-10-01 15:56:16.293516] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.884 [2024-10-01 15:56:16.293561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.884 [2024-10-01 15:56:16.293575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.884 [2024-10-01 15:56:16.293582] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.884 [2024-10-01 15:56:16.293588] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:36.884 [2024-10-01 15:56:16.293602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:36.884 qpair failed and we were unable to recover it.
00:38:36.884 [2024-10-01 15:56:16.303416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.884 [2024-10-01 15:56:16.303461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.884 [2024-10-01 15:56:16.303475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.884 [2024-10-01 15:56:16.303481] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.884 [2024-10-01 15:56:16.303488] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:36.884 [2024-10-01 15:56:16.303501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:36.884 qpair failed and we were unable to recover it.
00:38:36.884 [2024-10-01 15:56:16.313555] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.884 [2024-10-01 15:56:16.313597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.884 [2024-10-01 15:56:16.313610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.884 [2024-10-01 15:56:16.313617] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.884 [2024-10-01 15:56:16.313623] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:36.884 [2024-10-01 15:56:16.313637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:36.884 qpair failed and we were unable to recover it.
00:38:36.884 [2024-10-01 15:56:16.323580] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.884 [2024-10-01 15:56:16.323630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.884 [2024-10-01 15:56:16.323650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.884 [2024-10-01 15:56:16.323657] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.884 [2024-10-01 15:56:16.323663] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:36.884 [2024-10-01 15:56:16.323678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:36.884 qpair failed and we were unable to recover it.
00:38:36.884 [2024-10-01 15:56:16.333640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.884 [2024-10-01 15:56:16.333692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.884 [2024-10-01 15:56:16.333716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.884 [2024-10-01 15:56:16.333725] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.884 [2024-10-01 15:56:16.333731] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:36.884 [2024-10-01 15:56:16.333751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:36.884 qpair failed and we were unable to recover it.
00:38:37.148 [2024-10-01 15:56:16.343649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.148 [2024-10-01 15:56:16.343700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.148 [2024-10-01 15:56:16.343725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.148 [2024-10-01 15:56:16.343733] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.148 [2024-10-01 15:56:16.343740] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:37.148 [2024-10-01 15:56:16.343760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:37.148 qpair failed and we were unable to recover it.
00:38:37.148 [2024-10-01 15:56:16.353562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.148 [2024-10-01 15:56:16.353611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.148 [2024-10-01 15:56:16.353627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.148 [2024-10-01 15:56:16.353634] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.148 [2024-10-01 15:56:16.353641] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:37.148 [2024-10-01 15:56:16.353656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:37.148 qpair failed and we were unable to recover it.
00:38:37.148 [2024-10-01 15:56:16.363611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.148 [2024-10-01 15:56:16.363658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.148 [2024-10-01 15:56:16.363672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.148 [2024-10-01 15:56:16.363679] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.148 [2024-10-01 15:56:16.363685] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:37.148 [2024-10-01 15:56:16.363705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:37.148 qpair failed and we were unable to recover it.
00:38:37.148 [2024-10-01 15:56:16.373771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.148 [2024-10-01 15:56:16.373817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.148 [2024-10-01 15:56:16.373831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.148 [2024-10-01 15:56:16.373838] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.148 [2024-10-01 15:56:16.373844] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:37.148 [2024-10-01 15:56:16.373858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:37.148 qpair failed and we were unable to recover it.
00:38:37.148 [2024-10-01 15:56:16.383769] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.148 [2024-10-01 15:56:16.383818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.148 [2024-10-01 15:56:16.383832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.148 [2024-10-01 15:56:16.383839] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.148 [2024-10-01 15:56:16.383845] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:37.148 [2024-10-01 15:56:16.383859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:37.148 qpair failed and we were unable to recover it.
00:38:37.148 [2024-10-01 15:56:16.393772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.148 [2024-10-01 15:56:16.393817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.148 [2024-10-01 15:56:16.393830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.148 [2024-10-01 15:56:16.393837] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.148 [2024-10-01 15:56:16.393843] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:37.148 [2024-10-01 15:56:16.393857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:37.148 qpair failed and we were unable to recover it.
00:38:37.148 [2024-10-01 15:56:16.403831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.148 [2024-10-01 15:56:16.403877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.148 [2024-10-01 15:56:16.403891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.148 [2024-10-01 15:56:16.403903] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.148 [2024-10-01 15:56:16.403909] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:37.148 [2024-10-01 15:56:16.403923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:37.148 qpair failed and we were unable to recover it.
00:38:37.148 [2024-10-01 15:56:16.413828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.148 [2024-10-01 15:56:16.413871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.148 [2024-10-01 15:56:16.413888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.148 [2024-10-01 15:56:16.413899] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.148 [2024-10-01 15:56:16.413905] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:37.148 [2024-10-01 15:56:16.413919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:37.148 qpair failed and we were unable to recover it.
00:38:37.148 [2024-10-01 15:56:16.423920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.148 [2024-10-01 15:56:16.423973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.148 [2024-10-01 15:56:16.423987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.148 [2024-10-01 15:56:16.423994] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.148 [2024-10-01 15:56:16.424000] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:37.148 [2024-10-01 15:56:16.424014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:37.148 qpair failed and we were unable to recover it.
00:38:37.148 [2024-10-01 15:56:16.433906] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.149 [2024-10-01 15:56:16.433948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.149 [2024-10-01 15:56:16.433961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.149 [2024-10-01 15:56:16.433968] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.149 [2024-10-01 15:56:16.433974] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:37.149 [2024-10-01 15:56:16.433988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:37.149 qpair failed and we were unable to recover it.
00:38:37.149 [2024-10-01 15:56:16.443929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.149 [2024-10-01 15:56:16.443975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.149 [2024-10-01 15:56:16.443988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.149 [2024-10-01 15:56:16.443995] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.149 [2024-10-01 15:56:16.444001] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:37.149 [2024-10-01 15:56:16.444015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:37.149 qpair failed and we were unable to recover it.
00:38:37.149 [2024-10-01 15:56:16.453970] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.149 [2024-10-01 15:56:16.454017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.149 [2024-10-01 15:56:16.454031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.149 [2024-10-01 15:56:16.454038] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.149 [2024-10-01 15:56:16.454044] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:37.149 [2024-10-01 15:56:16.454065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:37.149 qpair failed and we were unable to recover it.
00:38:37.149 [2024-10-01 15:56:16.463855] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.149 [2024-10-01 15:56:16.463899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.149 [2024-10-01 15:56:16.463913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.149 [2024-10-01 15:56:16.463920] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.149 [2024-10-01 15:56:16.463926] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:37.149 [2024-10-01 15:56:16.463940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:37.149 qpair failed and we were unable to recover it.
00:38:37.149 [2024-10-01 15:56:16.474030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.149 [2024-10-01 15:56:16.474077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.149 [2024-10-01 15:56:16.474090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.149 [2024-10-01 15:56:16.474097] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.149 [2024-10-01 15:56:16.474103] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:37.149 [2024-10-01 15:56:16.474119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:37.149 qpair failed and we were unable to recover it.
00:38:37.149 [2024-10-01 15:56:16.484052] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.149 [2024-10-01 15:56:16.484097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.149 [2024-10-01 15:56:16.484112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.149 [2024-10-01 15:56:16.484119] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.149 [2024-10-01 15:56:16.484125] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:37.149 [2024-10-01 15:56:16.484139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:37.149 qpair failed and we were unable to recover it.
00:38:37.149 [2024-10-01 15:56:16.494074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.149 [2024-10-01 15:56:16.494116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.149 [2024-10-01 15:56:16.494129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.149 [2024-10-01 15:56:16.494136] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.149 [2024-10-01 15:56:16.494143] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:37.149 [2024-10-01 15:56:16.494156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:37.149 qpair failed and we were unable to recover it.
00:38:37.149 [2024-10-01 15:56:16.504092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.149 [2024-10-01 15:56:16.504137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.149 [2024-10-01 15:56:16.504154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.149 [2024-10-01 15:56:16.504160] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.149 [2024-10-01 15:56:16.504167] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:37.149 [2024-10-01 15:56:16.504181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:37.149 qpair failed and we were unable to recover it.
00:38:37.149 [2024-10-01 15:56:16.514129] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.149 [2024-10-01 15:56:16.514168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.149 [2024-10-01 15:56:16.514182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.149 [2024-10-01 15:56:16.514188] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.149 [2024-10-01 15:56:16.514195] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:37.149 [2024-10-01 15:56:16.514208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:37.149 qpair failed and we were unable to recover it.
00:38:37.149 [2024-10-01 15:56:16.524141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.149 [2024-10-01 15:56:16.524188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.149 [2024-10-01 15:56:16.524201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.149 [2024-10-01 15:56:16.524208] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.149 [2024-10-01 15:56:16.524214] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:37.149 [2024-10-01 15:56:16.524227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:37.149 qpair failed and we were unable to recover it.
00:38:37.149 [2024-10-01 15:56:16.534186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.149 [2024-10-01 15:56:16.534235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.149 [2024-10-01 15:56:16.534248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.149 [2024-10-01 15:56:16.534255] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.149 [2024-10-01 15:56:16.534261] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:37.149 [2024-10-01 15:56:16.534274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:37.149 qpair failed and we were unable to recover it.
00:38:37.149 [2024-10-01 15:56:16.544214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.149 [2024-10-01 15:56:16.544256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.149 [2024-10-01 15:56:16.544269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.149 [2024-10-01 15:56:16.544276] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.149 [2024-10-01 15:56:16.544286] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:37.149 [2024-10-01 15:56:16.544300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:37.149 qpair failed and we were unable to recover it.
00:38:37.149 [2024-10-01 15:56:16.554224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.149 [2024-10-01 15:56:16.554265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.149 [2024-10-01 15:56:16.554278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.149 [2024-10-01 15:56:16.554284] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.149 [2024-10-01 15:56:16.554291] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:37.149 [2024-10-01 15:56:16.554304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:37.149 qpair failed and we were unable to recover it.
00:38:37.149 [2024-10-01 15:56:16.564242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.149 [2024-10-01 15:56:16.564287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.149 [2024-10-01 15:56:16.564300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.149 [2024-10-01 15:56:16.564307] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.150 [2024-10-01 15:56:16.564313] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:37.150 [2024-10-01 15:56:16.564327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:37.150 qpair failed and we were unable to recover it. 
00:38:37.150 [2024-10-01 15:56:16.574301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.150 [2024-10-01 15:56:16.574347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.150 [2024-10-01 15:56:16.574360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.150 [2024-10-01 15:56:16.574367] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.150 [2024-10-01 15:56:16.574373] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:37.150 [2024-10-01 15:56:16.574387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:37.150 qpair failed and we were unable to recover it. 
00:38:37.150 [2024-10-01 15:56:16.584305] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.150 [2024-10-01 15:56:16.584362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.150 [2024-10-01 15:56:16.584376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.150 [2024-10-01 15:56:16.584382] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.150 [2024-10-01 15:56:16.584389] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:37.150 [2024-10-01 15:56:16.584403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:37.150 qpair failed and we were unable to recover it. 
00:38:37.150 [2024-10-01 15:56:16.594239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.150 [2024-10-01 15:56:16.594302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.150 [2024-10-01 15:56:16.594315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.150 [2024-10-01 15:56:16.594322] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.150 [2024-10-01 15:56:16.594328] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:37.150 [2024-10-01 15:56:16.594342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:37.150 qpair failed and we were unable to recover it. 
00:38:37.412 [2024-10-01 15:56:16.604347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.412 [2024-10-01 15:56:16.604437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.412 [2024-10-01 15:56:16.604451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.412 [2024-10-01 15:56:16.604458] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.412 [2024-10-01 15:56:16.604464] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:37.412 [2024-10-01 15:56:16.604480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:37.412 qpair failed and we were unable to recover it. 
00:38:37.412 [2024-10-01 15:56:16.614303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.412 [2024-10-01 15:56:16.614349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.412 [2024-10-01 15:56:16.614364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.412 [2024-10-01 15:56:16.614371] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.412 [2024-10-01 15:56:16.614377] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:37.412 [2024-10-01 15:56:16.614391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:37.412 qpair failed and we were unable to recover it. 
00:38:37.412 [2024-10-01 15:56:16.624393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.412 [2024-10-01 15:56:16.624438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.412 [2024-10-01 15:56:16.624453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.412 [2024-10-01 15:56:16.624460] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.412 [2024-10-01 15:56:16.624466] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:37.412 [2024-10-01 15:56:16.624480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:37.412 qpair failed and we were unable to recover it. 
00:38:37.412 [2024-10-01 15:56:16.634428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.412 [2024-10-01 15:56:16.634470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.412 [2024-10-01 15:56:16.634483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.412 [2024-10-01 15:56:16.634489] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.412 [2024-10-01 15:56:16.634499] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:37.412 [2024-10-01 15:56:16.634513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:37.413 qpair failed and we were unable to recover it. 
00:38:37.413 [2024-10-01 15:56:16.644327] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.413 [2024-10-01 15:56:16.644370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.413 [2024-10-01 15:56:16.644383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.413 [2024-10-01 15:56:16.644390] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.413 [2024-10-01 15:56:16.644396] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:37.413 [2024-10-01 15:56:16.644410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:37.413 qpair failed and we were unable to recover it. 
00:38:37.413 [2024-10-01 15:56:16.654507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.413 [2024-10-01 15:56:16.654558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.413 [2024-10-01 15:56:16.654571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.413 [2024-10-01 15:56:16.654578] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.413 [2024-10-01 15:56:16.654584] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:37.413 [2024-10-01 15:56:16.654598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:37.413 qpair failed and we were unable to recover it. 
00:38:37.413 [2024-10-01 15:56:16.664519] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.413 [2024-10-01 15:56:16.664567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.413 [2024-10-01 15:56:16.664580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.413 [2024-10-01 15:56:16.664587] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.413 [2024-10-01 15:56:16.664593] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:37.413 [2024-10-01 15:56:16.664606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:37.413 qpair failed and we were unable to recover it. 
00:38:37.413 [2024-10-01 15:56:16.674536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.413 [2024-10-01 15:56:16.674579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.413 [2024-10-01 15:56:16.674592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.413 [2024-10-01 15:56:16.674599] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.413 [2024-10-01 15:56:16.674605] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:37.413 [2024-10-01 15:56:16.674619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:37.413 qpair failed and we were unable to recover it. 
00:38:37.413 [2024-10-01 15:56:16.684568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.413 [2024-10-01 15:56:16.684623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.413 [2024-10-01 15:56:16.684637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.413 [2024-10-01 15:56:16.684644] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.413 [2024-10-01 15:56:16.684650] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:37.413 [2024-10-01 15:56:16.684664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:37.413 qpair failed and we were unable to recover it. 
00:38:37.413 [2024-10-01 15:56:16.694473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.413 [2024-10-01 15:56:16.694521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.413 [2024-10-01 15:56:16.694534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.413 [2024-10-01 15:56:16.694541] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.413 [2024-10-01 15:56:16.694548] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:37.413 [2024-10-01 15:56:16.694562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:37.413 qpair failed and we were unable to recover it. 
00:38:37.413 [2024-10-01 15:56:16.704617] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.413 [2024-10-01 15:56:16.704680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.413 [2024-10-01 15:56:16.704694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.413 [2024-10-01 15:56:16.704701] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.413 [2024-10-01 15:56:16.704707] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:37.413 [2024-10-01 15:56:16.704721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:37.413 qpair failed and we were unable to recover it. 
00:38:37.413 [2024-10-01 15:56:16.714640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.413 [2024-10-01 15:56:16.714687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.413 [2024-10-01 15:56:16.714700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.413 [2024-10-01 15:56:16.714707] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.413 [2024-10-01 15:56:16.714713] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:37.413 [2024-10-01 15:56:16.714727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:37.413 qpair failed and we were unable to recover it. 
00:38:37.413 [2024-10-01 15:56:16.724579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.413 [2024-10-01 15:56:16.724626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.413 [2024-10-01 15:56:16.724640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.413 [2024-10-01 15:56:16.724650] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.413 [2024-10-01 15:56:16.724656] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:37.413 [2024-10-01 15:56:16.724671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:37.413 qpair failed and we were unable to recover it. 
00:38:37.413 [2024-10-01 15:56:16.734573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.413 [2024-10-01 15:56:16.734622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.413 [2024-10-01 15:56:16.734635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.413 [2024-10-01 15:56:16.734642] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.413 [2024-10-01 15:56:16.734648] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:37.413 [2024-10-01 15:56:16.734662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:37.413 qpair failed and we were unable to recover it. 
00:38:37.413 [2024-10-01 15:56:16.744714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.413 [2024-10-01 15:56:16.744764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.413 [2024-10-01 15:56:16.744777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.413 [2024-10-01 15:56:16.744783] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.413 [2024-10-01 15:56:16.744790] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:37.413 [2024-10-01 15:56:16.744804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:37.413 qpair failed and we were unable to recover it. 
00:38:37.413 [2024-10-01 15:56:16.754750] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.413 [2024-10-01 15:56:16.754793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.413 [2024-10-01 15:56:16.754806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.413 [2024-10-01 15:56:16.754813] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.413 [2024-10-01 15:56:16.754819] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:37.413 [2024-10-01 15:56:16.754833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:37.413 qpair failed and we were unable to recover it. 
00:38:37.413 [2024-10-01 15:56:16.764775] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.413 [2024-10-01 15:56:16.764821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.413 [2024-10-01 15:56:16.764840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.413 [2024-10-01 15:56:16.764848] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.413 [2024-10-01 15:56:16.764854] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:37.413 [2024-10-01 15:56:16.764870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:37.413 qpair failed and we were unable to recover it. 
00:38:37.413 [2024-10-01 15:56:16.774689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.413 [2024-10-01 15:56:16.774738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.413 [2024-10-01 15:56:16.774753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.414 [2024-10-01 15:56:16.774760] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.414 [2024-10-01 15:56:16.774767] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:37.414 [2024-10-01 15:56:16.774782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:37.414 qpair failed and we were unable to recover it. 
00:38:37.414 [2024-10-01 15:56:16.784828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.414 [2024-10-01 15:56:16.784869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.414 [2024-10-01 15:56:16.784884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.414 [2024-10-01 15:56:16.784891] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.414 [2024-10-01 15:56:16.784901] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:37.414 [2024-10-01 15:56:16.784916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:37.414 qpair failed and we were unable to recover it. 
00:38:37.414 [2024-10-01 15:56:16.794853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.414 [2024-10-01 15:56:16.794902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.414 [2024-10-01 15:56:16.794917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.414 [2024-10-01 15:56:16.794923] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.414 [2024-10-01 15:56:16.794930] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:37.414 [2024-10-01 15:56:16.794944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:37.414 qpair failed and we were unable to recover it. 
00:38:37.414 [2024-10-01 15:56:16.804866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.414 [2024-10-01 15:56:16.804916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.414 [2024-10-01 15:56:16.804930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.414 [2024-10-01 15:56:16.804937] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.414 [2024-10-01 15:56:16.804943] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:37.414 [2024-10-01 15:56:16.804957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:37.414 qpair failed and we were unable to recover it. 
00:38:37.414 [2024-10-01 15:56:16.814922] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.414 [2024-10-01 15:56:16.814971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.414 [2024-10-01 15:56:16.814988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.414 [2024-10-01 15:56:16.814994] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.414 [2024-10-01 15:56:16.815001] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:37.414 [2024-10-01 15:56:16.815015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:37.414 qpair failed and we were unable to recover it. 
00:38:37.414 [2024-10-01 15:56:16.824947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.414 [2024-10-01 15:56:16.824991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.414 [2024-10-01 15:56:16.825005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.414 [2024-10-01 15:56:16.825012] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.414 [2024-10-01 15:56:16.825018] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:37.414 [2024-10-01 15:56:16.825032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:37.414 qpair failed and we were unable to recover it. 
00:38:37.414 [2024-10-01 15:56:16.834848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.414 [2024-10-01 15:56:16.834888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.414 [2024-10-01 15:56:16.834906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.414 [2024-10-01 15:56:16.834913] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.414 [2024-10-01 15:56:16.834919] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:37.414 [2024-10-01 15:56:16.834938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:37.414 qpair failed and we were unable to recover it. 
00:38:37.414 [2024-10-01 15:56:16.845024] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.414 [2024-10-01 15:56:16.845106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.414 [2024-10-01 15:56:16.845120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.414 [2024-10-01 15:56:16.845126] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.414 [2024-10-01 15:56:16.845132] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:37.414 [2024-10-01 15:56:16.845147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:37.414 qpair failed and we were unable to recover it.
00:38:37.414 [2024-10-01 15:56:16.855028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.414 [2024-10-01 15:56:16.855081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.414 [2024-10-01 15:56:16.855094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.414 [2024-10-01 15:56:16.855101] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.414 [2024-10-01 15:56:16.855107] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:37.414 [2024-10-01 15:56:16.855121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:37.414 qpair failed and we were unable to recover it.
00:38:37.414 [2024-10-01 15:56:16.865049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.414 [2024-10-01 15:56:16.865092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.414 [2024-10-01 15:56:16.865105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.414 [2024-10-01 15:56:16.865112] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.414 [2024-10-01 15:56:16.865118] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:37.414 [2024-10-01 15:56:16.865132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:37.414 qpair failed and we were unable to recover it.
00:38:37.676 [2024-10-01 15:56:16.875078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.676 [2024-10-01 15:56:16.875151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.676 [2024-10-01 15:56:16.875165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.676 [2024-10-01 15:56:16.875172] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.676 [2024-10-01 15:56:16.875178] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:37.676 [2024-10-01 15:56:16.875192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:37.676 qpair failed and we were unable to recover it.
00:38:37.676 [2024-10-01 15:56:16.885109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.676 [2024-10-01 15:56:16.885152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.676 [2024-10-01 15:56:16.885166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.676 [2024-10-01 15:56:16.885173] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.676 [2024-10-01 15:56:16.885179] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:37.676 [2024-10-01 15:56:16.885193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:37.676 qpair failed and we were unable to recover it.
00:38:37.676 [2024-10-01 15:56:16.895137] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.676 [2024-10-01 15:56:16.895182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.676 [2024-10-01 15:56:16.895195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.676 [2024-10-01 15:56:16.895202] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.676 [2024-10-01 15:56:16.895208] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:37.676 [2024-10-01 15:56:16.895221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:37.676 qpair failed and we were unable to recover it.
00:38:37.676 [2024-10-01 15:56:16.905017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.676 [2024-10-01 15:56:16.905062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.676 [2024-10-01 15:56:16.905078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.676 [2024-10-01 15:56:16.905085] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.676 [2024-10-01 15:56:16.905091] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:37.676 [2024-10-01 15:56:16.905105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:37.676 qpair failed and we were unable to recover it.
00:38:37.676 [2024-10-01 15:56:16.915176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.676 [2024-10-01 15:56:16.915216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.676 [2024-10-01 15:56:16.915229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.676 [2024-10-01 15:56:16.915235] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.676 [2024-10-01 15:56:16.915241] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:37.676 [2024-10-01 15:56:16.915255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:37.676 qpair failed and we were unable to recover it.
00:38:37.676 [2024-10-01 15:56:16.925220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.676 [2024-10-01 15:56:16.925294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.676 [2024-10-01 15:56:16.925307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.676 [2024-10-01 15:56:16.925314] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.676 [2024-10-01 15:56:16.925320] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:37.676 [2024-10-01 15:56:16.925333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:37.676 qpair failed and we were unable to recover it.
00:38:37.676 [2024-10-01 15:56:16.935260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.676 [2024-10-01 15:56:16.935307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.676 [2024-10-01 15:56:16.935319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.676 [2024-10-01 15:56:16.935326] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.676 [2024-10-01 15:56:16.935332] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:37.676 [2024-10-01 15:56:16.935346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:37.676 qpair failed and we were unable to recover it.
00:38:37.676 [2024-10-01 15:56:16.945228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.676 [2024-10-01 15:56:16.945271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.676 [2024-10-01 15:56:16.945284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.676 [2024-10-01 15:56:16.945291] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.676 [2024-10-01 15:56:16.945297] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:37.676 [2024-10-01 15:56:16.945314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:37.676 qpair failed and we were unable to recover it.
00:38:37.676 [2024-10-01 15:56:16.955297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.676 [2024-10-01 15:56:16.955341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.677 [2024-10-01 15:56:16.955355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.677 [2024-10-01 15:56:16.955362] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.677 [2024-10-01 15:56:16.955368] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:37.677 [2024-10-01 15:56:16.955382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:37.677 qpair failed and we were unable to recover it.
00:38:37.677 [2024-10-01 15:56:16.965183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.677 [2024-10-01 15:56:16.965231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.677 [2024-10-01 15:56:16.965245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.677 [2024-10-01 15:56:16.965251] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.677 [2024-10-01 15:56:16.965257] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:37.677 [2024-10-01 15:56:16.965271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:37.677 qpair failed and we were unable to recover it.
00:38:37.677 [2024-10-01 15:56:16.975352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.677 [2024-10-01 15:56:16.975399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.677 [2024-10-01 15:56:16.975412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.677 [2024-10-01 15:56:16.975419] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.677 [2024-10-01 15:56:16.975425] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:37.677 [2024-10-01 15:56:16.975439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:37.677 qpair failed and we were unable to recover it.
00:38:37.677 [2024-10-01 15:56:16.985380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.677 [2024-10-01 15:56:16.985423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.677 [2024-10-01 15:56:16.985436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.677 [2024-10-01 15:56:16.985443] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.677 [2024-10-01 15:56:16.985449] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:37.677 [2024-10-01 15:56:16.985463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:37.677 qpair failed and we were unable to recover it.
00:38:37.677 [2024-10-01 15:56:16.995404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.677 [2024-10-01 15:56:16.995446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.677 [2024-10-01 15:56:16.995462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.677 [2024-10-01 15:56:16.995468] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.677 [2024-10-01 15:56:16.995475] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:37.677 [2024-10-01 15:56:16.995489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:37.677 qpair failed and we were unable to recover it.
00:38:37.677 [2024-10-01 15:56:17.005402] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.677 [2024-10-01 15:56:17.005447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.677 [2024-10-01 15:56:17.005460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.677 [2024-10-01 15:56:17.005467] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.677 [2024-10-01 15:56:17.005473] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:37.677 [2024-10-01 15:56:17.005487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:37.677 qpair failed and we were unable to recover it.
00:38:37.677 [2024-10-01 15:56:17.015473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.677 [2024-10-01 15:56:17.015518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.677 [2024-10-01 15:56:17.015531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.677 [2024-10-01 15:56:17.015538] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.677 [2024-10-01 15:56:17.015544] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:37.677 [2024-10-01 15:56:17.015558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:37.677 qpair failed and we were unable to recover it.
00:38:37.677 [2024-10-01 15:56:17.025480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.677 [2024-10-01 15:56:17.025518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.677 [2024-10-01 15:56:17.025532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.677 [2024-10-01 15:56:17.025538] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.677 [2024-10-01 15:56:17.025545] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:37.677 [2024-10-01 15:56:17.025558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:37.677 qpair failed and we were unable to recover it.
00:38:37.677 [2024-10-01 15:56:17.035490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.677 [2024-10-01 15:56:17.035563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.677 [2024-10-01 15:56:17.035577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.677 [2024-10-01 15:56:17.035584] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.677 [2024-10-01 15:56:17.035593] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:37.677 [2024-10-01 15:56:17.035607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:37.677 qpair failed and we were unable to recover it.
00:38:37.677 [2024-10-01 15:56:17.045599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.677 [2024-10-01 15:56:17.045645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.677 [2024-10-01 15:56:17.045658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.677 [2024-10-01 15:56:17.045665] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.677 [2024-10-01 15:56:17.045671] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:37.677 [2024-10-01 15:56:17.045685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:37.677 qpair failed and we were unable to recover it.
00:38:37.677 [2024-10-01 15:56:17.055573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.677 [2024-10-01 15:56:17.055624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.677 [2024-10-01 15:56:17.055638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.677 [2024-10-01 15:56:17.055645] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.677 [2024-10-01 15:56:17.055651] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:37.677 [2024-10-01 15:56:17.055666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:37.677 qpair failed and we were unable to recover it.
00:38:37.677 [2024-10-01 15:56:17.065573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.677 [2024-10-01 15:56:17.065620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.677 [2024-10-01 15:56:17.065633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.677 [2024-10-01 15:56:17.065640] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.677 [2024-10-01 15:56:17.065646] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:37.677 [2024-10-01 15:56:17.065660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:37.677 qpair failed and we were unable to recover it.
00:38:37.677 [2024-10-01 15:56:17.075615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.677 [2024-10-01 15:56:17.075666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.677 [2024-10-01 15:56:17.075679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.677 [2024-10-01 15:56:17.075686] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.677 [2024-10-01 15:56:17.075692] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:37.677 [2024-10-01 15:56:17.075706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:37.677 qpair failed and we were unable to recover it.
00:38:37.677 [2024-10-01 15:56:17.085657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.677 [2024-10-01 15:56:17.085742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.677 [2024-10-01 15:56:17.085768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.677 [2024-10-01 15:56:17.085776] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.677 [2024-10-01 15:56:17.085783] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:37.677 [2024-10-01 15:56:17.085802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:37.678 qpair failed and we were unable to recover it.
00:38:37.678 [2024-10-01 15:56:17.095668] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.678 [2024-10-01 15:56:17.095714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.678 [2024-10-01 15:56:17.095730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.678 [2024-10-01 15:56:17.095737] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.678 [2024-10-01 15:56:17.095743] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:37.678 [2024-10-01 15:56:17.095758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:37.678 qpair failed and we were unable to recover it.
00:38:37.678 [2024-10-01 15:56:17.105678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.678 [2024-10-01 15:56:17.105725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.678 [2024-10-01 15:56:17.105739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.678 [2024-10-01 15:56:17.105746] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.678 [2024-10-01 15:56:17.105752] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:37.678 [2024-10-01 15:56:17.105766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:37.678 qpair failed and we were unable to recover it.
00:38:37.678 [2024-10-01 15:56:17.115612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.678 [2024-10-01 15:56:17.115673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.678 [2024-10-01 15:56:17.115686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.678 [2024-10-01 15:56:17.115693] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.678 [2024-10-01 15:56:17.115699] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:37.678 [2024-10-01 15:56:17.115713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:37.678 qpair failed and we were unable to recover it.
00:38:37.678 [2024-10-01 15:56:17.125733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.678 [2024-10-01 15:56:17.125779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.678 [2024-10-01 15:56:17.125793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.678 [2024-10-01 15:56:17.125800] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.678 [2024-10-01 15:56:17.125810] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:37.678 [2024-10-01 15:56:17.125824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:37.678 qpair failed and we were unable to recover it.
00:38:37.941 [2024-10-01 15:56:17.135790] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.941 [2024-10-01 15:56:17.135835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.941 [2024-10-01 15:56:17.135848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.941 [2024-10-01 15:56:17.135855] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.941 [2024-10-01 15:56:17.135861] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:37.941 [2024-10-01 15:56:17.135875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:37.941 qpair failed and we were unable to recover it.
00:38:37.941 [2024-10-01 15:56:17.145751] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.941 [2024-10-01 15:56:17.145793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.941 [2024-10-01 15:56:17.145807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.941 [2024-10-01 15:56:17.145814] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.941 [2024-10-01 15:56:17.145820] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:37.941 [2024-10-01 15:56:17.145834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:37.941 qpair failed and we were unable to recover it.
00:38:37.941 [2024-10-01 15:56:17.155820] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.941 [2024-10-01 15:56:17.155864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.941 [2024-10-01 15:56:17.155877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.941 [2024-10-01 15:56:17.155884] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.941 [2024-10-01 15:56:17.155890] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:37.941 [2024-10-01 15:56:17.155908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:37.941 qpair failed and we were unable to recover it.
00:38:37.941 [2024-10-01 15:56:17.165813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.941 [2024-10-01 15:56:17.165858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.941 [2024-10-01 15:56:17.165871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.941 [2024-10-01 15:56:17.165878] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.941 [2024-10-01 15:56:17.165884] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:37.941 [2024-10-01 15:56:17.165903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:37.941 qpair failed and we were unable to recover it.
00:38:37.941 [2024-10-01 15:56:17.175873] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.941 [2024-10-01 15:56:17.175927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.941 [2024-10-01 15:56:17.175941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.941 [2024-10-01 15:56:17.175947] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.941 [2024-10-01 15:56:17.175954] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:37.941 [2024-10-01 15:56:17.175968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:37.941 qpair failed and we were unable to recover it.
00:38:37.941 [2024-10-01 15:56:17.185864] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.941 [2024-10-01 15:56:17.185910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.941 [2024-10-01 15:56:17.185924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.941 [2024-10-01 15:56:17.185931] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.941 [2024-10-01 15:56:17.185937] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:37.941 [2024-10-01 15:56:17.185951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:37.941 qpair failed and we were unable to recover it.
00:38:37.941 [2024-10-01 15:56:17.195924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.941 [2024-10-01 15:56:17.196006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.941 [2024-10-01 15:56:17.196020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.941 [2024-10-01 15:56:17.196027] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.941 [2024-10-01 15:56:17.196033] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90
00:38:37.941 [2024-10-01 15:56:17.196049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:37.941 qpair failed and we were unable to recover it.
00:38:37.941 [2024-10-01 15:56:17.205828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.941 [2024-10-01 15:56:17.205872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.941 [2024-10-01 15:56:17.205886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.941 [2024-10-01 15:56:17.205896] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.941 [2024-10-01 15:56:17.205903] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:37.941 [2024-10-01 15:56:17.205923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:37.941 qpair failed and we were unable to recover it. 
00:38:37.941 [2024-10-01 15:56:17.216002] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.941 [2024-10-01 15:56:17.216046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.941 [2024-10-01 15:56:17.216060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.941 [2024-10-01 15:56:17.216070] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.941 [2024-10-01 15:56:17.216076] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:37.941 [2024-10-01 15:56:17.216090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:37.941 qpair failed and we were unable to recover it. 
00:38:37.941 [2024-10-01 15:56:17.225993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.941 [2024-10-01 15:56:17.226038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.941 [2024-10-01 15:56:17.226050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.941 [2024-10-01 15:56:17.226057] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.941 [2024-10-01 15:56:17.226063] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:37.941 [2024-10-01 15:56:17.226077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:37.941 qpair failed and we were unable to recover it. 
00:38:37.941 [2024-10-01 15:56:17.236055] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.941 [2024-10-01 15:56:17.236099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.941 [2024-10-01 15:56:17.236112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.941 [2024-10-01 15:56:17.236119] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.941 [2024-10-01 15:56:17.236125] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:37.941 [2024-10-01 15:56:17.236139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:37.941 qpair failed and we were unable to recover it. 
00:38:37.941 [2024-10-01 15:56:17.246076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.941 [2024-10-01 15:56:17.246172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.941 [2024-10-01 15:56:17.246185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.941 [2024-10-01 15:56:17.246192] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.941 [2024-10-01 15:56:17.246198] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:37.941 [2024-10-01 15:56:17.246211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:37.941 qpair failed and we were unable to recover it. 
00:38:37.941 [2024-10-01 15:56:17.255971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.941 [2024-10-01 15:56:17.256017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.941 [2024-10-01 15:56:17.256031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.942 [2024-10-01 15:56:17.256038] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.942 [2024-10-01 15:56:17.256044] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:37.942 [2024-10-01 15:56:17.256064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:37.942 qpair failed and we were unable to recover it. 
00:38:37.942 [2024-10-01 15:56:17.266113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.942 [2024-10-01 15:56:17.266165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.942 [2024-10-01 15:56:17.266179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.942 [2024-10-01 15:56:17.266185] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.942 [2024-10-01 15:56:17.266192] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:37.942 [2024-10-01 15:56:17.266205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:37.942 qpair failed and we were unable to recover it. 
00:38:37.942 [2024-10-01 15:56:17.276172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.942 [2024-10-01 15:56:17.276265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.942 [2024-10-01 15:56:17.276278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.942 [2024-10-01 15:56:17.276285] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.942 [2024-10-01 15:56:17.276291] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:37.942 [2024-10-01 15:56:17.276305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:37.942 qpair failed and we were unable to recover it. 
00:38:37.942 [2024-10-01 15:56:17.286181] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.942 [2024-10-01 15:56:17.286229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.942 [2024-10-01 15:56:17.286244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.942 [2024-10-01 15:56:17.286251] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.942 [2024-10-01 15:56:17.286257] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:37.942 [2024-10-01 15:56:17.286271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:37.942 qpair failed and we were unable to recover it. 
00:38:37.942 [2024-10-01 15:56:17.296195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.942 [2024-10-01 15:56:17.296250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.942 [2024-10-01 15:56:17.296263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.942 [2024-10-01 15:56:17.296270] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.942 [2024-10-01 15:56:17.296276] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:37.942 [2024-10-01 15:56:17.296290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:37.942 qpair failed and we were unable to recover it. 
00:38:37.942 [2024-10-01 15:56:17.306223] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.942 [2024-10-01 15:56:17.306266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.942 [2024-10-01 15:56:17.306279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.942 [2024-10-01 15:56:17.306289] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.942 [2024-10-01 15:56:17.306296] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:37.942 [2024-10-01 15:56:17.306309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:37.942 qpair failed and we were unable to recover it. 
00:38:37.942 [2024-10-01 15:56:17.316231] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.942 [2024-10-01 15:56:17.316279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.942 [2024-10-01 15:56:17.316292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.942 [2024-10-01 15:56:17.316299] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.942 [2024-10-01 15:56:17.316305] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:37.942 [2024-10-01 15:56:17.316319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:37.942 qpair failed and we were unable to recover it. 
00:38:37.942 [2024-10-01 15:56:17.326243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.942 [2024-10-01 15:56:17.326286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.942 [2024-10-01 15:56:17.326299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.942 [2024-10-01 15:56:17.326306] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.942 [2024-10-01 15:56:17.326312] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:37.942 [2024-10-01 15:56:17.326325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:37.942 qpair failed and we were unable to recover it. 
00:38:37.942 [2024-10-01 15:56:17.336312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.942 [2024-10-01 15:56:17.336357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.942 [2024-10-01 15:56:17.336370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.942 [2024-10-01 15:56:17.336377] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.942 [2024-10-01 15:56:17.336383] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:37.942 [2024-10-01 15:56:17.336397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:37.942 qpair failed and we were unable to recover it. 
00:38:37.942 [2024-10-01 15:56:17.346294] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.942 [2024-10-01 15:56:17.346335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.942 [2024-10-01 15:56:17.346348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.942 [2024-10-01 15:56:17.346354] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.942 [2024-10-01 15:56:17.346360] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:37.942 [2024-10-01 15:56:17.346374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:37.942 qpair failed and we were unable to recover it. 
00:38:37.942 [2024-10-01 15:56:17.356360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.942 [2024-10-01 15:56:17.356404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.942 [2024-10-01 15:56:17.356417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.942 [2024-10-01 15:56:17.356424] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.942 [2024-10-01 15:56:17.356430] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:37.942 [2024-10-01 15:56:17.356444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:37.942 qpair failed and we were unable to recover it. 
00:38:37.942 [2024-10-01 15:56:17.366390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.942 [2024-10-01 15:56:17.366439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.942 [2024-10-01 15:56:17.366452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.942 [2024-10-01 15:56:17.366459] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.942 [2024-10-01 15:56:17.366465] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:37.942 [2024-10-01 15:56:17.366479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:37.942 qpair failed and we were unable to recover it. 
00:38:37.942 [2024-10-01 15:56:17.376412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.942 [2024-10-01 15:56:17.376464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.942 [2024-10-01 15:56:17.376477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.942 [2024-10-01 15:56:17.376484] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.942 [2024-10-01 15:56:17.376490] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:37.942 [2024-10-01 15:56:17.376504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:37.942 qpair failed and we were unable to recover it. 
00:38:37.942 [2024-10-01 15:56:17.386424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.942 [2024-10-01 15:56:17.386464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.942 [2024-10-01 15:56:17.386478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.942 [2024-10-01 15:56:17.386484] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.942 [2024-10-01 15:56:17.386490] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:37.942 [2024-10-01 15:56:17.386504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:37.942 qpair failed and we were unable to recover it. 
00:38:38.205 [2024-10-01 15:56:17.396457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.205 [2024-10-01 15:56:17.396496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.205 [2024-10-01 15:56:17.396517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.205 [2024-10-01 15:56:17.396524] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.205 [2024-10-01 15:56:17.396530] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.205 [2024-10-01 15:56:17.396544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.205 qpair failed and we were unable to recover it. 
00:38:38.205 [2024-10-01 15:56:17.406368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.205 [2024-10-01 15:56:17.406415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.205 [2024-10-01 15:56:17.406428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.205 [2024-10-01 15:56:17.406435] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.205 [2024-10-01 15:56:17.406442] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.205 [2024-10-01 15:56:17.406456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.205 qpair failed and we were unable to recover it. 
00:38:38.205 [2024-10-01 15:56:17.416528] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.205 [2024-10-01 15:56:17.416585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.205 [2024-10-01 15:56:17.416598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.205 [2024-10-01 15:56:17.416605] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.205 [2024-10-01 15:56:17.416611] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.205 [2024-10-01 15:56:17.416625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.205 qpair failed and we were unable to recover it. 
00:38:38.205 [2024-10-01 15:56:17.426547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.205 [2024-10-01 15:56:17.426599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.205 [2024-10-01 15:56:17.426624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.205 [2024-10-01 15:56:17.426632] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.205 [2024-10-01 15:56:17.426639] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.205 [2024-10-01 15:56:17.426658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.205 qpair failed and we were unable to recover it. 
00:38:38.205 [2024-10-01 15:56:17.436547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.205 [2024-10-01 15:56:17.436603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.205 [2024-10-01 15:56:17.436628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.205 [2024-10-01 15:56:17.436636] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.205 [2024-10-01 15:56:17.436643] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.205 [2024-10-01 15:56:17.436667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.205 qpair failed and we were unable to recover it. 
00:38:38.205 [2024-10-01 15:56:17.446608] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.205 [2024-10-01 15:56:17.446653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.205 [2024-10-01 15:56:17.446669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.205 [2024-10-01 15:56:17.446676] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.205 [2024-10-01 15:56:17.446683] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.205 [2024-10-01 15:56:17.446698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.205 qpair failed and we were unable to recover it. 
00:38:38.205 [2024-10-01 15:56:17.456658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.205 [2024-10-01 15:56:17.456711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.205 [2024-10-01 15:56:17.456725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.205 [2024-10-01 15:56:17.456732] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.205 [2024-10-01 15:56:17.456738] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.205 [2024-10-01 15:56:17.456753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.205 qpair failed and we were unable to recover it. 
00:38:38.205 [2024-10-01 15:56:17.466662] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.205 [2024-10-01 15:56:17.466703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.205 [2024-10-01 15:56:17.466718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.205 [2024-10-01 15:56:17.466725] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.205 [2024-10-01 15:56:17.466731] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.205 [2024-10-01 15:56:17.466746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.205 qpair failed and we were unable to recover it. 
00:38:38.205 [2024-10-01 15:56:17.476700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.205 [2024-10-01 15:56:17.476748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.205 [2024-10-01 15:56:17.476762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.205 [2024-10-01 15:56:17.476768] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.205 [2024-10-01 15:56:17.476775] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.205 [2024-10-01 15:56:17.476789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.205 qpair failed and we were unable to recover it. 
00:38:38.205 [2024-10-01 15:56:17.486708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.205 [2024-10-01 15:56:17.486761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.205 [2024-10-01 15:56:17.486778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.205 [2024-10-01 15:56:17.486785] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.205 [2024-10-01 15:56:17.486791] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.205 [2024-10-01 15:56:17.486806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.205 qpair failed and we were unable to recover it. 
00:38:38.205 [2024-10-01 15:56:17.496765] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.205 [2024-10-01 15:56:17.496812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.205 [2024-10-01 15:56:17.496826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.205 [2024-10-01 15:56:17.496833] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.205 [2024-10-01 15:56:17.496839] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.205 [2024-10-01 15:56:17.496853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.205 qpair failed and we were unable to recover it. 
00:38:38.205 [2024-10-01 15:56:17.506775] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.206 [2024-10-01 15:56:17.506820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.206 [2024-10-01 15:56:17.506834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.206 [2024-10-01 15:56:17.506841] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.206 [2024-10-01 15:56:17.506849] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.206 [2024-10-01 15:56:17.506863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.206 qpair failed and we were unable to recover it. 
00:38:38.206 [2024-10-01 15:56:17.516780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.206 [2024-10-01 15:56:17.516838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.206 [2024-10-01 15:56:17.516851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.206 [2024-10-01 15:56:17.516858] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.206 [2024-10-01 15:56:17.516864] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.206 [2024-10-01 15:56:17.516879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.206 qpair failed and we were unable to recover it. 
00:38:38.206 [2024-10-01 15:56:17.526815] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.206 [2024-10-01 15:56:17.526861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.206 [2024-10-01 15:56:17.526874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.206 [2024-10-01 15:56:17.526881] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.206 [2024-10-01 15:56:17.526887] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.206 [2024-10-01 15:56:17.526909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.206 qpair failed and we were unable to recover it. 
00:38:38.206 [2024-10-01 15:56:17.536855] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.206 [2024-10-01 15:56:17.536907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.206 [2024-10-01 15:56:17.536921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.206 [2024-10-01 15:56:17.536928] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.206 [2024-10-01 15:56:17.536934] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.206 [2024-10-01 15:56:17.536949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.206 qpair failed and we were unable to recover it. 
00:38:38.206 [2024-10-01 15:56:17.546884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.206 [2024-10-01 15:56:17.546931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.206 [2024-10-01 15:56:17.546944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.206 [2024-10-01 15:56:17.546951] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.206 [2024-10-01 15:56:17.546957] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.206 [2024-10-01 15:56:17.546971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.206 qpair failed and we were unable to recover it. 
00:38:38.206 [2024-10-01 15:56:17.556898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.206 [2024-10-01 15:56:17.556944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.206 [2024-10-01 15:56:17.556957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.206 [2024-10-01 15:56:17.556964] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.206 [2024-10-01 15:56:17.556970] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.206 [2024-10-01 15:56:17.556984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.206 qpair failed and we were unable to recover it. 
00:38:38.206 [2024-10-01 15:56:17.566932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.206 [2024-10-01 15:56:17.566977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.206 [2024-10-01 15:56:17.566990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.206 [2024-10-01 15:56:17.566997] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.206 [2024-10-01 15:56:17.567003] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.206 [2024-10-01 15:56:17.567017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.206 qpair failed and we were unable to recover it. 
00:38:38.206 [2024-10-01 15:56:17.577011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.206 [2024-10-01 15:56:17.577080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.206 [2024-10-01 15:56:17.577094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.206 [2024-10-01 15:56:17.577101] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.206 [2024-10-01 15:56:17.577107] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.206 [2024-10-01 15:56:17.577121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.206 qpair failed and we were unable to recover it. 
00:38:38.206 [2024-10-01 15:56:17.586982] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.206 [2024-10-01 15:56:17.587025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.206 [2024-10-01 15:56:17.587039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.206 [2024-10-01 15:56:17.587046] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.206 [2024-10-01 15:56:17.587052] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.206 [2024-10-01 15:56:17.587066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.206 qpair failed and we were unable to recover it. 
00:38:38.206 [2024-10-01 15:56:17.597003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.206 [2024-10-01 15:56:17.597043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.206 [2024-10-01 15:56:17.597056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.206 [2024-10-01 15:56:17.597063] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.206 [2024-10-01 15:56:17.597069] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.206 [2024-10-01 15:56:17.597083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.206 qpair failed and we were unable to recover it. 
00:38:38.206 [2024-10-01 15:56:17.607024] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.206 [2024-10-01 15:56:17.607099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.206 [2024-10-01 15:56:17.607113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.206 [2024-10-01 15:56:17.607120] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.206 [2024-10-01 15:56:17.607126] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.206 [2024-10-01 15:56:17.607139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.206 qpair failed and we were unable to recover it. 
00:38:38.206 [2024-10-01 15:56:17.617074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.206 [2024-10-01 15:56:17.617119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.206 [2024-10-01 15:56:17.617132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.206 [2024-10-01 15:56:17.617138] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.206 [2024-10-01 15:56:17.617148] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.206 [2024-10-01 15:56:17.617162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.206 qpair failed and we were unable to recover it. 
00:38:38.206 [2024-10-01 15:56:17.627070] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.206 [2024-10-01 15:56:17.627125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.206 [2024-10-01 15:56:17.627138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.206 [2024-10-01 15:56:17.627145] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.206 [2024-10-01 15:56:17.627151] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.206 [2024-10-01 15:56:17.627165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.206 qpair failed and we were unable to recover it. 
00:38:38.206 [2024-10-01 15:56:17.636976] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.206 [2024-10-01 15:56:17.637025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.206 [2024-10-01 15:56:17.637038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.206 [2024-10-01 15:56:17.637045] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.206 [2024-10-01 15:56:17.637051] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.207 [2024-10-01 15:56:17.637065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.207 qpair failed and we were unable to recover it. 
00:38:38.207 [2024-10-01 15:56:17.647019] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.207 [2024-10-01 15:56:17.647062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.207 [2024-10-01 15:56:17.647076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.207 [2024-10-01 15:56:17.647083] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.207 [2024-10-01 15:56:17.647089] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.207 [2024-10-01 15:56:17.647104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.207 qpair failed and we were unable to recover it. 
00:38:38.207 [2024-10-01 15:56:17.657041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.207 [2024-10-01 15:56:17.657089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.207 [2024-10-01 15:56:17.657102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.207 [2024-10-01 15:56:17.657109] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.207 [2024-10-01 15:56:17.657115] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.207 [2024-10-01 15:56:17.657130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.207 qpair failed and we were unable to recover it. 
00:38:38.469 [2024-10-01 15:56:17.667192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.469 [2024-10-01 15:56:17.667235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.469 [2024-10-01 15:56:17.667249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.469 [2024-10-01 15:56:17.667256] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.470 [2024-10-01 15:56:17.667262] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.470 [2024-10-01 15:56:17.667276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.470 qpair failed and we were unable to recover it. 
00:38:38.470 [2024-10-01 15:56:17.677210] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.470 [2024-10-01 15:56:17.677257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.470 [2024-10-01 15:56:17.677270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.470 [2024-10-01 15:56:17.677277] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.470 [2024-10-01 15:56:17.677283] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.470 [2024-10-01 15:56:17.677298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.470 qpair failed and we were unable to recover it. 
00:38:38.470 [2024-10-01 15:56:17.687215] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.470 [2024-10-01 15:56:17.687265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.470 [2024-10-01 15:56:17.687280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.470 [2024-10-01 15:56:17.687288] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.470 [2024-10-01 15:56:17.687294] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.470 [2024-10-01 15:56:17.687313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.470 qpair failed and we were unable to recover it. 
00:38:38.470 [2024-10-01 15:56:17.697153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.470 [2024-10-01 15:56:17.697205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.470 [2024-10-01 15:56:17.697219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.470 [2024-10-01 15:56:17.697226] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.470 [2024-10-01 15:56:17.697232] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.470 [2024-10-01 15:56:17.697246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.470 qpair failed and we were unable to recover it. 
00:38:38.470 [2024-10-01 15:56:17.707256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.470 [2024-10-01 15:56:17.707297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.470 [2024-10-01 15:56:17.707311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.470 [2024-10-01 15:56:17.707321] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.470 [2024-10-01 15:56:17.707327] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.470 [2024-10-01 15:56:17.707341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.470 qpair failed and we were unable to recover it. 
00:38:38.470 [2024-10-01 15:56:17.717340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.470 [2024-10-01 15:56:17.717395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.470 [2024-10-01 15:56:17.717408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.470 [2024-10-01 15:56:17.717415] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.470 [2024-10-01 15:56:17.717421] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.470 [2024-10-01 15:56:17.717434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.470 qpair failed and we were unable to recover it. 
00:38:38.470 [2024-10-01 15:56:17.727351] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.470 [2024-10-01 15:56:17.727400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.470 [2024-10-01 15:56:17.727413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.470 [2024-10-01 15:56:17.727420] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.470 [2024-10-01 15:56:17.727426] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.470 [2024-10-01 15:56:17.727440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.470 qpair failed and we were unable to recover it. 
00:38:38.470 [2024-10-01 15:56:17.737379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.470 [2024-10-01 15:56:17.737430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.470 [2024-10-01 15:56:17.737443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.470 [2024-10-01 15:56:17.737450] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.470 [2024-10-01 15:56:17.737456] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.470 [2024-10-01 15:56:17.737470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.470 qpair failed and we were unable to recover it. 
00:38:38.470 [2024-10-01 15:56:17.747252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.470 [2024-10-01 15:56:17.747298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.470 [2024-10-01 15:56:17.747313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.470 [2024-10-01 15:56:17.747320] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.470 [2024-10-01 15:56:17.747326] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.470 [2024-10-01 15:56:17.747340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.470 qpair failed and we were unable to recover it. 
00:38:38.470 [2024-10-01 15:56:17.757440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.470 [2024-10-01 15:56:17.757484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.470 [2024-10-01 15:56:17.757497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.470 [2024-10-01 15:56:17.757504] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.470 [2024-10-01 15:56:17.757510] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.470 [2024-10-01 15:56:17.757524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.470 qpair failed and we were unable to recover it. 
00:38:38.470 [2024-10-01 15:56:17.767485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.470 [2024-10-01 15:56:17.767570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.470 [2024-10-01 15:56:17.767590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.470 [2024-10-01 15:56:17.767597] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.470 [2024-10-01 15:56:17.767604] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.470 [2024-10-01 15:56:17.767621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.470 qpair failed and we were unable to recover it. 
00:38:38.470 [2024-10-01 15:56:17.777492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.470 [2024-10-01 15:56:17.777580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.470 [2024-10-01 15:56:17.777594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.470 [2024-10-01 15:56:17.777601] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.470 [2024-10-01 15:56:17.777608] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.470 [2024-10-01 15:56:17.777623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.470 qpair failed and we were unable to recover it. 
00:38:38.470 [2024-10-01 15:56:17.787512] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.470 [2024-10-01 15:56:17.787558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.470 [2024-10-01 15:56:17.787573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.470 [2024-10-01 15:56:17.787579] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.470 [2024-10-01 15:56:17.787586] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.470 [2024-10-01 15:56:17.787600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.470 qpair failed and we were unable to recover it. 
00:38:38.470 [2024-10-01 15:56:17.797526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.470 [2024-10-01 15:56:17.797588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.470 [2024-10-01 15:56:17.797603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.470 [2024-10-01 15:56:17.797614] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.470 [2024-10-01 15:56:17.797623] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.470 [2024-10-01 15:56:17.797638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.471 qpair failed and we were unable to recover it. 
00:38:38.471 [2024-10-01 15:56:17.807541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.471 [2024-10-01 15:56:17.807600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.471 [2024-10-01 15:56:17.807614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.471 [2024-10-01 15:56:17.807621] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.471 [2024-10-01 15:56:17.807627] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.471 [2024-10-01 15:56:17.807641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.471 qpair failed and we were unable to recover it. 
00:38:38.471 [2024-10-01 15:56:17.817470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.471 [2024-10-01 15:56:17.817519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.471 [2024-10-01 15:56:17.817534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.471 [2024-10-01 15:56:17.817541] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.471 [2024-10-01 15:56:17.817548] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.471 [2024-10-01 15:56:17.817562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.471 qpair failed and we were unable to recover it. 
00:38:38.471 [2024-10-01 15:56:17.827604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.471 [2024-10-01 15:56:17.827650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.471 [2024-10-01 15:56:17.827664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.471 [2024-10-01 15:56:17.827673] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.471 [2024-10-01 15:56:17.827680] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.471 [2024-10-01 15:56:17.827693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.471 qpair failed and we were unable to recover it. 
00:38:38.471 [2024-10-01 15:56:17.837639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.471 [2024-10-01 15:56:17.837682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.471 [2024-10-01 15:56:17.837695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.471 [2024-10-01 15:56:17.837702] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.471 [2024-10-01 15:56:17.837709] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.471 [2024-10-01 15:56:17.837723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.471 qpair failed and we were unable to recover it. 
00:38:38.471 [2024-10-01 15:56:17.847673] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.471 [2024-10-01 15:56:17.847720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.471 [2024-10-01 15:56:17.847734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.471 [2024-10-01 15:56:17.847741] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.471 [2024-10-01 15:56:17.847747] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.471 [2024-10-01 15:56:17.847761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.471 qpair failed and we were unable to recover it. 
00:38:38.471 [2024-10-01 15:56:17.857668] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.471 [2024-10-01 15:56:17.857719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.471 [2024-10-01 15:56:17.857732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.471 [2024-10-01 15:56:17.857739] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.471 [2024-10-01 15:56:17.857745] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.471 [2024-10-01 15:56:17.857759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.471 qpair failed and we were unable to recover it. 
00:38:38.471 [2024-10-01 15:56:17.867714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.471 [2024-10-01 15:56:17.867758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.471 [2024-10-01 15:56:17.867771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.471 [2024-10-01 15:56:17.867778] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.471 [2024-10-01 15:56:17.867784] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.471 [2024-10-01 15:56:17.867798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.471 qpair failed and we were unable to recover it. 
00:38:38.471 [2024-10-01 15:56:17.877724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.471 [2024-10-01 15:56:17.877769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.471 [2024-10-01 15:56:17.877782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.471 [2024-10-01 15:56:17.877789] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.471 [2024-10-01 15:56:17.877796] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.471 [2024-10-01 15:56:17.877810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.471 qpair failed and we were unable to recover it. 
00:38:38.471 [2024-10-01 15:56:17.887636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.471 [2024-10-01 15:56:17.887681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.471 [2024-10-01 15:56:17.887699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.471 [2024-10-01 15:56:17.887706] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.471 [2024-10-01 15:56:17.887712] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.471 [2024-10-01 15:56:17.887726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.471 qpair failed and we were unable to recover it. 
00:38:38.471 [2024-10-01 15:56:17.897828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.471 [2024-10-01 15:56:17.897875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.471 [2024-10-01 15:56:17.897889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.471 [2024-10-01 15:56:17.897901] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.471 [2024-10-01 15:56:17.897907] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.471 [2024-10-01 15:56:17.897922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.471 qpair failed and we were unable to recover it. 
00:38:38.471 [2024-10-01 15:56:17.907800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.471 [2024-10-01 15:56:17.907842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.471 [2024-10-01 15:56:17.907856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.471 [2024-10-01 15:56:17.907862] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.471 [2024-10-01 15:56:17.907868] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.471 [2024-10-01 15:56:17.907883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.471 qpair failed and we were unable to recover it. 
00:38:38.471 [2024-10-01 15:56:17.917852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.471 [2024-10-01 15:56:17.917903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.471 [2024-10-01 15:56:17.917917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.471 [2024-10-01 15:56:17.917924] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.471 [2024-10-01 15:56:17.917931] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.471 [2024-10-01 15:56:17.917945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.471 qpair failed and we were unable to recover it. 
00:38:38.733 [2024-10-01 15:56:17.927892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.733 [2024-10-01 15:56:17.927946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.733 [2024-10-01 15:56:17.927959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.733 [2024-10-01 15:56:17.927966] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.733 [2024-10-01 15:56:17.927973] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.733 [2024-10-01 15:56:17.927991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.733 qpair failed and we were unable to recover it. 
00:38:38.733 [2024-10-01 15:56:17.937774] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.733 [2024-10-01 15:56:17.937826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.733 [2024-10-01 15:56:17.937839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.733 [2024-10-01 15:56:17.937846] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.733 [2024-10-01 15:56:17.937852] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.733 [2024-10-01 15:56:17.937866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.733 qpair failed and we were unable to recover it. 
00:38:38.733 [2024-10-01 15:56:17.947928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.733 [2024-10-01 15:56:17.947974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.733 [2024-10-01 15:56:17.947988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.733 [2024-10-01 15:56:17.947994] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.733 [2024-10-01 15:56:17.948001] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.733 [2024-10-01 15:56:17.948015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.733 qpair failed and we were unable to recover it. 
00:38:38.733 [2024-10-01 15:56:17.957947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.733 [2024-10-01 15:56:17.957991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.733 [2024-10-01 15:56:17.958005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.733 [2024-10-01 15:56:17.958011] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.733 [2024-10-01 15:56:17.958018] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.733 [2024-10-01 15:56:17.958032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.733 qpair failed and we were unable to recover it. 
00:38:38.733 [2024-10-01 15:56:17.967978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.733 [2024-10-01 15:56:17.968069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.733 [2024-10-01 15:56:17.968083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.733 [2024-10-01 15:56:17.968089] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.733 [2024-10-01 15:56:17.968096] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.733 [2024-10-01 15:56:17.968109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.733 qpair failed and we were unable to recover it. 
00:38:38.733 [2024-10-01 15:56:17.978040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.733 [2024-10-01 15:56:17.978091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.733 [2024-10-01 15:56:17.978107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.733 [2024-10-01 15:56:17.978114] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.733 [2024-10-01 15:56:17.978120] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.733 [2024-10-01 15:56:17.978134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.734 qpair failed and we were unable to recover it. 
00:38:38.734 [2024-10-01 15:56:17.988043] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.734 [2024-10-01 15:56:17.988086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.734 [2024-10-01 15:56:17.988099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.734 [2024-10-01 15:56:17.988106] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.734 [2024-10-01 15:56:17.988112] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23a4000b90 00:38:38.734 [2024-10-01 15:56:17.988126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:38.734 qpair failed and we were unable to recover it. 
00:38:38.734 [2024-10-01 15:56:17.998066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.734 [2024-10-01 15:56:17.998179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.734 [2024-10-01 15:56:17.998243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.734 [2024-10-01 15:56:17.998267] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.734 [2024-10-01 15:56:17.998289] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2398000b90 00:38:38.734 [2024-10-01 15:56:17.998342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:38.734 qpair failed and we were unable to recover it. 
00:38:38.734 [2024-10-01 15:56:18.008091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.734 [2024-10-01 15:56:18.008163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.734 [2024-10-01 15:56:18.008196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.734 [2024-10-01 15:56:18.008212] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.734 [2024-10-01 15:56:18.008228] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2398000b90 00:38:38.734 [2024-10-01 15:56:18.008262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:38.734 qpair failed and we were unable to recover it. 00:38:38.734 [2024-10-01 15:56:18.008417] nvme_ctrlr.c:4505:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:38:38.734 A controller has encountered a failure and is being reset. 00:38:38.734 Controller properly reset. 00:38:38.734 Initializing NVMe Controllers 00:38:38.734 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:38.734 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:38.734 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:38:38.734 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:38:38.734 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:38:38.734 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:38:38.734 Initialization complete. Launching workers. 
00:38:38.734 Starting thread on core 1 00:38:38.734 Starting thread on core 2 00:38:38.734 Starting thread on core 3 00:38:38.734 Starting thread on core 0 00:38:38.734 15:56:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:38:38.734 00:38:38.734 real 0m11.384s 00:38:38.734 user 0m21.951s 00:38:38.734 sys 0m3.803s 00:38:38.734 15:56:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:38.734 15:56:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:38.734 ************************************ 00:38:38.734 END TEST nvmf_target_disconnect_tc2 00:38:38.734 ************************************ 00:38:38.734 15:56:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:38:38.734 15:56:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:38:38.734 15:56:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:38:38.734 15:56:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@512 -- # nvmfcleanup 00:38:38.734 15:56:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:38:38.734 15:56:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:38.734 15:56:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:38:38.734 15:56:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:38.734 15:56:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:38.734 rmmod nvme_tcp 00:38:38.734 rmmod nvme_fabrics 00:38:38.995 rmmod nvme_keyring 00:38:38.995 15:56:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:38:38.995 15:56:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:38:38.995 15:56:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:38:38.995 15:56:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@513 -- # '[' -n 3411606 ']' 00:38:38.995 15:56:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@514 -- # killprocess 3411606 00:38:38.995 15:56:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 3411606 ']' 00:38:38.995 15:56:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 3411606 00:38:38.995 15:56:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:38:38.995 15:56:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:38.995 15:56:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3411606 00:38:38.995 15:56:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:38:38.995 15:56:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:38:38.995 15:56:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3411606' 00:38:38.995 killing process with pid 3411606 00:38:38.995 15:56:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 3411606 00:38:38.995 15:56:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 3411606 00:38:38.995 15:56:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:38:38.995 15:56:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:38:38.995 15:56:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@520 -- # nvmf_tcp_fini 00:38:38.995 15:56:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:38:38.995 15:56:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@787 -- # iptables-save 00:38:38.995 15:56:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:38:38.995 15:56:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@787 -- # iptables-restore 00:38:38.995 15:56:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:38.995 15:56:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:38.995 15:56:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:38.996 15:56:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:38.996 15:56:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:41.542 15:56:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:41.542 00:38:41.542 real 0m21.663s 00:38:41.542 user 0m49.963s 00:38:41.542 sys 0m9.880s 00:38:41.542 15:56:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:41.542 15:56:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:41.542 ************************************ 00:38:41.542 END TEST nvmf_target_disconnect 00:38:41.542 ************************************ 00:38:41.542 15:56:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:38:41.542 00:38:41.542 real 7m56.180s 00:38:41.542 user 17m18.400s 00:38:41.542 sys 2m27.298s 00:38:41.542 15:56:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:41.542 15:56:20 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:38:41.542 ************************************ 00:38:41.542 END TEST nvmf_host 00:38:41.542 ************************************ 00:38:41.542 15:56:20 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:38:41.542 15:56:20 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:38:41.542 15:56:20 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:38:41.542 15:56:20 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:41.542 15:56:20 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:41.542 15:56:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:41.542 ************************************ 00:38:41.542 START TEST nvmf_target_core_interrupt_mode 00:38:41.542 ************************************ 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:38:41.543 * Looking for test storage... 
00:38:41.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # lcov --version 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:38:41.543 15:56:20 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:41.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:41.543 --rc 
genhtml_branch_coverage=1 00:38:41.543 --rc genhtml_function_coverage=1 00:38:41.543 --rc genhtml_legend=1 00:38:41.543 --rc geninfo_all_blocks=1 00:38:41.543 --rc geninfo_unexecuted_blocks=1 00:38:41.543 00:38:41.543 ' 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:41.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:41.543 --rc genhtml_branch_coverage=1 00:38:41.543 --rc genhtml_function_coverage=1 00:38:41.543 --rc genhtml_legend=1 00:38:41.543 --rc geninfo_all_blocks=1 00:38:41.543 --rc geninfo_unexecuted_blocks=1 00:38:41.543 00:38:41.543 ' 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:41.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:41.543 --rc genhtml_branch_coverage=1 00:38:41.543 --rc genhtml_function_coverage=1 00:38:41.543 --rc genhtml_legend=1 00:38:41.543 --rc geninfo_all_blocks=1 00:38:41.543 --rc geninfo_unexecuted_blocks=1 00:38:41.543 00:38:41.543 ' 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:41.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:41.543 --rc genhtml_branch_coverage=1 00:38:41.543 --rc genhtml_function_coverage=1 00:38:41.543 --rc genhtml_legend=1 00:38:41.543 --rc geninfo_all_blocks=1 00:38:41.543 --rc geninfo_unexecuted_blocks=1 00:38:41.543 00:38:41.543 ' 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:41.543 
15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.543 15:56:20 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:41.543 
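The `paths/export.sh@2`-`@4` traces above show the toolchain directories (`/opt/go/...`, `/opt/golangci/...`, `/opt/protoc/...`) being prepended to `PATH` unconditionally, so each time the file is re-sourced the same directories pile up again, which is why the traced `PATH` values contain so many repeats. A minimal sketch of an idempotent prepend (the `prepend_path` helper name is ours, not SPDK's):

```shell
#!/usr/bin/env bash
# prepend_path DIR: put DIR at the front of PATH only if it is not already there.
# Hypothetical helper illustrating how the duplication seen in the trace could
# be avoided; the real paths/export.sh prepends without checking.
prepend_path() {
    case ":$PATH:" in
        *":$1:"*) ;;                 # already on PATH: leave it unchanged
        *) PATH="$1:$PATH" ;;        # not present: prepend once
    esac
}

PATH=/usr/local/bin:/usr/bin:/bin
prepend_path /opt/go/1.21.1/bin
prepend_path /opt/go/1.21.1/bin      # second call is a no-op
```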
15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:38:41.543 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:38:41.544 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:38:41.544 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:38:41.544 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:41.544 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:41.544 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:41.544 ************************************ 00:38:41.544 START TEST nvmf_abort 00:38:41.544 ************************************ 00:38:41.544 15:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:38:41.806 * Looking for test storage... 
00:38:41.806 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:41.806 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:41.806 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:38:41.806 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:41.806 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:41.806 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:41.806 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:41.806 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:41.806 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:38:41.806 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:38:41.806 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:38:41.806 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:38:41.806 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:38:41.806 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:38:41.806 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:38:41.806 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:41.806 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:38:41.806 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:38:41.806 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:41.806 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:41.806 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:38:41.806 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:38:41.806 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:41.806 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:38:41.806 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:38:41.806 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:38:41.806 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:38:41.806 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:41.806 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:38:41.806 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:38:41.806 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:41.806 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:41.806 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:38:41.806 15:56:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:41.806 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:41.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:41.806 --rc genhtml_branch_coverage=1 00:38:41.807 --rc genhtml_function_coverage=1 00:38:41.807 --rc genhtml_legend=1 00:38:41.807 --rc geninfo_all_blocks=1 00:38:41.807 --rc geninfo_unexecuted_blocks=1 00:38:41.807 00:38:41.807 ' 00:38:41.807 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:41.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:41.807 --rc genhtml_branch_coverage=1 00:38:41.807 --rc genhtml_function_coverage=1 00:38:41.807 --rc genhtml_legend=1 00:38:41.807 --rc geninfo_all_blocks=1 00:38:41.807 --rc geninfo_unexecuted_blocks=1 00:38:41.807 00:38:41.807 ' 00:38:41.807 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:41.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:41.807 --rc genhtml_branch_coverage=1 00:38:41.807 --rc genhtml_function_coverage=1 00:38:41.807 --rc genhtml_legend=1 00:38:41.807 --rc geninfo_all_blocks=1 00:38:41.807 --rc geninfo_unexecuted_blocks=1 00:38:41.807 00:38:41.807 ' 00:38:41.807 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:41.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:41.807 --rc genhtml_branch_coverage=1 00:38:41.807 --rc genhtml_function_coverage=1 00:38:41.807 --rc genhtml_legend=1 00:38:41.807 --rc geninfo_all_blocks=1 00:38:41.807 --rc geninfo_unexecuted_blocks=1 00:38:41.807 00:38:41.807 ' 00:38:41.807 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
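The `scripts/common.sh` trace that precedes each LCOV setup (lines `@333`-`@368` above) is the `cmp_versions` routine: it splits the two dotted versions into arrays on `IFS=.-:` and compares them component by component, here deciding that the installed `lcov` 1.x is older than 2 before choosing the `--rc` coverage options. A self-contained sketch of the same component-wise comparison (the `ver_lt` function name is ours, not SPDK's):

```shell
#!/usr/bin/env bash
# ver_lt A B: succeed (exit 0) iff version A sorts strictly before version B,
# comparing numeric components split on the same IFS=.-: used in the trace.
# Illustrative re-implementation of the cmp_versions "<" path, not SPDK code.
ver_lt() {
    local -a v1 v2
    local IFS='.-:' i n
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    # Walk the longer of the two component lists; missing components count as 0.
    n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        if (( ${v1[i]:-0} > ${v2[i]:-0} )); then return 1; fi
        if (( ${v1[i]:-0} < ${v2[i]:-0} )); then return 0; fi
    done
    return 1    # all components equal: not strictly less-than
}
```

Note the comparison is numeric per component, so `1.15 < 2` holds even though the string `"1.15"` sorts after `"2"` lexicographically, which is exactly the case the trace exercises.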
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:41.807 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:38:41.807 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:41.807 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:41.807 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:41.807 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:41.807 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:41.807 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:41.807 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:41.807 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:41.807 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:41.807 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:41.807 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:41.807 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:41.807 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:41.807 15:56:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:41.807 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:41.807 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:41.807 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:41.807 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:38:41.807 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:41.807 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:41.807 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:41.807 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.807 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.807 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.807 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:38:41.807 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.807 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:38:41.807 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:41.807 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:41.807 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:41.807 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:41.807 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:41.807 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:41.807 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:41.807 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:41.807 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:41.807 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:41.807 15:56:21 
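The `build_nvmf_app_args` trace above (`nvmf/common.sh@25`-`@39`) assembles the target's command line by conditionally appending to a bash array; with interrupt-mode testing enabled, the `'[' 1 -eq 1 ']'` branch adds `--interrupt-mode`. A minimal sketch of that pattern, with illustrative values (the real script derives the flags from its environment and arguments):

```shell
#!/usr/bin/env bash
# Sketch of conditional argument assembly in the style of build_nvmf_app_args.
# NVMF_APP_SHM_ID=0 and TEST_INTERRUPT_MODE=1 are assumed defaults for the demo.
NVMF_APP_SHM_ID=0
TEST_INTERRUPT_MODE=1

NVMF_APP=(nvmf_tgt)
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shared-memory id and log mask
if [ "$TEST_INTERRUPT_MODE" -eq 1 ]; then
    NVMF_APP+=(--interrupt-mode)              # matches nvmf/common.sh@34 above
fi
```

Building the command as an array (rather than a string) keeps each argument intact through later expansion as `"${NVMF_APP[@]}"`, even if a value contains spaces.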
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:41.807 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:38:41.807 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:38:41.807 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:38:41.807 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:41.807 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@472 -- # prepare_net_devs 00:38:41.807 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@434 -- # local -g is_hw=no 00:38:41.807 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@436 -- # remove_spdk_ns 00:38:41.807 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:41.807 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:41.807 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:41.807 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:38:41.807 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:38:41.807 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:38:41.807 15:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:49.945 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:38:49.945 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:38:49.945 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:49.945 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:49.945 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:49.945 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:49.945 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:49.945 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:38:49.945 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:49.945 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:38:49.945 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:38:49.945 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:38:49.945 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:38:49.945 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:38:49.945 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:38:49.945 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:49.945 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:49.945 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:49.945 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:49.945 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:49.945 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:49.945 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:49.945 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:49.945 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:49.945 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:38:49.946 15:56:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:38:49.946 Found 0000:31:00.0 (0x8086 - 0x159b) 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:38:49.946 Found 0000:31:00.1 (0x8086 - 0x159b) 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:38:49.946 
15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ up == up ]] 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:38:49.946 Found net devices under 0000:31:00.0: cvl_0_0 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@413 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ up == up ]] 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:38:49.946 Found net devices under 0000:31:00.1: cvl_0_1 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # is_hw=yes 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:49.946 15:56:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- 
# ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:49.946 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:49.946 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.599 ms 00:38:49.946 00:38:49.946 --- 10.0.0.2 ping statistics --- 00:38:49.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:49.946 rtt min/avg/max/mdev = 0.599/0.599/0.599/0.000 ms 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:49.946 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:49.946 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:38:49.946 00:38:49.946 --- 10.0.0.1 ping statistics --- 00:38:49.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:49.946 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # return 0 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@505 -- # 
nvmfpid=3417114 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@506 -- # waitforlisten 3417114 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 3417114 ']' 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:49.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:49.946 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:49.947 15:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:49.947 [2024-10-01 15:56:28.622756] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:49.947 [2024-10-01 15:56:28.623731] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:38:49.947 [2024-10-01 15:56:28.623767] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:49.947 [2024-10-01 15:56:28.660291] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:38:49.947 [2024-10-01 15:56:28.706734] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:49.947 [2024-10-01 15:56:28.738462] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:49.947 [2024-10-01 15:56:28.738496] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:49.947 [2024-10-01 15:56:28.738504] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:49.947 [2024-10-01 15:56:28.738511] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:49.947 [2024-10-01 15:56:28.738517] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:49.947 [2024-10-01 15:56:28.738661] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:38:49.947 [2024-10-01 15:56:28.738794] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:38:49.947 [2024-10-01 15:56:28.738796] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:38:49.947 [2024-10-01 15:56:28.802279] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:49.947 [2024-10-01 15:56:28.803306] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:49.947 [2024-10-01 15:56:28.803846] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:49.947 [2024-10-01 15:56:28.804027] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
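The target/initiator plumbing traced earlier in this log (move one port into a network namespace, address both ends, open the NVMe/TCP listener port, verify reachability) follows a common pattern. A dry-run sketch using the interface names and addresses from this run (cvl_0_0 / cvl_0_1, 10.0.0.1/10.0.0.2) — the `run` helper only prints each step so the sketch is safe to execute anywhere; drop the `echo` to apply it for real (requires root and the actual NICs):

```shell
#!/bin/sh
# Dry-run sketch of the NVMe/TCP test-bed setup traced in this log.
# "run" prints each command instead of executing it.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk

run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"              # target port into the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side stays in the root ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
run ping -c 1 10.0.0.2                            # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1        # target -> initiator
```

Keeping the target port in its own namespace is what lets a single host exercise real NIC-to-NIC TCP traffic: the kernel cannot short-circuit the connection over loopback, so the e810 ports actually carry the NVMe/TCP frames.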
00:38:50.207 15:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:50.207 15:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:38:50.207 15:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:38:50.207 15:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:50.207 15:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:50.207 15:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:50.207 15:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:38:50.207 15:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:50.207 15:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:50.207 [2024-10-01 15:56:29.463653] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:50.207 15:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:50.207 15:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:38:50.207 15:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:50.207 15:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:50.207 Malloc0 00:38:50.207 15:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:50.207 15:56:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:38:50.207 15:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:50.207 15:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:50.207 Delay0 00:38:50.207 15:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:50.207 15:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:50.207 15:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:50.207 15:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:50.207 15:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:50.207 15:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:38:50.207 15:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:50.207 15:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:50.207 15:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:50.207 15:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:50.207 15:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:50.207 15:56:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:50.207 [2024-10-01 15:56:29.551658] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:50.207 15:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:50.207 15:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:50.207 15:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:50.208 15:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:50.208 15:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:50.208 15:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:38:50.468 [2024-10-01 15:56:29.683677] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:38:52.375 Initializing NVMe Controllers 00:38:52.375 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:38:52.375 controller IO queue size 128 less than required 00:38:52.375 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:38:52.375 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:38:52.375 Initialization complete. Launching workers. 
00:38:52.375 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28921 00:38:52.375 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28978, failed to submit 66 00:38:52.375 success 28921, unsuccessful 57, failed 0 00:38:52.375 15:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:52.376 15:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:52.376 15:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:52.376 15:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:52.376 15:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:38:52.376 15:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:38:52.376 15:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # nvmfcleanup 00:38:52.376 15:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:38:52.376 15:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:52.376 15:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:38:52.376 15:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:52.376 15:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:52.376 rmmod nvme_tcp 00:38:52.376 rmmod nvme_fabrics 00:38:52.376 rmmod nvme_keyring 00:38:52.376 15:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:52.376 15:56:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:38:52.376 15:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:38:52.376 15:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@513 -- # '[' -n 3417114 ']' 00:38:52.376 15:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@514 -- # killprocess 3417114 00:38:52.376 15:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 3417114 ']' 00:38:52.376 15:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 3417114 00:38:52.376 15:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:38:52.376 15:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:52.376 15:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3417114 00:38:52.635 15:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:52.635 15:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:38:52.635 15:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3417114' 00:38:52.635 killing process with pid 3417114 00:38:52.635 15:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@969 -- # kill 3417114 00:38:52.635 15:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@974 -- # wait 3417114 00:38:52.635 15:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:38:52.635 15:56:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:38:52.635 15:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:38:52.635 15:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:38:52.635 15:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@787 -- # iptables-save 00:38:52.635 15:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:38:52.635 15:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@787 -- # iptables-restore 00:38:52.635 15:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:52.635 15:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:52.635 15:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:52.635 15:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:52.635 15:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:55.176 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:55.176 00:38:55.176 real 0m13.204s 00:38:55.176 user 0m10.728s 00:38:55.176 sys 0m6.777s 00:38:55.176 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:55.176 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:55.176 ************************************ 00:38:55.176 END TEST nvmf_abort 00:38:55.176 ************************************ 00:38:55.176 15:56:34 
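The abort example's summary above (I/O completed: 123, failed: 28921; abort submitted 28978, failed to submit 66; success 28921, unsuccessful 57) is internally consistent. A quick sanity check of the counters — the relationships below are one plausible reading of how the example app tallies I/O versus abort outcomes, not an SPDK-documented formula:

```shell
#!/bin/sh
# Cross-check the abort run's counters as printed in this log.
io_completed=123        # I/Os that finished normally
io_failed=28921         # I/Os terminated by a successful abort
abort_submitted=28978
abort_failed_submit=66
abort_success=28921
abort_unsuccessful=57

# Every I/O drew exactly one abort attempt, whether or not it was submitted.
[ $((io_completed + io_failed)) -eq $((abort_submitted + abort_failed_submit)) ] || exit 1
# Submitted aborts split into successful and unsuccessful ones.
[ $((abort_success + abort_unsuccessful)) -eq "$abort_submitted" ] || exit 1
# Each successful abort is what makes one I/O report as "failed".
[ "$abort_success" -eq "$io_failed" ] || exit 1
# Normally-completed I/Os match aborts that lost the race or never went out.
[ $((abort_unsuccessful + abort_failed_submit)) -eq "$io_completed" ] || exit 1
echo "abort accounting consistent"
```

Under this reading, the 123 I/Os that completed normally are exactly the ones whose abort either raced and lost (57) or could not be submitted at all (66).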
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:38:55.176 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:55.176 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:55.176 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:55.176 ************************************ 00:38:55.176 START TEST nvmf_ns_hotplug_stress 00:38:55.176 ************************************ 00:38:55.176 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:38:55.176 * Looking for test storage... 
00:38:55.176 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:55.176 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:55.176 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:38:55.176 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:55.176 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:55.176 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:55.176 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:55.176 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:55.176 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:38:55.176 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:38:55.176 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:38:55.176 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:38:55.176 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:38:55.176 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:38:55.176 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:38:55.176 15:56:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:55.176 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:38:55.176 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:38:55.176 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:55.176 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:55.176 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:38:55.177 15:56:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:55.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:55.177 --rc genhtml_branch_coverage=1 00:38:55.177 --rc genhtml_function_coverage=1 00:38:55.177 --rc genhtml_legend=1 00:38:55.177 --rc geninfo_all_blocks=1 00:38:55.177 --rc geninfo_unexecuted_blocks=1 00:38:55.177 00:38:55.177 ' 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:55.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:55.177 --rc genhtml_branch_coverage=1 00:38:55.177 --rc genhtml_function_coverage=1 00:38:55.177 --rc genhtml_legend=1 00:38:55.177 --rc geninfo_all_blocks=1 00:38:55.177 --rc geninfo_unexecuted_blocks=1 00:38:55.177 00:38:55.177 ' 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:55.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:55.177 --rc genhtml_branch_coverage=1 00:38:55.177 --rc genhtml_function_coverage=1 00:38:55.177 --rc genhtml_legend=1 00:38:55.177 --rc geninfo_all_blocks=1 00:38:55.177 --rc geninfo_unexecuted_blocks=1 00:38:55.177 00:38:55.177 ' 00:38:55.177 15:56:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:55.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:55.177 --rc genhtml_branch_coverage=1 00:38:55.177 --rc genhtml_function_coverage=1 00:38:55.177 --rc genhtml_legend=1 00:38:55.177 --rc geninfo_all_blocks=1 00:38:55.177 --rc geninfo_unexecuted_blocks=1 00:38:55.177 00:38:55.177 ' 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:55.177 15:56:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:55.177 
15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # prepare_net_devs 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@434 -- # local -g is_hw=no 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # remove_spdk_ns 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:38:55.177 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # 
gather_supported_nvmf_pci_devs 00:38:55.178 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:38:55.178 15:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:39:03.321 
15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:03.321 15:56:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:39:03.321 Found 0000:31:00.0 (0x8086 - 0x159b) 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:03.321 15:56:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:39:03.321 Found 0000:31:00.1 (0x8086 - 0x159b) 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:03.321 15:56:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:39:03.321 Found net devices under 0000:31:00.0: cvl_0_0 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:39:03.321 Found net devices under 0000:31:00.1: cvl_0_1 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 
00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # is_hw=yes 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p 
tcp --dport 4420 -j ACCEPT' 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:03.321 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:03.321 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.611 ms 00:39:03.321 00:39:03.321 --- 10.0.0.2 ping statistics --- 00:39:03.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:03.321 rtt min/avg/max/mdev = 0.611/0.611/0.611/0.000 ms 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:03.321 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:03.321 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:39:03.321 00:39:03.321 --- 10.0.0.1 ping statistics --- 00:39:03.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:03.321 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # return 0 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:03.321 15:56:41 
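The `nvmf_tcp_init` trace above moves the target-side port into its own network namespace, addresses both ends, opens the NVMe/TCP port, and verifies reachability in both directions. A minimal sketch of the same steps, assuming root privileges and the interface names and addresses taken from the log (`cvl_0_0`/`cvl_0_1`, 10.0.0.0/24); this is a reading aid, not the exact upstream `nvmf/common.sh`:

```shell
#!/usr/bin/env bash
# Sketch of the netns setup traced above (requires root; interface names and
# IPs are copied from the log, so treat them as environment-specific).
set -e
ip netns add cvl_0_0_ns_spdk                     # target side gets its own netns
ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # move one port into it
ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator IP, host side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Allow the NVMe/TCP discovery/IO port through the host firewall.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                               # host  -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 # target -> host
```

Running the target under `ip netns exec cvl_0_0_ns_spdk` (as the log's `NVMF_TARGET_NS_CMD` does) is what lets one physical machine act as both initiator and target over real NICs.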
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # nvmfpid=3422071 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # waitforlisten 3422071 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 3422071 ']' 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:03.321 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:39:03.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:03.322 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:03.322 15:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:39:03.322 [2024-10-01 15:56:41.921914] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:03.322 [2024-10-01 15:56:41.923042] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:39:03.322 [2024-10-01 15:56:41.923093] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:03.322 [2024-10-01 15:56:41.964691] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:39:03.322 [2024-10-01 15:56:42.012815] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:03.322 [2024-10-01 15:56:42.060462] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:03.322 [2024-10-01 15:56:42.060512] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:03.322 [2024-10-01 15:56:42.060521] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:03.322 [2024-10-01 15:56:42.060528] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:03.322 [2024-10-01 15:56:42.060534] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:39:03.322 [2024-10-01 15:56:42.060693] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:39:03.322 [2024-10-01 15:56:42.060848] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:39:03.322 [2024-10-01 15:56:42.060849] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:39:03.322 [2024-10-01 15:56:42.131336] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:03.322 [2024-10-01 15:56:42.132322] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:03.322 [2024-10-01 15:56:42.133016] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:03.322 [2024-10-01 15:56:42.133131] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:03.322 15:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:03.322 15:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:39:03.322 15:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:39:03.322 15:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:03.322 15:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:39:03.581 15:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:03.581 15:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:39:03.581 15:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:03.581 [2024-10-01 15:56:42.941814] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:03.581 15:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:39:03.841 15:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:03.841 [2024-10-01 15:56:43.294353] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:04.101 15:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:04.101 15:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:39:04.360 Malloc0 00:39:04.360 15:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:39:04.360 Delay0 00:39:04.360 15:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:04.620 15:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:39:04.880 NULL1 00:39:04.880 15:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:39:04.880 15:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3422533 00:39:04.880 15:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3422533 00:39:04.880 15:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:39:04.880 15:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:05.140 15:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:05.400 15:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:39:05.400 15:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001
00:39:05.400 true
00:39:05.661 15:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3422533
00:39:05.661 15:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:05.661 15:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:05.982 15:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002
00:39:05.982 15:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002
00:39:06.352 true
00:39:06.352 15:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3422533
00:39:06.352 15:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:06.352 15:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:06.634 15:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003
00:39:06.634 15:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003
00:39:06.634 true
00:39:06.634 15:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3422533
00:39:06.634 15:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:06.893 15:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:07.154 15:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004
00:39:07.154 15:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004
00:39:07.154 true
00:39:07.414 15:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3422533
00:39:07.414 15:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:07.414 15:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:07.674 15:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005
00:39:07.674 15:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005
00:39:07.935 true
00:39:07.935 15:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3422533
00:39:07.935 15:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:07.935 15:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:08.196 15:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006
00:39:08.196 15:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006
00:39:08.456 true
00:39:08.456 15:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3422533
00:39:08.456 15:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:08.716 15:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:08.716 15:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007
00:39:08.716 15:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007
00:39:08.977 true
00:39:08.977 15:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3422533
00:39:08.977 15:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:09.237 15:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:09.237 15:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008
00:39:09.237 15:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008
00:39:09.499 true
00:39:09.499 15:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3422533
00:39:09.499 15:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:09.759 15:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:10.021 15:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009
00:39:10.021 15:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009
00:39:10.021 true
00:39:10.021 15:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3422533
00:39:10.021 15:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:10.281 15:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:10.541 15:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010
00:39:10.541 15:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010
00:39:10.541 true
00:39:10.541 15:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3422533
00:39:10.541 15:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:10.801 15:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:11.062 15:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011
00:39:11.062 15:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011
00:39:11.062 true
00:39:11.322 15:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3422533
00:39:11.322 15:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:11.322 15:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:11.582 15:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012
00:39:11.582 15:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012
00:39:11.582 true
00:39:11.842 15:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3422533
00:39:11.842 15:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:11.842 15:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:12.102 15:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013
00:39:12.102 15:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
00:39:12.361 true
00:39:12.361 15:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3422533
00:39:12.361 15:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:12.361 15:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:12.623 15:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014
00:39:12.623 15:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
00:39:12.884 true
00:39:12.884 15:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3422533
00:39:12.884 15:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:13.144 15:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:13.144 15:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015
00:39:13.144 15:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:39:13.406 true
00:39:13.406 15:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3422533
00:39:13.406 15:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:13.667 15:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:13.667 15:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:39:13.667 15:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:39:13.928 true
00:39:13.928 15:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3422533
00:39:13.928 15:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:14.188 15:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:14.449 15:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:39:14.449 15:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:39:14.449 true
00:39:14.449 15:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3422533
00:39:14.449 15:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:14.709 15:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:14.969 15:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018
00:39:14.969 15:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:39:14.969 true
00:39:14.969 15:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3422533
00:39:14.969 15:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:15.228 15:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:15.488 15:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:39:15.488 15:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:39:15.748 true
00:39:15.748 15:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3422533
00:39:15.748 15:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:15.748 15:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:16.008 15:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020
00:39:16.008 15:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:39:16.269 true
00:39:16.269 15:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3422533
00:39:16.269 15:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:16.269 15:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:16.530 15:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:39:16.530 15:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:39:16.791 true
00:39:16.791 15:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3422533
00:39:16.791 15:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:16.791 15:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:17.052 15:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022
00:39:17.052 15:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
00:39:17.312 true
00:39:17.312 15:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3422533
00:39:17.312 15:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:17.573 15:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:17.573 15:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:39:17.573 15:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:39:17.832 true
00:39:17.832 15:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3422533
00:39:17.832 15:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:18.093 15:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:18.093 15:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:39:18.093 15:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:39:18.354 true
00:39:18.354 15:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3422533
00:39:18.354 15:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:18.615 15:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:18.876 15:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:39:18.876 15:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:39:18.876 true
00:39:18.876 15:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3422533
00:39:18.876 15:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:19.136 15:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:19.396 15:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:39:19.396 15:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:39:19.396 true
00:39:19.396 15:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3422533
00:39:19.396 15:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:19.658 15:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:19.918 15:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:39:19.918 15:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:39:19.918 true
00:39:19.918 15:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3422533
00:39:19.918 15:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:20.178 15:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:20.439 15:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:39:20.439 15:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:39:20.439 true
00:39:20.700 15:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3422533
00:39:20.700 15:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:20.700 15:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:20.960 15:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:39:20.960 15:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:39:20.960 true
00:39:21.221 15:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3422533
00:39:21.221 15:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:21.221 15:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:21.481 15:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:39:21.481 15:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:39:21.742 true
00:39:21.742 15:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3422533
00:39:21.742 15:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:21.742 15:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:22.003 15:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031
00:39:22.003 15:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031
00:39:22.263 true
00:39:22.263 15:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3422533
00:39:22.263 15:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:22.264 15:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:22.524 15:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032
00:39:22.524 15:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032
00:39:22.784 true
00:39:22.784 15:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3422533
00:39:22.784 15:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:22.784 15:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:23.043 15:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033
00:39:23.043 15:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033
00:39:23.304 true
00:39:23.304 15:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3422533
00:39:23.304 15:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:23.565 15:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:23.565 15:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034
00:39:23.565 15:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034
00:39:23.828 true
00:39:23.828 15:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3422533
00:39:23.828 15:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:24.090 15:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:24.090 15:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035
00:39:24.090 15:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035
00:39:24.350 true
00:39:24.350 15:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3422533
00:39:24.350 15:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:24.610 15:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:24.870 15:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036
00:39:24.870 15:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036
00:39:24.870 true
00:39:24.870 15:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3422533
00:39:24.870 15:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:25.130 15:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:25.388 15:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037
00:39:25.388 15:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037
00:39:25.388 true
00:39:25.388 15:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3422533
00:39:25.388 15:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:25.648 15:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:25.908 15:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038
00:39:25.908 15:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038
00:39:26.169 true
00:39:26.169 15:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3422533
00:39:26.169 15:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:26.169 15:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:26.428 15:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039
00:39:26.428 15:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039
00:39:26.688 true
00:39:26.688 15:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3422533
00:39:26.688 15:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:26.947 15:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:26.947 15:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040
00:39:26.947 15:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040
00:39:27.207 true
00:39:27.207 15:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3422533
00:39:27.207 15:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:27.466 15:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:27.466 15:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041
00:39:27.466 15:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041
00:39:27.725 true
00:39:27.725 15:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3422533
00:39:27.725 15:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:27.984 15:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:28.243 15:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042
00:39:28.243 15:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042
00:39:28.243 true
00:39:28.243 15:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 3422533 00:39:28.244 15:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:28.503 15:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:28.763 15:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:39:28.764 15:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:39:28.764 true 00:39:28.764 15:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3422533 00:39:28.764 15:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:29.024 15:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:29.284 15:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:39:29.284 15:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:39:29.284 true 00:39:29.284 15:57:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3422533 00:39:29.284 15:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:29.545 15:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:29.805 15:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:39:29.805 15:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:39:29.805 true 00:39:29.805 15:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3422533 00:39:29.805 15:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:30.065 15:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:30.326 15:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:39:30.326 15:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:39:30.326 true 
00:39:30.326 15:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3422533 00:39:30.326 15:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:30.586 15:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:30.847 15:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:39:30.847 15:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:39:30.847 true 00:39:30.847 15:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3422533 00:39:30.847 15:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:31.107 15:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:31.368 15:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:39:31.368 15:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 
00:39:31.368 true 00:39:31.628 15:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3422533 00:39:31.628 15:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:31.628 15:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:31.888 15:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:39:31.888 15:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:39:31.888 true 00:39:32.148 15:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3422533 00:39:32.148 15:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:32.148 15:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:32.409 15:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:39:32.409 15:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1050 00:39:32.670 true 00:39:32.670 15:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3422533 00:39:32.670 15:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:32.670 15:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:32.930 15:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:39:32.930 15:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:39:33.190 true 00:39:33.190 15:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3422533 00:39:33.190 15:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:33.190 15:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:33.450 15:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:39:33.450 15:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:39:33.710 true 00:39:33.710 15:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3422533 00:39:33.710 15:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:33.969 15:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:33.969 15:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:39:33.969 15:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:39:34.228 true 00:39:34.228 15:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3422533 00:39:34.228 15:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:34.488 15:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:34.488 15:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:39:34.488 15:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054
00:39:34.748 true
00:39:34.748 15:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3422533
00:39:34.748 15:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:35.009 15:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:35.009 15:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055
00:39:35.009 15:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055
00:39:35.269 true
00:39:35.269 15:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3422533
00:39:35.269 15:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:35.269 Initializing NVMe Controllers
00:39:35.269 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:39:35.269 Controller IO queue size 128, less than required.
00:39:35.269 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:39:35.269 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:39:35.269 Initialization complete. Launching workers.
00:39:35.269 ========================================================
00:39:35.269                                       Latency(us)
00:39:35.269 Device Information                                     :     IOPS    MiB/s    Average      min      max
00:39:35.269 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 30724.77    15.00    4165.92  1134.52  10944.63
00:39:35.269 ========================================================
00:39:35.269 Total                                                  : 30724.77    15.00    4165.92  1134.52  10944.63
00:39:35.269
00:39:35.529 15:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:35.811 15:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056
00:39:35.811 15:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056
00:39:35.811 true
00:39:35.811 15:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3422533
00:39:35.811 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3422533) - No such process
00:39:35.811 15:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3422533
00:39:35.811 15:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:36.072 15:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- #
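The repeating remove_ns/add_ns/bdev_null_resize pattern traced above is the namespace hot-plug loop of ns_hotplug_stress.sh (script lines @44-@50): it keeps cycling namespace 1 and growing the NULL1 bdev for as long as the background I/O job is alive, and the `kill: (3422533) - No such process` message is just the `kill -0` liveness probe noticing the job exited. A minimal self-contained sketch of that loop follows; the `rpc` function is a stub standing in for scripts/rpc.py (which drives the SPDK target over JSON-RPC), and the starting size and cap are illustrative, not the script's values:

```shell
#!/usr/bin/env bash
# Stub for scripts/rpc.py so the sketch runs standalone; the real script
# sends each of these as a JSON-RPC request to the SPDK nvmf target.
rpc() { :; }

perf_pid=$$      # in the real script: PID of the background I/O workload
null_size=1024   # illustrative starting size; the cap below is also illustrative

# Hot-plug namespace 1 and grow NULL1 while the I/O job is still running.
# `kill -0` sends no signal; it only tests whether the PID exists.
while kill -0 "$perf_pid" 2>/dev/null && (( null_size < 1030 )); do
    rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    (( ++null_size ))
    rpc bdev_null_resize NULL1 "$null_size"
done
echo "$null_size"
```

When the workload process disappears, `kill -0` fails with the "No such process" diagnostic seen in the log and the loop falls through to the `wait` at script line @53.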
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:36.331 15:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:39:36.331 15:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:39:36.331 15:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:39:36.331 15:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:36.331 15:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:39:36.331 null0 00:39:36.331 15:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:36.331 15:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:36.331 15:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:39:36.590 null1 00:39:36.590 15:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:36.590 15:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:36.590 15:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:39:36.590 null2 00:39:36.849 15:57:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:36.849 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:36.849 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:39:36.849 null3 00:39:36.849 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:36.849 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:36.849 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:39:37.110 null4 00:39:37.110 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:37.110 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:37.110 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:39:37.110 null5 00:39:37.110 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:37.110 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:37.110 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:39:37.369 null6 00:39:37.369 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:37.369 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:37.369 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:39:37.631 null7 00:39:37.631 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:37.631 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:37.631 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:39:37.631 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:37.631 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:39:37.631 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:39:37.631 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:39:37.631 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:37.631 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:37.631 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:37.631 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:37.631 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:37.631 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:39:37.631 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:37.631 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:37.631 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:39:37.631 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:39:37.631 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:37.631 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:37.631 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:39:37.631 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:37.631 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:37.631 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:37.631 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:39:37.631 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:39:37.631 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:37.631 15:57:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:37.631 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:39:37.631 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:37.631 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:37.631 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:37.631 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:39:37.631 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:39:37.631 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:37.631 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:37.631 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:39:37.631 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:37.631 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:37.631 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:37.631 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:39:37.631 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:39:37.631 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:37.631 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:37.631 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:39:37.631 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:37.631 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:37.631 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:37.631 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:39:37.631 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:39:37.631 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:37.631 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:37.631 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:39:37.632 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:37.632 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:37.632 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:37.632 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:39:37.632 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:39:37.632 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:39:37.632 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:37.632 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:37.632 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:37.632 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:37.632 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3429291 3429293 3429294 3429296 3429298 3429300 3429302 3429304 00:39:37.632 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:37.632 15:57:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:39:37.632 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:39:37.632 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:37.632 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:37.632 15:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:37.632 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:37.632 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:37.891 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:37.891 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:37.891 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:39:37.891 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:37.891 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:37.891 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:37.891 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:37.891 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:37.891 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:37.891 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:37.891 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:37.891 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:37.891 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:37.891 15:57:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:37.891 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:37.891 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:37.891 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:37.891 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:37.891 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:37.892 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:37.892 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:37.892 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:37.892 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:37.892 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:37.892 15:57:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:37.892 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:37.892 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:37.892 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:37.892 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:37.892 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:38.152 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:38.152 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:38.152 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:38.152 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 
00:39:38.152 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:38.152 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:38.152 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:38.152 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:38.152 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:38.152 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:38.152 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:38.152 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:38.152 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:38.152 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:39:38.411 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:38.411 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:38.412 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:38.412 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:38.412 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:38.412 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:38.412 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:38.412 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:38.412 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:38.412 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:38.412 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:38.412 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:38.412 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:38.412 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:38.412 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:38.412 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:38.412 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:38.412 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:38.412 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:38.412 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:38.412 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:38.412 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:38.412 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:38.412 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:38.412 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:38.412 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:38.412 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:38.672 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:38.672 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:38.672 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:38.672 15:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:38.672 15:57:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:38.672 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:38.672 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:38.672 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:38.672 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:38.672 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:38.672 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:38.672 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:38.672 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:38.672 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:38.672 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:38.672 15:57:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:38.672 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:38.672 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:38.672 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:38.672 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:38.672 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:38.672 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:38.672 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:38.672 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:38.672 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:38.932 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:38.932 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:38.932 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:38.932 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:38.932 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:38.932 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:38.932 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:38.932 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:38.932 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:38.932 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:38.932 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:38.932 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:38.932 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:38.932 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:38.932 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:39.193 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.193 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.193 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:39.193 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.193 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.193 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 
null2 00:39:39.193 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:39.193 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.193 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.193 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.193 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.193 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:39.193 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:39.193 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:39.193 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.193 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.193 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:39.193 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:39.193 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:39.193 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:39.193 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:39.193 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.193 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.193 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:39.193 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.193 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.193 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:39.455 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:39.455 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:39.455 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.455 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.455 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:39.455 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.455 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.455 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:39.455 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.455 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.455 15:57:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:39.455 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:39.455 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:39.455 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.455 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.455 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:39.455 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.455 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.455 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:39.455 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.455 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.455 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:39.455 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:39.714 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:39.715 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.715 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.715 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:39.715 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.715 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.715 15:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:39.715 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:39.715 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:39.715 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:39.715 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:39.715 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.715 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.715 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:39.715 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.715 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.715 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:39.975 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:39.976 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.976 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.976 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:39.976 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:39.976 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.976 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.976 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:39.976 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.976 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.976 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:39.976 15:57:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.976 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.976 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:39.976 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:39.976 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:39.976 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.976 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.976 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:39.976 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:39.976 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.976 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.976 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:39.976 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:39.976 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:39.976 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:39.976 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.976 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.976 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:40.237 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:40.237 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:40.237 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:40.237 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:40.237 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:40.237 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:40.237 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:40.237 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:40.237 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:40.237 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:40.237 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:40.237 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:40.237 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:40.237 15:57:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:40.237 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:40.237 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:40.237 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:40.237 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:40.237 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:40.496 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:40.496 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:40.496 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:40.496 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:40.496 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:40.496 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:40.496 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:40.496 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:40.496 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:40.496 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:40.496 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:40.496 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:40.496 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:40.496 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:40.496 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:40.496 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:40.496 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:40.496 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:40.496 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:40.497 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:40.497 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:40.756 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:40.756 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:40.756 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:40.756 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:40.756 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:40.756 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:40.756 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:40.756 15:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:40.756 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:40.756 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:40.756 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:40.756 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:40.756 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:40.756 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:40.756 15:57:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:40.756 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:40.756 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:40.756 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:40.756 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:40.756 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:40.756 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:40.756 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:40.756 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:40.756 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:40.756 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
5 nqn.2016-06.io.spdk:cnode1 null4 00:39:40.756 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:41.016 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:41.016 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:41.016 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:41.016 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.016 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.016 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:41.016 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:41.016 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.016 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.016 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:41.016 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:41.016 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.017 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.017 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:41.017 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.017 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.017 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:41.017 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.017 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.017 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:39:41.017 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.277 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.277 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.277 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:41.277 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:41.277 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.277 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.277 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:41.277 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:41.277 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.277 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.277 15:57:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.277 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.538 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.538 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.538 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.538 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.538 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:39:41.538 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:39:41.538 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # nvmfcleanup 00:39:41.538 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:39:41.538 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:41.538 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:39:41.538 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:41.538 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:41.539 rmmod nvme_tcp 00:39:41.539 rmmod nvme_fabrics 00:39:41.539 rmmod nvme_keyring 00:39:41.539 15:57:20 
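The interleaved add/remove RPC calls above come from a tight loop at lines 16-18 of ns_hotplug_stress.sh: ten rounds of attaching namespaces backed by null bdevs while concurrently detaching them. A minimal sketch of that loop, with a stub `rpc` function standing in for scripts/rpc.py so it runs without a live SPDK target (the stub and the exact backgrounding are assumptions, not the script's verbatim contents):

```shell
#!/usr/bin/env bash
# Sketch of the hotplug stress loop seen in the log (ns_hotplug_stress.sh@16-18).
# "rpc" is a hypothetical stand-in for scripts/rpc.py so the sketch is runnable.
rpc() { echo "rpc $*"; }

NQN=nqn.2016-06.io.spdk:cnode1
i=0
while ((i < 10)); do
    # attach namespaces 1..8, each backed by a null bdev, in the background
    for n in {1..8}; do
        rpc nvmf_subsystem_add_ns -n "$n" "$NQN" "null$((n - 1))" &
    done
    # concurrently detach the same namespace IDs, racing against the adds
    for n in {1..8}; do
        rpc nvmf_subsystem_remove_ns "$NQN" "$n" &
    done
    wait
    ((++i))
done
echo "completed $i rounds"
```

Because the adds and removes run in parallel, each round's log lines interleave in a different order, which is exactly the pattern visible in the output above.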
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:41.539 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:39:41.539 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:39:41.539 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@513 -- # '[' -n 3422071 ']' 00:39:41.539 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # killprocess 3422071 00:39:41.539 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 3422071 ']' 00:39:41.539 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 3422071 00:39:41.539 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:39:41.539 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:41.539 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3422071 00:39:41.539 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:39:41.539 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:39:41.539 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3422071' 00:39:41.539 killing process with pid 3422071 00:39:41.539 15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 3422071 00:39:41.539 
15:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 3422071 00:39:41.800 15:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:39:41.800 15:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:39:41.800 15:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:39:41.800 15:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:39:41.800 15:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-save 00:39:41.800 15:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:39:41.800 15:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-restore 00:39:41.800 15:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:41.800 15:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:41.800 15:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:41.800 15:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:41.800 15:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:43.713 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:43.713 00:39:43.713 real 0m48.925s 00:39:43.713 user 3m3.213s 00:39:43.713 sys 
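The teardown above runs through `killprocess` (autotest_common.sh@950-974): verify the pid argument is set and the process is alive, check via `ps --no-headers -o comm=` that it is not a `sudo` wrapper, then kill it and wait for it to exit. A simplified, hedged re-sketch of that pattern (not the exact SPDK implementation; the sudo handling is abbreviated):

```shell
#!/usr/bin/env bash
# Simplified sketch of the killprocess helper visible in the log
# (autotest_common.sh@950-974); error paths are abbreviated.
killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" 2>/dev/null || return 1            # process must exist
    if [[ $(uname) == Linux ]]; then
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [[ $name != sudo ]] || return 1               # never kill a sudo wrapper
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                   # reap it if it is our child
}
```

The trailing `wait` in the log (common/autotest_common.sh@974) is what guarantees the nvmf target has fully exited before the next test section starts.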
0m23.290s 00:39:43.713 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:43.713 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:39:43.713 ************************************ 00:39:43.713 END TEST nvmf_ns_hotplug_stress 00:39:43.713 ************************************ 00:39:43.713 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:39:43.713 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:39:43.713 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:43.713 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:43.975 ************************************ 00:39:43.975 START TEST nvmf_delete_subsystem 00:39:43.975 ************************************ 00:39:43.975 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:39:43.975 * Looking for test storage... 
00:39:43.975 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:43.975 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:39:43.975 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version 00:39:43.975 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:39:43.975 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:39:43.975 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:43.975 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:43.975 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:43.975 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:39:43.975 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:39:43.975 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:39:43.975 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:39:43.975 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:39:43.975 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:39:43.975 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:39:43.975 15:57:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:43.975 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:39:43.975 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:39:43.975 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:43.975 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:43.975 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:39:43.975 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:39:43.975 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:43.975 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:39:43.975 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:39:43.975 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:39:43.975 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:39:43.975 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:43.975 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:39:43.975 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:39:43.975 15:57:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:43.975 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:43.975 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:39:43.975 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:43.975 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:39:43.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:43.975 --rc genhtml_branch_coverage=1 00:39:43.975 --rc genhtml_function_coverage=1 00:39:43.975 --rc genhtml_legend=1 00:39:43.975 --rc geninfo_all_blocks=1 00:39:43.975 --rc geninfo_unexecuted_blocks=1 00:39:43.975 00:39:43.975 ' 00:39:43.975 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:39:43.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:43.975 --rc genhtml_branch_coverage=1 00:39:43.975 --rc genhtml_function_coverage=1 00:39:43.975 --rc genhtml_legend=1 00:39:43.975 --rc geninfo_all_blocks=1 00:39:43.975 --rc geninfo_unexecuted_blocks=1 00:39:43.975 00:39:43.975 ' 00:39:43.975 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:39:43.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:43.975 --rc genhtml_branch_coverage=1 00:39:43.975 --rc genhtml_function_coverage=1 00:39:43.975 --rc genhtml_legend=1 00:39:43.975 --rc geninfo_all_blocks=1 00:39:43.975 --rc geninfo_unexecuted_blocks=1 00:39:43.975 00:39:43.975 ' 00:39:43.975 15:57:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:39:43.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:43.975 --rc genhtml_branch_coverage=1 00:39:43.976 --rc genhtml_function_coverage=1 00:39:43.976 --rc genhtml_legend=1 00:39:43.976 --rc geninfo_all_blocks=1 00:39:43.976 --rc geninfo_unexecuted_blocks=1 00:39:43.976 00:39:43.976 ' 00:39:43.976 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:43.976 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:39:43.976 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:43.976 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:43.976 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:43.976 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:43.976 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:43.976 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:43.976 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:43.976 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:43.976 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:43.976 15:57:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:43.976 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:43.976 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:43.976 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:43.976 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:43.976 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:43.976 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:43.976 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:43.976 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:39:43.976 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:43.976 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:43.976 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:43.976 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:43.976 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:43.976 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:43.976 
15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:39:43.976 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:43.976 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:39:43.976 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:43.976 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:43.976 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:43.976 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:43.976 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:43.976 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:43.976 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:43.976 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:43.976 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:43.976 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:44.237 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:39:44.237 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:39:44.237 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:44.237 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # prepare_net_devs 00:39:44.237 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@434 -- # local -g is_hw=no 00:39:44.237 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # remove_spdk_ns 00:39:44.237 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:44.237 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:44.237 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:44.237 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:39:44.237 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:39:44.237 15:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:39:44.237 15:57:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:52.377 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:52.377 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@345 -- # [[ 
tcp == rdma ]] 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:39:52.378 Found 0000:31:00.0 (0x8086 - 0x159b) 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:39:52.378 Found 0000:31:00.1 (0x8086 - 0x159b) 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:52.378 15:57:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:39:52.378 Found net devices under 0000:31:00.0: cvl_0_0 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:39:52.378 Found net devices under 0000:31:00.1: cvl_0_1 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # is_hw=yes 00:39:52.378 15:57:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:52.378 15:57:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:39:52.378 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:39:52.379 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:39:52.379 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:39:52.379 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:39:52.379 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:39:52.379 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:39:52.379 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:39:52.379 15:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:39:52.379 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:39:52.379 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.669 ms
00:39:52.379
00:39:52.379 --- 10.0.0.2 ping statistics ---
00:39:52.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:39:52.379 rtt min/avg/max/mdev = 0.669/0.669/0.669/0.000 ms
00:39:52.379 15:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:39:52.379 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:39:52.379 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms
00:39:52.379
00:39:52.379 --- 10.0.0.1 ping statistics ---
00:39:52.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:39:52.379 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms
00:39:52.379 15:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:39:52.379 15:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # return 0
00:39:52.379 15:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:39:52.379 15:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:39:52.379 15:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:39:52.379 15:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:39:52.379 15:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:39:52.379 15:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:39:52.379 15:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:39:52.379 15:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
00:39:52.379 15:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:39:52.379 15:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable
00:39:52.379 15:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:39:52.379 15:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # nvmfpid=3434515
00:39:52.379 15:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # waitforlisten 3434515
00:39:52.379 15:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3
00:39:52.379 15:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 3434515 ']'
00:39:52.379 15:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:39:52.379 15:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100
00:39:52.379 15:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:39:52.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
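The namespace plumbing logged by nvmf/common.sh above (@271 through @287) can be sketched as a dry-run helper. The interface and namespace names (cvl_0_0, cvl_0_1, cvl_0_0_ns_spdk) and addresses come from the log; the `emit` print-only wrapper is an illustrative assumption so the sketch runs without root and is not part of the SPDK scripts.

```shell
#!/usr/bin/env sh
# Dry-run sketch of the veth/netns topology set up in nvmf/common.sh.
# `emit` only prints each command (assumption for illustration, no root needed).
emit() { printf '%s\n' "$*"; }

NS=cvl_0_0_ns_spdk   # network namespace holding the NVMe-oF target side
TGT_IF=cvl_0_0       # interface moved into the namespace (target, 10.0.0.2)
INI_IF=cvl_0_1       # interface left in the default namespace (initiator, 10.0.0.1)

emit ip netns add "$NS"
emit ip link set "$TGT_IF" netns "$NS"
emit ip addr add 10.0.0.1/24 dev "$INI_IF"
emit ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
emit ip link set "$INI_IF" up
emit ip netns exec "$NS" ip link set "$TGT_IF" up
emit ip netns exec "$NS" ip link set lo up
# open the NVMe/TCP port on the initiator-side interface
emit iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
```

The two `ping -c 1` probes in the log are the sanity check that the 10.0.0.0/24 link between the namespaces is actually up before the target starts.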
00:39:52.379 15:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable
00:39:52.379 15:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:39:52.379 [2024-10-01 15:57:31.125168] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:39:52.379 [2024-10-01 15:57:31.126215] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization...
00:39:52.379 [2024-10-01 15:57:31.126256] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:39:52.379 [2024-10-01 15:57:31.163428] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:39:52.379 [2024-10-01 15:57:31.211720] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:39:52.379 [2024-10-01 15:57:31.243100] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:39:52.379 [2024-10-01 15:57:31.243135] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:39:52.379 [2024-10-01 15:57:31.243149] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:39:52.379 [2024-10-01 15:57:31.243158] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:39:52.379 [2024-10-01 15:57:31.243166] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:39:52.379 [2024-10-01 15:57:31.243312] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:39:52.379 [2024-10-01 15:57:31.243315] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:39:52.379 [2024-10-01 15:57:31.291443] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:39:52.379 [2024-10-01 15:57:31.292051] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:39:52.379 [2024-10-01 15:57:31.292365] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:39:52.640 15:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:39:52.640 15:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0
00:39:52.640 15:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt
00:39:52.640 15:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable
00:39:52.640 15:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:39:52.640 15:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:39:52.640 15:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:39:52.640 15:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:52.640 15:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:39:52.641 [2024-10-01 15:57:31.960230] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:39:52.641 15:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:52.641 15:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:39:52.641 15:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:52.641 15:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:39:52.641 15:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:52.641 15:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:39:52.641 15:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:52.641 15:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:39:52.641 [2024-10-01 15:57:32.000786] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:39:52.641 15:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:52.641 15:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:39:52.641 15:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:52.641 15:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:39:52.641 NULL1
00:39:52.641 15:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:52.641 15:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:39:52.641 15:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:52.641 15:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:39:52.641 Delay0
00:39:52.641 15:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:52.641 15:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:52.641 15:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:52.641 15:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:39:52.641 15:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:52.641 15:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3434610
00:39:52.641 15:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2
00:39:52.641 15:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4
00:39:52.901 [2024-10-01 15:57:32.107821] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:39:54.816 15:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:39:54.816 15:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:54.816 15:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:39:55.077 Read completed with error (sct=0, sc=8)
00:39:55.077 starting I/O failed: -6
00:39:55.077 Read completed with error (sct=0, sc=8)
00:39:55.077 Write completed with error (sct=0, sc=8)
00:39:55.077 [... repeated 'Read/Write completed with error (sct=0, sc=8)' and 'starting I/O failed: -6' lines omitted ...]
00:39:55.078 [2024-10-01 15:57:34.312215] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10de0f0 is same with the state(6) to be set
00:39:55.078 [... repeated 'Read/Write completed with error (sct=0, sc=8)' and 'starting I/O failed: -6' lines omitted ...]
00:39:56.022 [2024-10-01 15:57:35.294518] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e2e20 is same with the state(6) to be set
00:39:56.022 [... repeated 'Read/Write completed with error (sct=0, sc=8)' lines omitted ...]
00:39:56.022 [2024-10-01 15:57:35.315907] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ddf10 is same with the state(6) to be set
00:39:56.022 [... repeated 'Read/Write completed with error (sct=0, sc=8)' lines omitted ...]
00:39:56.022 [2024-10-01 15:57:35.316420] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10de2d0 is same with the state(6) to be set
00:39:56.022 [... repeated 'Read/Write completed with error (sct=0, sc=8)' lines omitted ...]
00:39:56.022 [2024-10-01 15:57:35.318157] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fcc9000d640 is same with the state(6) to be set
00:39:56.022 [... repeated 'Read/Write completed with error (sct=0, sc=8)' lines omitted ...]
00:39:56.022 [2024-10-01 15:57:35.318652] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fcc9000cfe0 is same with the state(6) to be set
00:39:56.022 Initializing NVMe Controllers
00:39:56.022 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:39:56.022 Controller IO queue size 128, less than required.
00:39:56.022 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:39:56.022 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:39:56.022 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:39:56.022 Initialization complete. Launching workers.
00:39:56.022 ========================================================
00:39:56.022                                                                                                      Latency(us)
00:39:56.022 Device Information                                                             : IOPS       MiB/s    Average      min          max
00:39:56.022 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:      173.17     0.08     888833.38    346.22     1008173.69
00:39:56.022 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:      164.71     0.08     908027.44    305.47     1010482.14
00:39:56.022 ========================================================
00:39:56.022 Total                                                                        :  337.87     0.16     898190.13    305.47     1010482.14
00:39:56.022
00:39:56.022 [2024-10-01 15:57:35.319083] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10e2e20 (9): Bad file descriptor
00:39:56.022 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:39:56.022 15:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:56.022 15:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:39:56.022 15:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3434610
00:39:56.022 15:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:39:56.593 15:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:39:56.593 15:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3434610
00:39:56.594 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3434610) - No such process
00:39:56.594 15:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3434610
00:39:56.594 15:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:39:56.594 15:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3434610
00:39:56.594 15:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait
00:39:56.594 15:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:39:56.594 15:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait
00:39:56.594 15:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:39:56.594 15:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 3434610
00:39:56.594 15:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1
00:39:56.594 15:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:39:56.594 15:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:39:56.594 15:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:39:56.594 15:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:39:56.594 15:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:56.594 15:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:39:56.594 15:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:56.594 15:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:39:56.594 15:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:56.594 15:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:39:56.594 [2024-10-01 15:57:35.852747] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:39:56.594 15:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:56.594 15:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:56.594 15:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:56.594 15:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:39:56.594 15:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:56.594 15:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3435378
00:39:56.594 15:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
00:39:56.594 15:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:39:56.594 15:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3435378
00:39:56.594 15:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:39:56.594 [2024-10-01 15:57:35.940762] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:39:57.164 15:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:39:57.164 15:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3435378
00:39:57.164 15:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:39:57.737 15:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:39:57.737 15:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3435378
00:39:57.737 15:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:39:57.998 15:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:39:57.998 15:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3435378
00:39:57.998 15:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:39:58.570 15:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 --
# (( delay++ > 20 )) 00:39:58.570 15:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3435378 00:39:58.570 15:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:59.140 15:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:59.140 15:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3435378 00:39:59.140 15:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:59.717 15:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:59.717 15:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3435378 00:39:59.717 15:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:59.717 Initializing NVMe Controllers 00:39:59.717 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:59.717 Controller IO queue size 128, less than required. 00:39:59.717 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:59.717 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:39:59.717 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:39:59.717 Initialization complete. Launching workers. 
00:39:59.717 ======================================================== 00:39:59.717 Latency(us) 00:39:59.717 Device Information : IOPS MiB/s Average min max 00:39:59.717 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002177.58 1000139.23 1006191.08 00:39:59.717 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004907.66 1000362.55 1043928.89 00:39:59.717 ======================================================== 00:39:59.718 Total : 256.00 0.12 1003542.62 1000139.23 1043928.89 00:39:59.718 00:39:59.979 15:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:59.979 15:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3435378 00:39:59.979 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3435378) - No such process 00:39:59.979 15:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3435378 00:39:59.979 15:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:39:59.979 15:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:39:59.979 15:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # nvmfcleanup 00:39:59.979 15:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:39:59.979 15:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:59.979 15:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:39:59.979 15:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:39:59.979 15:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:59.979 rmmod nvme_tcp 00:40:00.239 rmmod nvme_fabrics 00:40:00.239 rmmod nvme_keyring 00:40:00.239 15:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:00.239 15:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:40:00.239 15:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:40:00.239 15:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@513 -- # '[' -n 3434515 ']' 00:40:00.239 15:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # killprocess 3434515 00:40:00.239 15:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 3434515 ']' 00:40:00.239 15:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 3434515 00:40:00.239 15:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:40:00.239 15:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:00.239 15:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3434515 00:40:00.239 15:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:00.239 15:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:00.239 15:57:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3434515' 00:40:00.239 killing process with pid 3434515 00:40:00.239 15:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 3434515 00:40:00.239 15:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 3434515 00:40:00.239 15:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:40:00.239 15:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:40:00.239 15:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:40:00.239 15:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:40:00.239 15:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-save 00:40:00.239 15:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:40:00.239 15:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-restore 00:40:00.239 15:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:00.239 15:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:00.239 15:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:00.239 15:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:00.239 15:57:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:02.785 15:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:02.785 00:40:02.785 real 0m18.538s 00:40:02.785 user 0m26.560s 00:40:02.785 sys 0m7.779s 00:40:02.785 15:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:02.785 15:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:02.785 ************************************ 00:40:02.785 END TEST nvmf_delete_subsystem 00:40:02.785 ************************************ 00:40:02.785 15:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:40:02.785 15:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:40:02.785 15:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:02.785 15:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:02.785 ************************************ 00:40:02.785 START TEST nvmf_host_management 00:40:02.785 ************************************ 00:40:02.785 15:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:40:02.785 * Looking for test storage... 
00:40:02.785 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:02.785 15:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:40:02.785 15:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:40:02.785 15:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:40:02.785 15:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:40:02.785 15:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:02.785 15:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:02.785 15:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:02.785 15:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:40:02.785 15:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:40:02.785 15:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:40:02.785 15:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:40:02.785 15:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:40:02.785 15:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:40:02.785 15:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:40:02.785 15:57:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:02.785 15:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:40:02.785 15:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:40:02.785 15:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:02.785 15:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:02.785 15:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:40:02.785 15:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:40:02.785 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:02.785 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:40:02.785 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:40:02.785 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:40:02.785 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:40:02.785 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:02.785 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:40:02.785 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:40:02.785 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:02.785 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:02.785 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:40:02.785 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:02.785 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:40:02.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:02.785 --rc genhtml_branch_coverage=1 00:40:02.785 --rc genhtml_function_coverage=1 00:40:02.785 --rc genhtml_legend=1 00:40:02.785 --rc geninfo_all_blocks=1 00:40:02.785 --rc geninfo_unexecuted_blocks=1 00:40:02.785 00:40:02.785 ' 00:40:02.785 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:40:02.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:02.785 --rc genhtml_branch_coverage=1 00:40:02.785 --rc genhtml_function_coverage=1 00:40:02.785 --rc genhtml_legend=1 00:40:02.785 --rc geninfo_all_blocks=1 00:40:02.785 --rc geninfo_unexecuted_blocks=1 00:40:02.785 00:40:02.785 ' 00:40:02.785 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:40:02.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:02.785 --rc genhtml_branch_coverage=1 00:40:02.785 --rc genhtml_function_coverage=1 00:40:02.785 --rc genhtml_legend=1 00:40:02.785 --rc geninfo_all_blocks=1 00:40:02.785 --rc geninfo_unexecuted_blocks=1 00:40:02.785 00:40:02.785 ' 00:40:02.785 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:40:02.785 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:02.785 --rc genhtml_branch_coverage=1 00:40:02.785 --rc genhtml_function_coverage=1 00:40:02.785 --rc genhtml_legend=1 00:40:02.785 --rc geninfo_all_blocks=1 00:40:02.785 --rc geninfo_unexecuted_blocks=1 00:40:02.785 00:40:02.785 ' 00:40:02.785 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:02.785 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:40:02.785 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:02.785 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:02.785 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:02.785 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:02.785 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:02.785 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:02.785 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:02.785 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:02.785 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:02.785 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:02.785 15:57:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:02.785 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:02.785 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:02.785 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:02.785 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:02.785 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:02.785 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:02.785 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:40:02.785 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:02.785 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:02.785 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:02.786 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:02.786 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:02.786 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:02.786 
15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:40:02.786 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:02.786 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:40:02.786 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:02.786 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:02.786 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:02.786 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:02.786 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:02.786 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:02.786 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:02.786 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:40:02.786 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:02.786 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:02.786 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:02.786 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:02.786 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:40:02.786 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:40:02.786 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:02.786 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@472 -- # prepare_net_devs 00:40:02.786 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@434 -- # local -g is_hw=no 00:40:02.786 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@436 -- # remove_spdk_ns 00:40:02.786 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:02.786 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:02.786 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:02.786 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:40:02.786 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:40:02.786 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:40:02.786 15:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:10.922 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:40:10.923 
15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:10.923 15:57:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:40:10.923 Found 0000:31:00.0 (0x8086 - 0x159b) 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:40:10.923 15:57:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:40:10.923 Found 0000:31:00.1 (0x8086 - 0x159b) 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ up == up ]] 00:40:10.923 15:57:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:40:10.923 Found net devices under 0000:31:00.0: cvl_0_0 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ up == up ]] 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:40:10.923 Found net devices under 0000:31:00.1: cvl_0_1 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:40:10.923 15:57:49 
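The device discovery traced above boils down to a sysfs glob per PCI address. Below is a minimal sketch of that lookup, assuming a `SYSFS_ROOT` knob (the real scripts hardcode `/sys`) so it can be exercised against a fake tree; the helper name is illustrative, not from the source.

```shell
#!/usr/bin/env bash
# Sketch of the per-PCI net-device lookup the trace performs: list the
# interface names under the device's sysfs "net" directory.
# SYSFS_ROOT is a hypothetical parameter; the autotest scripts use /sys.
pci_to_net_devs() {
    local pci=$1 d devs=()
    for d in "${SYSFS_ROOT:-/sys}/bus/pci/devices/$pci/net/"*; do
        [[ -e $d ]] && devs+=("${d##*/}")  # keep only the interface name
    done
    ((${#devs[@]})) && printf '%s\n' "${devs[@]}"
}
```

With the two E810 ports from the log, this would print `cvl_0_0` for `0000:31:00.0` and `cvl_0_1` for `0000:31:00.1`.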
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # is_hw=yes 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:10.923 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:10.924 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:10.924 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:10.924 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:10.924 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:10.924 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:10.924 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:10.924 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:10.924 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:10.924 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:10.924 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:10.924 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:10.924 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:10.924 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:10.924 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:10.924 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:10.924 
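The `nvmf_tcp_init` steps traced here amount to moving the target port into a private network namespace and wiring both ends on 10.0.0.0/24. A hedged recap, with a hypothetical `$run` dry-run prefix (pass `echo` to print instead of configure, since the real commands need root):

```shell
# Recap of the namespace plumbing visible in the trace: target port cvl_0_0
# goes into netns cvl_0_0_ns_spdk, both sides get /24 addresses, and the
# NVMe/TCP listener port 4420 is opened in the firewall.
setup_tcp_ns() {
    local run=${1:-echo} ns=cvl_0_0_ns_spdk
    $run ip netns add "$ns"
    $run ip link set cvl_0_0 netns "$ns"          # target side into the netns
    $run ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator address
    $run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
    $run ip link set cvl_0_1 up
    $run ip netns exec "$ns" ip link set cvl_0_0 up
    $run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    $run ping -c 1 10.0.0.2                       # reachability check
}
```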
15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:10.924 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:10.924 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.662 ms 00:40:10.924 00:40:10.924 --- 10.0.0.2 ping statistics --- 00:40:10.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:10.924 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:40:10.924 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:10.924 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:10.924 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:40:10.924 00:40:10.924 --- 10.0.0.1 ping statistics --- 00:40:10.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:10.924 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:40:10.924 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:10.924 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # return 0 00:40:10.924 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:40:10.924 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:10.924 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:40:10.924 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:40:10.924 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:10.924 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # 
'[' tcp == tcp ']' 00:40:10.924 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:40:10.924 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:40:10.924 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:40:10.924 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:40:10.924 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:40:10.924 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:10.924 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:10.924 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@505 -- # nvmfpid=3440274 00:40:10.924 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@506 -- # waitforlisten 3440274 00:40:10.924 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:40:10.924 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 3440274 ']' 00:40:10.924 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:10.924 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:10.924 15:57:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:10.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:10.924 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:10.924 15:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:10.924 [2024-10-01 15:57:49.595199] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:10.924 [2024-10-01 15:57:49.596354] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:40:10.924 [2024-10-01 15:57:49.596410] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:10.924 [2024-10-01 15:57:49.638621] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:40:10.924 [2024-10-01 15:57:49.686561] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:10.924 [2024-10-01 15:57:49.734986] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:10.924 [2024-10-01 15:57:49.735039] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:10.924 [2024-10-01 15:57:49.735047] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:10.924 [2024-10-01 15:57:49.735055] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:40:10.924 [2024-10-01 15:57:49.735061] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:10.924 [2024-10-01 15:57:49.735223] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:40:10.924 [2024-10-01 15:57:49.735382] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:40:10.924 [2024-10-01 15:57:49.735543] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:40:10.924 [2024-10-01 15:57:49.735543] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:40:10.924 [2024-10-01 15:57:49.806166] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:10.924 [2024-10-01 15:57:49.806622] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:10.924 [2024-10-01 15:57:49.807169] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:40:10.924 [2024-10-01 15:57:49.807686] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:10.924 [2024-10-01 15:57:49.807734] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:40:11.186 15:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:11.186 15:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:40:11.186 15:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:40:11.186 15:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:11.186 15:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:11.186 15:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:11.186 15:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:11.186 15:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:11.186 15:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:11.186 [2024-10-01 15:57:50.452531] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:11.186 15:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:11.186 15:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:40:11.186 15:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:11.186 15:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:11.186 15:57:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:40:11.186 15:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:40:11.186 15:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:40:11.186 15:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:11.186 15:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:11.186 Malloc0 00:40:11.186 [2024-10-01 15:57:50.540846] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:11.186 15:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:11.186 15:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:40:11.186 15:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:11.186 15:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:11.186 15:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3440495 00:40:11.186 15:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3440495 /var/tmp/bdevperf.sock 00:40:11.186 15:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 3440495 ']' 00:40:11.186 15:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:40:11.186 15:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:11.186 15:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:11.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:40:11.186 15:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:40:11.186 15:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:11.187 15:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:40:11.187 15:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:11.187 15:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:40:11.187 15:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:40:11.187 15:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:40:11.187 15:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:40:11.187 { 00:40:11.187 "params": { 00:40:11.187 "name": "Nvme$subsystem", 00:40:11.187 "trtype": "$TEST_TRANSPORT", 00:40:11.187 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:11.187 "adrfam": "ipv4", 00:40:11.187 "trsvcid": "$NVMF_PORT", 00:40:11.187 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:40:11.187 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:11.187 "hdgst": ${hdgst:-false}, 00:40:11.187 "ddgst": ${ddgst:-false} 00:40:11.187 }, 00:40:11.187 "method": "bdev_nvme_attach_controller" 00:40:11.187 } 00:40:11.187 EOF 00:40:11.187 )") 00:40:11.187 15:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:40:11.187 15:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:40:11.187 15:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:40:11.187 15:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:40:11.187 "params": { 00:40:11.187 "name": "Nvme0", 00:40:11.187 "trtype": "tcp", 00:40:11.187 "traddr": "10.0.0.2", 00:40:11.187 "adrfam": "ipv4", 00:40:11.187 "trsvcid": "4420", 00:40:11.187 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:11.187 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:11.187 "hdgst": false, 00:40:11.187 "ddgst": false 00:40:11.187 }, 00:40:11.187 "method": "bdev_nvme_attach_controller" 00:40:11.187 }' 00:40:11.447 [2024-10-01 15:57:50.650727] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:40:11.447 [2024-10-01 15:57:50.650799] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3440495 ] 00:40:11.447 [2024-10-01 15:57:50.685643] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
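The `gen_nvmf_target_json` output above follows a simple pattern: one heredoc per subsystem with `${hdgst:-false}`/`${ddgst:-false}` defaults, producing the `bdev_nvme_attach_controller` params that bdevperf reads via `--json /dev/fd/63`. A sketch of that pattern (addresses mirror the log; the helper name is illustrative):

```shell
# Emit the bdev_nvme_attach_controller JSON for one subsystem, applying
# the same ${var:-false} digest defaults seen in the trace.
gen_target_json() {
    local subsystem=${1:-0}
    cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}
```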
00:40:11.447 [2024-10-01 15:57:50.737694] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:11.447 [2024-10-01 15:57:50.785587] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:40:11.709 Running I/O for 10 seconds... 00:40:12.282 15:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:12.282 15:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:40:12.282 15:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:40:12.282 15:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:12.282 15:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:12.282 15:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:12.282 15:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:12.282 15:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:40:12.282 15:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:40:12.282 15:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:40:12.282 15:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:40:12.282 15:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- target/host_management.sh@53 -- # local i 00:40:12.282 15:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:40:12.282 15:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:40:12.282 15:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:40:12.282 15:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:12.282 15:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:12.282 15:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:40:12.282 15:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:12.282 15:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:40:12.282 15:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:40:12.282 15:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:40:12.282 15:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:40:12.282 15:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:40:12.282 15:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:40:12.282 15:57:51 
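The `waitforio` logic traced here polls the bdev's read-op counter until it crosses a threshold, giving up after a fixed number of tries. A sketch under stated assumptions: `get_read_ops` is a hypothetical stand-in for `rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops'`.

```shell
# Poll until at least $min read ops are observed, or $tries attempts elapse.
# get_read_ops is a placeholder for the bdev_get_iostat + jq pipeline.
waitforio() {
    local min=$1 tries=${2:-10} i count
    for ((i = tries; i != 0; i--)); do
        count=$(get_read_ops)
        [ "$count" -ge "$min" ] && return 0  # enough I/O observed
        sleep 0.25
    done
    return 1  # timed out
}
```

In the run above the first sample already reports 707 read ops against a threshold of 100, so the loop exits on its first iteration.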
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:12.282 15:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:12.282 [2024-10-01 15:57:51.557542] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fe90 is same with the state(6) to be set 00:40:12.282
15:57:51.557970] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fe90 is same with the state(6) to be set 00:40:12.283 [2024-10-01 15:57:51.557977] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fe90 is same with the state(6) to be set 00:40:12.283 [2024-10-01 15:57:51.557984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fe90 is same with the state(6) to be set 00:40:12.283 [2024-10-01 15:57:51.557991] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fe90 is same with the state(6) to be set 00:40:12.283 [2024-10-01 15:57:51.557999] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fe90 is same with the state(6) to be set 00:40:12.283 [2024-10-01 15:57:51.558006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fe90 is same with the state(6) to be set 00:40:12.283 [2024-10-01 15:57:51.558013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fe90 is same with the state(6) to be set 00:40:12.283 [2024-10-01 15:57:51.558020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fe90 is same with the state(6) to be set 00:40:12.283 [2024-10-01 15:57:51.558028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fe90 is same with the state(6) to be set 00:40:12.283 [2024-10-01 15:57:51.558035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fe90 is same with the state(6) to be set 00:40:12.283 [2024-10-01 15:57:51.558045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fe90 is same with the state(6) to be set 00:40:12.283 [2024-10-01 15:57:51.558052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fe90 is same with the state(6) to be set 00:40:12.283 [2024-10-01 15:57:51.558059] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fe90 is same with the state(6) to be set 00:40:12.283 [2024-10-01 15:57:51.558066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fe90 is same with the state(6) to be set 00:40:12.283 [2024-10-01 15:57:51.558267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.283 [2024-10-01 15:57:51.558331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.283 [2024-10-01 15:57:51.558364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.283 [2024-10-01 15:57:51.558379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.283 [2024-10-01 15:57:51.558396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.283 [2024-10-01 15:57:51.558409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.283 [2024-10-01 15:57:51.558425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.283 [2024-10-01 15:57:51.558438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.283 [2024-10-01 15:57:51.558454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.283 [2024-10-01 15:57:51.558467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.283 [2024-10-01 15:57:51.558483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.283 [2024-10-01 15:57:51.558496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.283 [2024-10-01 15:57:51.558512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.283 [2024-10-01 15:57:51.558525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.283 [2024-10-01 15:57:51.558541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.283 [2024-10-01 15:57:51.558554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.283 [2024-10-01 15:57:51.558570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.283 [2024-10-01 15:57:51.558582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.284 [2024-10-01 15:57:51.558598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.284 [2024-10-01 15:57:51.558610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.284 [2024-10-01 15:57:51.558626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:40:12.284 [2024-10-01 15:57:51.558638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.284 [2024-10-01 15:57:51.558664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.284 [2024-10-01 15:57:51.558678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.284 [2024-10-01 15:57:51.558693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.284 [2024-10-01 15:57:51.558705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.284 [2024-10-01 15:57:51.558720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.284 [2024-10-01 15:57:51.558734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.284 [2024-10-01 15:57:51.558749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.284 [2024-10-01 15:57:51.558763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.284 [2024-10-01 15:57:51.558779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.284 [2024-10-01 15:57:51.558791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.284 [2024-10-01 15:57:51.558807] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.284 [2024-10-01 15:57:51.558819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.284 [2024-10-01 15:57:51.558835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.284 [2024-10-01 15:57:51.558847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.284 [2024-10-01 15:57:51.558863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.284 [2024-10-01 15:57:51.558876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.284 [2024-10-01 15:57:51.558891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.284 [2024-10-01 15:57:51.558915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.284 [2024-10-01 15:57:51.558930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.284 [2024-10-01 15:57:51.558943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.284 [2024-10-01 15:57:51.558959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.284 [2024-10-01 15:57:51.558972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.284 [2024-10-01 15:57:51.558988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.284 [2024-10-01 15:57:51.559001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.284 [2024-10-01 15:57:51.559016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.284 [2024-10-01 15:57:51.559032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.284 [2024-10-01 15:57:51.559048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.284 [2024-10-01 15:57:51.559060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.284 [2024-10-01 15:57:51.559076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.284 [2024-10-01 15:57:51.559088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.284 [2024-10-01 15:57:51.559104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.284 [2024-10-01 15:57:51.559117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.284 [2024-10-01 15:57:51.559132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.284 [2024-10-01 15:57:51.559146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.284 [2024-10-01 15:57:51.559161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.284 [2024-10-01 15:57:51.559173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.284 [2024-10-01 15:57:51.559189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.284 [2024-10-01 15:57:51.559202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.284 [2024-10-01 15:57:51.559218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.284 [2024-10-01 15:57:51.559230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.284 [2024-10-01 15:57:51.559245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.284 [2024-10-01 15:57:51.559257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.284 [2024-10-01 15:57:51.559272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.284 [2024-10-01 15:57:51.559286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.284 
[2024-10-01 15:57:51.559302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.284 [2024-10-01 15:57:51.559315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.284 [2024-10-01 15:57:51.559329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.284 [2024-10-01 15:57:51.559343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.284 [2024-10-01 15:57:51.559359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.284 [2024-10-01 15:57:51.559372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.284 [2024-10-01 15:57:51.559390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.284 [2024-10-01 15:57:51.559404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.284 [2024-10-01 15:57:51.559419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.284 [2024-10-01 15:57:51.559433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.284 [2024-10-01 15:57:51.559448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.284 [2024-10-01 15:57:51.559461] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.284 [2024-10-01 15:57:51.559476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.284 [2024-10-01 15:57:51.559489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.284 [2024-10-01 15:57:51.559505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.284 [2024-10-01 15:57:51.559517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.284 [2024-10-01 15:57:51.559533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.284 [2024-10-01 15:57:51.559546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.284 [2024-10-01 15:57:51.559561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.285 [2024-10-01 15:57:51.559574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.285 [2024-10-01 15:57:51.559589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.285 [2024-10-01 15:57:51.559605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.285 [2024-10-01 15:57:51.559621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.285 [2024-10-01 15:57:51.559634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.285 [2024-10-01 15:57:51.559650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.285 [2024-10-01 15:57:51.559663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.285 [2024-10-01 15:57:51.559679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.285 [2024-10-01 15:57:51.559692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.285 [2024-10-01 15:57:51.559707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.285 [2024-10-01 15:57:51.559719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.285 [2024-10-01 15:57:51.559735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.285 [2024-10-01 15:57:51.559750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.285 [2024-10-01 15:57:51.559765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.285 [2024-10-01 15:57:51.559777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:40:12.285 [2024-10-01 15:57:51.559792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.285 [2024-10-01 15:57:51.559805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.285 [2024-10-01 15:57:51.559821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.285 [2024-10-01 15:57:51.559834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.285 [2024-10-01 15:57:51.559849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.285 [2024-10-01 15:57:51.559862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.285 [2024-10-01 15:57:51.559878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.285 [2024-10-01 15:57:51.559890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.285 [2024-10-01 15:57:51.559912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.285 [2024-10-01 15:57:51.559925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.285 [2024-10-01 15:57:51.559941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.285 [2024-10-01 
15:57:51.559953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.285 [2024-10-01 15:57:51.559968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.285 [2024-10-01 15:57:51.559981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.285 [2024-10-01 15:57:51.559996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.285 [2024-10-01 15:57:51.560009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.285 [2024-10-01 15:57:51.560025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.285 [2024-10-01 15:57:51.560039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.285 [2024-10-01 15:57:51.560054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.285 [2024-10-01 15:57:51.560067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.285 [2024-10-01 15:57:51.560083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.285 [2024-10-01 15:57:51.560096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.285 [2024-10-01 15:57:51.560115] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.285 [2024-10-01 15:57:51.560127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.285 [2024-10-01 15:57:51.560143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.285 [2024-10-01 15:57:51.560155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.285 [2024-10-01 15:57:51.560171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.285 [2024-10-01 15:57:51.560183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.285 [2024-10-01 15:57:51.560198] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19720d0 is same with the state(6) to be set 00:40:12.285 [2024-10-01 15:57:51.560286] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19720d0 was disconnected and freed. reset controller. 
00:40:12.285 [2024-10-01 15:57:51.560372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:40:12.285 [2024-10-01 15:57:51.560389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.285 [2024-10-01 15:57:51.560404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:40:12.285 [2024-10-01 15:57:51.560418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.285 [2024-10-01 15:57:51.560433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:40:12.285 [2024-10-01 15:57:51.560446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.285 [2024-10-01 15:57:51.560460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:40:12.285 [2024-10-01 15:57:51.560472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.285 [2024-10-01 15:57:51.560486] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1758e50 is same with the state(6) to be set 00:40:12.285 15:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:12.285 15:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:40:12.285 15:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:40:12.285 [2024-10-01 15:57:51.562213] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:40:12.285 15:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:12.285 task offset: 98304 on job bdev=Nvme0n1 fails 00:40:12.285 00:40:12.285 Latency(us) 00:40:12.285 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:12.285 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:40:12.285 Job: Nvme0n1 ended in about 0.58 seconds with error 00:40:12.285 Verification LBA range: start 0x0 length 0x400 00:40:12.285 Nvme0n1 : 0.58 1315.37 82.21 109.61 0.00 43863.65 5543.25 39321.60 00:40:12.285 =================================================================================================================== 00:40:12.285 Total : 1315.37 82.21 109.61 0.00 43863.65 5543.25 39321.60 00:40:12.285 [2024-10-01 15:57:51.565032] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:40:12.285 [2024-10-01 15:57:51.565084] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1758e50 (9): Bad file descriptor 00:40:12.286 [2024-10-01 15:57:51.566787] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:40:12.286 [2024-10-01 15:57:51.566923] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:40:12.286 [2024-10-01 15:57:51.566957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.286 [2024-10-01 15:57:51.566985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 
00:40:12.286 [2024-10-01 15:57:51.566999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:40:12.286 [2024-10-01 15:57:51.567011] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.286 [2024-10-01 15:57:51.567023] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1758e50 00:40:12.286 [2024-10-01 15:57:51.567059] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1758e50 (9): Bad file descriptor 00:40:12.286 [2024-10-01 15:57:51.567080] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:40:12.286 [2024-10-01 15:57:51.567093] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:40:12.286 [2024-10-01 15:57:51.567111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:40:12.286 [2024-10-01 15:57:51.567135] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:12.286 15:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:12.286 15:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:40:13.228 15:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3440495 00:40:13.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3440495) - No such process 00:40:13.228 15:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:40:13.228 15:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:40:13.228 15:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:40:13.228 15:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:40:13.228 15:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:40:13.228 15:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:40:13.228 15:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:40:13.228 15:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:40:13.228 { 00:40:13.228 "params": { 00:40:13.228 "name": "Nvme$subsystem", 00:40:13.228 "trtype": "$TEST_TRANSPORT", 00:40:13.228 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:40:13.228 "adrfam": "ipv4", 00:40:13.228 "trsvcid": "$NVMF_PORT", 00:40:13.229 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:13.229 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:13.229 "hdgst": ${hdgst:-false}, 00:40:13.229 "ddgst": ${ddgst:-false} 00:40:13.229 }, 00:40:13.229 "method": "bdev_nvme_attach_controller" 00:40:13.229 } 00:40:13.229 EOF 00:40:13.229 )") 00:40:13.229 15:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:40:13.229 15:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:40:13.229 15:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:40:13.229 15:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:40:13.229 "params": { 00:40:13.229 "name": "Nvme0", 00:40:13.229 "trtype": "tcp", 00:40:13.229 "traddr": "10.0.0.2", 00:40:13.229 "adrfam": "ipv4", 00:40:13.229 "trsvcid": "4420", 00:40:13.229 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:13.229 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:13.229 "hdgst": false, 00:40:13.229 "ddgst": false 00:40:13.229 }, 00:40:13.229 "method": "bdev_nvme_attach_controller" 00:40:13.229 }' 00:40:13.229 [2024-10-01 15:57:52.636796] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:40:13.229 [2024-10-01 15:57:52.636876] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3440890 ] 00:40:13.229 [2024-10-01 15:57:52.671974] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:40:13.490 [2024-10-01 15:57:52.720437] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:13.490 [2024-10-01 15:57:52.767274] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:40:13.751 Running I/O for 1 seconds... 00:40:14.692 1613.00 IOPS, 100.81 MiB/s 00:40:14.692 Latency(us) 00:40:14.692 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:14.692 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:40:14.692 Verification LBA range: start 0x0 length 0x400 00:40:14.692 Nvme0n1 : 1.06 1588.36 99.27 0.00 0.00 37975.18 3099.31 46967.47 00:40:14.692 =================================================================================================================== 00:40:14.692 Total : 1588.36 99.27 0.00 0.00 37975.18 3099.31 46967.47 00:40:14.692 15:57:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:40:14.692 15:57:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:40:14.692 15:57:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:40:14.692 15:57:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:40:14.951 15:57:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:40:14.951 15:57:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # nvmfcleanup 00:40:14.951 15:57:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:40:14.951 15:57:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:14.951 15:57:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:40:14.951 15:57:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:14.951 15:57:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:14.951 rmmod nvme_tcp 00:40:14.951 rmmod nvme_fabrics 00:40:14.951 rmmod nvme_keyring 00:40:14.951 15:57:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:14.951 15:57:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:40:14.951 15:57:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:40:14.951 15:57:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@513 -- # '[' -n 3440274 ']' 00:40:14.951 15:57:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@514 -- # killprocess 3440274 00:40:14.951 15:57:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 3440274 ']' 00:40:14.951 15:57:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 3440274 00:40:14.951 15:57:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:40:14.951 15:57:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:14.951 15:57:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3440274 00:40:14.951 15:57:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # 
process_name=reactor_1 00:40:14.951 15:57:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:40:14.951 15:57:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3440274' 00:40:14.951 killing process with pid 3440274 00:40:14.951 15:57:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 3440274 00:40:14.951 15:57:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 3440274 00:40:15.211 [2024-10-01 15:57:54.407815] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:40:15.211 15:57:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:40:15.211 15:57:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:40:15.211 15:57:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:40:15.211 15:57:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:40:15.211 15:57:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-save 00:40:15.211 15:57:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:40:15.211 15:57:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-restore 00:40:15.211 15:57:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:15.211 15:57:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:15.211 15:57:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:15.211 15:57:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:15.211 15:57:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:17.117 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:17.117 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:40:17.117 00:40:17.117 real 0m14.702s 00:40:17.117 user 0m19.343s 00:40:17.117 sys 0m7.603s 00:40:17.117 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:17.117 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:17.117 ************************************ 00:40:17.117 END TEST nvmf_host_management 00:40:17.117 ************************************ 00:40:17.117 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:40:17.117 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:40:17.117 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:17.117 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:17.378 ************************************ 00:40:17.378 START TEST nvmf_lvol 00:40:17.378 ************************************ 00:40:17.378 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:40:17.378 * Looking for test storage... 00:40:17.378 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:17.378 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:40:17.378 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:40:17.378 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:40:17.378 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:40:17.378 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:17.378 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:17.378 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:17.378 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:40:17.378 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:40:17.378 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:40:17.378 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:40:17.378 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:40:17.378 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:40:17.378 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:40:17.378 15:57:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:17.378 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:40:17.378 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:40:17.378 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:17.378 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:17.378 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:40:17.378 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:40:17.378 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:17.378 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:40:17.378 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:40:17.378 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:40:17.378 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:40:17.378 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:17.378 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:40:17.378 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:40:17.378 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:17.379 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:40:17.379 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:40:17.379 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:17.379 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:40:17.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:17.379 --rc genhtml_branch_coverage=1 00:40:17.379 --rc genhtml_function_coverage=1 00:40:17.379 --rc genhtml_legend=1 00:40:17.379 --rc geninfo_all_blocks=1 00:40:17.379 --rc geninfo_unexecuted_blocks=1 00:40:17.379 00:40:17.379 ' 00:40:17.379 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:40:17.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:17.379 --rc genhtml_branch_coverage=1 00:40:17.379 --rc genhtml_function_coverage=1 00:40:17.379 --rc genhtml_legend=1 00:40:17.379 --rc geninfo_all_blocks=1 00:40:17.379 --rc geninfo_unexecuted_blocks=1 00:40:17.379 00:40:17.379 ' 00:40:17.379 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:40:17.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:17.379 --rc genhtml_branch_coverage=1 00:40:17.379 --rc genhtml_function_coverage=1 00:40:17.379 --rc genhtml_legend=1 00:40:17.379 --rc geninfo_all_blocks=1 00:40:17.379 --rc geninfo_unexecuted_blocks=1 00:40:17.379 00:40:17.379 ' 00:40:17.379 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:40:17.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:17.379 --rc genhtml_branch_coverage=1 00:40:17.379 --rc genhtml_function_coverage=1 00:40:17.379 --rc genhtml_legend=1 00:40:17.379 --rc geninfo_all_blocks=1 00:40:17.379 --rc 
geninfo_unexecuted_blocks=1 00:40:17.379 00:40:17.379 ' 00:40:17.379 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:17.379 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:40:17.379 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:17.379 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:17.379 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:17.379 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:17.379 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:17.379 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:17.379 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:17.379 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:17.379 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:17.379 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:17.379 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:17.379 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:17.379 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:17.379 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:17.379 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:17.379 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:17.379 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:17.379 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:40:17.379 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:17.379 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:17.379 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:17.379 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:17.379 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:17.379 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:17.379 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:40:17.379 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:17.640 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:40:17.640 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:17.640 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:17.640 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:17.640 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:17.640 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:17.640 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:17.640 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:17.640 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:17.640 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:17.640 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:17.640 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:17.640 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:17.640 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:40:17.640 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:40:17.640 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:17.640 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:40:17.640 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:40:17.640 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:17.640 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@472 -- # prepare_net_devs 00:40:17.640 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@434 -- # local -g is_hw=no 00:40:17.640 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@436 -- # remove_spdk_ns 00:40:17.640 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:17.640 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:17.640 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:17.640 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:40:17.640 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:40:17.640 
15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:40:17.640 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:40:26.019 15:58:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:40:26.019 Found 0000:31:00.0 (0x8086 - 0x159b) 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:40:26.019 Found 0000:31:00.1 (0x8086 - 0x159b) 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ up == up ]] 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:40:26.019 Found net devices under 0000:31:00.0: cvl_0_0 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@407 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ up == up ]] 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:40:26.019 Found net devices under 0000:31:00.1: cvl_0_1 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # is_hw=yes 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:26.019 15:58:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:26.019 
15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:26.019 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:26.020 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:26.020 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:26.020 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:26.020 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:26.020 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.557 ms 00:40:26.020 00:40:26.020 --- 10.0.0.2 ping statistics --- 00:40:26.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:26.020 rtt min/avg/max/mdev = 0.557/0.557/0.557/0.000 ms 00:40:26.020 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:26.020 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:26.020 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:40:26.020 00:40:26.020 --- 10.0.0.1 ping statistics --- 00:40:26.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:26.020 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:40:26.020 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:26.020 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # return 0 00:40:26.020 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:40:26.020 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:26.020 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:40:26.020 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:40:26.020 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:26.020 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:40:26.020 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:40:26.020 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:40:26.020 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:40:26.020 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:26.020 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:26.020 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@505 -- # nvmfpid=3445411 
00:40:26.020 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@506 -- # waitforlisten 3445411 00:40:26.020 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:40:26.020 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 3445411 ']' 00:40:26.020 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:26.020 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:26.020 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:26.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:26.020 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:26.020 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:26.020 [2024-10-01 15:58:04.421686] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:26.020 [2024-10-01 15:58:04.422663] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:40:26.020 [2024-10-01 15:58:04.422700] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:26.020 [2024-10-01 15:58:04.459010] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:40:26.020 [2024-10-01 15:58:04.505643] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:26.020 [2024-10-01 15:58:04.537548] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:26.020 [2024-10-01 15:58:04.537583] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:26.020 [2024-10-01 15:58:04.537595] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:26.020 [2024-10-01 15:58:04.537604] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:26.020 [2024-10-01 15:58:04.537611] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:26.020 [2024-10-01 15:58:04.537754] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:40:26.020 [2024-10-01 15:58:04.537919] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:40:26.020 [2024-10-01 15:58:04.537921] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:40:26.020 [2024-10-01 15:58:04.600288] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:26.020 [2024-10-01 15:58:04.601296] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:26.020 [2024-10-01 15:58:04.601670] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:26.020 [2024-10-01 15:58:04.601862] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:40:26.020 15:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:26.020 15:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:40:26.020 15:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:40:26.020 15:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:26.020 15:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:26.020 15:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:26.020 15:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:40:26.020 [2024-10-01 15:58:05.398817] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:26.020 15:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:26.281 15:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:40:26.281 15:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:26.542 15:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:40:26.542 15:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:40:26.803 15:58:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:40:26.803 15:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=082ba103-ba47-4736-91c1-ab14f5038afa 00:40:26.803 15:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 082ba103-ba47-4736-91c1-ab14f5038afa lvol 20 00:40:27.063 15:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=9eabad53-ae0f-4ec5-aa35-3afd90c36389 00:40:27.063 15:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:40:27.323 15:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9eabad53-ae0f-4ec5-aa35-3afd90c36389 00:40:27.324 15:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:27.584 [2024-10-01 15:58:06.922652] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:27.584 15:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:27.844 15:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3445921 00:40:27.844 15:58:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:40:27.844 15:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:40:28.785 15:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 9eabad53-ae0f-4ec5-aa35-3afd90c36389 MY_SNAPSHOT 00:40:29.047 15:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=cf401d67-25ff-46cb-ad68-b78eb1357359 00:40:29.047 15:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 9eabad53-ae0f-4ec5-aa35-3afd90c36389 30 00:40:29.308 15:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone cf401d67-25ff-46cb-ad68-b78eb1357359 MY_CLONE 00:40:29.569 15:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=24604e0f-c801-49ca-b031-d774f4739a36 00:40:29.569 15:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 24604e0f-c801-49ca-b031-d774f4739a36 00:40:30.138 15:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3445921 00:40:38.274 Initializing NVMe Controllers 00:40:38.274 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:40:38.274 Controller IO queue size 128, less than required. 
00:40:38.274 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:40:38.274 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:40:38.275 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:40:38.275 Initialization complete. Launching workers. 00:40:38.275 ======================================================== 00:40:38.275 Latency(us) 00:40:38.275 Device Information : IOPS MiB/s Average min max 00:40:38.275 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15818.70 61.79 8094.02 1559.56 72016.31 00:40:38.275 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15969.80 62.38 8017.27 1758.86 70322.15 00:40:38.275 ======================================================== 00:40:38.275 Total : 31788.50 124.17 8055.46 1559.56 72016.31 00:40:38.275 00:40:38.275 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:38.275 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9eabad53-ae0f-4ec5-aa35-3afd90c36389 00:40:38.275 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 082ba103-ba47-4736-91c1-ab14f5038afa 00:40:38.535 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:40:38.535 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:40:38.535 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:40:38.535 15:58:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # nvmfcleanup 00:40:38.535 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:40:38.535 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:38.535 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:40:38.535 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:38.535 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:38.535 rmmod nvme_tcp 00:40:38.535 rmmod nvme_fabrics 00:40:38.535 rmmod nvme_keyring 00:40:38.535 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:38.535 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:40:38.535 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:40:38.535 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@513 -- # '[' -n 3445411 ']' 00:40:38.535 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@514 -- # killprocess 3445411 00:40:38.535 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 3445411 ']' 00:40:38.535 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 3445411 00:40:38.535 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:40:38.535 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:38.535 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3445411 
00:40:38.797 15:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:38.797 15:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:38.797 15:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3445411' 00:40:38.797 killing process with pid 3445411 00:40:38.797 15:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 3445411 00:40:38.797 15:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 3445411 00:40:38.797 15:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:40:38.797 15:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:40:38.797 15:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:40:38.797 15:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:40:38.797 15:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-save 00:40:38.797 15:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:40:38.797 15:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-restore 00:40:38.797 15:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:38.797 15:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:38.797 15:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:38.797 15:58:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:38.797 15:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:41.347 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:41.347 00:40:41.347 real 0m23.653s 00:40:41.347 user 0m55.300s 00:40:41.347 sys 0m10.730s 00:40:41.347 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:41.347 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:41.347 ************************************ 00:40:41.347 END TEST nvmf_lvol 00:40:41.347 ************************************ 00:40:41.347 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:40:41.347 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:40:41.347 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:41.347 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:41.347 ************************************ 00:40:41.347 START TEST nvmf_lvs_grow 00:40:41.347 ************************************ 00:40:41.347 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:40:41.347 * Looking for test storage... 
00:40:41.347 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:41.347 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:40:41.347 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:40:41.347 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:40:41.347 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:40:41.347 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:41.347 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:41.347 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:41.347 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:40:41.347 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:40:41.347 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:40:41.347 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:40:41.347 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:40:41.347 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:40:41.347 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:40:41.347 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:41.347 15:58:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:40:41.347 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:40:41.347 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:41.347 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:41.347 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:40:41.348 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:40:41.348 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:41.348 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:40:41.348 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:40:41.348 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:40:41.348 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:40:41.348 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:41.348 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:40:41.348 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:40:41.348 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:41.348 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:41.348 15:58:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:40:41.348 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:41.348 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:40:41.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:41.348 --rc genhtml_branch_coverage=1 00:40:41.348 --rc genhtml_function_coverage=1 00:40:41.348 --rc genhtml_legend=1 00:40:41.348 --rc geninfo_all_blocks=1 00:40:41.348 --rc geninfo_unexecuted_blocks=1 00:40:41.348 00:40:41.348 ' 00:40:41.348 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:40:41.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:41.348 --rc genhtml_branch_coverage=1 00:40:41.348 --rc genhtml_function_coverage=1 00:40:41.348 --rc genhtml_legend=1 00:40:41.348 --rc geninfo_all_blocks=1 00:40:41.348 --rc geninfo_unexecuted_blocks=1 00:40:41.348 00:40:41.348 ' 00:40:41.348 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:40:41.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:41.348 --rc genhtml_branch_coverage=1 00:40:41.348 --rc genhtml_function_coverage=1 00:40:41.348 --rc genhtml_legend=1 00:40:41.348 --rc geninfo_all_blocks=1 00:40:41.348 --rc geninfo_unexecuted_blocks=1 00:40:41.348 00:40:41.348 ' 00:40:41.348 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:40:41.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:41.348 --rc genhtml_branch_coverage=1 00:40:41.348 --rc genhtml_function_coverage=1 00:40:41.348 --rc genhtml_legend=1 00:40:41.348 --rc geninfo_all_blocks=1 00:40:41.348 --rc 
geninfo_unexecuted_blocks=1 00:40:41.348 00:40:41.348 ' 00:40:41.348 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:41.348 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:40:41.348 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:41.348 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:41.348 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:41.348 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:41.348 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:41.348 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:41.348 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:41.348 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:41.348 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:41.348 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:41.348 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:41.348 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:41.348 15:58:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:41.348 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:41.348 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:41.348 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:41.348 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:41.348 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:40:41.348 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:41.348 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:41.348 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:41.348 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:41.348 15:58:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:41.348 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:41.348 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:40:41.348 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:41.349 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:40:41.349 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:41.349 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:41.349 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:41.349 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:41.349 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:41.349 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:41.349 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:41.349 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:41.349 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:41.349 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:41.349 15:58:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:41.349 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:40:41.349 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:40:41.349 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:40:41.349 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:41.349 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@472 -- # prepare_net_devs 00:40:41.349 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@434 -- # local -g is_hw=no 00:40:41.349 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@436 -- # remove_spdk_ns 00:40:41.349 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:41.349 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:41.349 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:41.349 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:40:41.349 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:40:41.349 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:40:41.349 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:49.491 
15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:49.491 15:58:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:40:49.491 15:58:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:40:49.491 Found 0000:31:00.0 (0x8086 - 0x159b) 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:40:49.491 Found 0000:31:00.1 (0x8086 - 0x159b) 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ up == up ]] 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:40:49.491 Found net devices under 0000:31:00.0: cvl_0_0 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:40:49.491 15:58:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ up == up ]] 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:40:49.491 Found net devices under 0000:31:00.1: cvl_0_1 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:40:49.491 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:40:49.492 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # is_hw=yes 00:40:49.492 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:40:49.492 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:40:49.492 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:40:49.492 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:49.492 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:40:49.492 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:49.492 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:49.492 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:49.492 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:49.492 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:49.492 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:49.492 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:49.492 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:49.492 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:49.492 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:49.492 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:49.492 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:49.492 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:49.492 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:49.492 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 
-- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:49.492 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:49.492 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:49.492 15:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:49.492 15:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:49.492 15:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:49.492 15:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:49.492 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:49.492 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.593 ms 00:40:49.492 00:40:49.492 --- 10.0.0.2 ping statistics --- 00:40:49.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:49.492 rtt min/avg/max/mdev = 0.593/0.593/0.593/0.000 ms 00:40:49.492 15:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:49.492 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:49.492 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms 00:40:49.492 00:40:49.492 --- 10.0.0.1 ping statistics --- 00:40:49.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:49.492 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:40:49.492 15:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:49.492 15:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # return 0 00:40:49.492 15:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:40:49.492 15:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:49.492 15:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:40:49.492 15:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:40:49.492 15:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:49.492 15:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:40:49.492 15:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:40:49.492 15:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:40:49.492 15:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:40:49.492 15:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:49.492 15:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:49.492 15:58:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@505 -- # nvmfpid=3452184 00:40:49.492 15:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@506 -- # waitforlisten 3452184 00:40:49.492 15:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:40:49.492 15:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 3452184 ']' 00:40:49.492 15:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:49.492 15:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:49.492 15:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:49.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:49.492 15:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:49.492 15:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:49.492 [2024-10-01 15:58:28.212128] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:49.492 [2024-10-01 15:58:28.213096] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
00:40:49.492 [2024-10-01 15:58:28.213132] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:49.492 [2024-10-01 15:58:28.249976] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:40:49.492 [2024-10-01 15:58:28.299671] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:49.492 [2024-10-01 15:58:28.330527] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:49.492 [2024-10-01 15:58:28.330564] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:49.492 [2024-10-01 15:58:28.330573] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:49.492 [2024-10-01 15:58:28.330582] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:49.492 [2024-10-01 15:58:28.330589] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:49.492 [2024-10-01 15:58:28.330610] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:40:49.492 [2024-10-01 15:58:28.378297] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:49.492 [2024-10-01 15:58:28.378555] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:40:49.757 15:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:49.757 15:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:40:49.757 15:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:40:49.757 15:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:49.757 15:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:49.757 15:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:49.757 15:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:40:50.018 [2024-10-01 15:58:29.227434] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:50.018 15:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:40:50.018 15:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:40:50.018 15:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:50.018 15:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:50.018 ************************************ 00:40:50.018 START TEST lvs_grow_clean 00:40:50.018 ************************************ 00:40:50.018 15:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:40:50.018 15:58:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:40:50.018 15:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:40:50.018 15:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:40:50.018 15:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:40:50.018 15:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:40:50.018 15:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:40:50.018 15:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:50.018 15:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:50.018 15:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:50.281 15:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:40:50.281 15:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:40:50.542 15:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=7d7dde11-64e3-4749-b5c9-1994930d578f 00:40:50.542 15:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7d7dde11-64e3-4749-b5c9-1994930d578f 00:40:50.542 15:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:40:50.542 15:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:40:50.542 15:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:40:50.542 15:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7d7dde11-64e3-4749-b5c9-1994930d578f lvol 150 00:40:50.803 15:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=6a0d34b5-b38f-46ac-b506-b7ae8d1f5a05 00:40:50.803 15:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:50.803 15:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:40:50.803 [2024-10-01 15:58:30.255108] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:40:50.803 [2024-10-01 15:58:30.255256] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:40:51.064 true 00:40:51.064 15:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7d7dde11-64e3-4749-b5c9-1994930d578f 00:40:51.064 15:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:40:51.064 15:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:40:51.064 15:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:40:51.324 15:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6a0d34b5-b38f-46ac-b506-b7ae8d1f5a05 00:40:51.324 15:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:51.584 [2024-10-01 15:58:30.919638] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:51.584 15:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:51.845 15:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:40:51.845 15:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3452760 00:40:51.845 15:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:51.845 15:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3452760 /var/tmp/bdevperf.sock 00:40:51.845 15:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 3452760 ']' 00:40:51.845 15:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:40:51.845 15:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:51.845 15:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:51.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:40:51.845 15:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:51.845 15:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:40:51.845 [2024-10-01 15:58:31.132807] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:40:51.845 [2024-10-01 15:58:31.132861] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3452760 ] 00:40:51.845 [2024-10-01 15:58:31.163004] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:40:51.845 [2024-10-01 15:58:31.210630] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:51.845 [2024-10-01 15:58:31.244224] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:40:52.106 15:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:52.106 15:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:40:52.106 15:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:40:52.367 Nvme0n1 00:40:52.367 15:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:40:52.367 [ 
00:40:52.367 { 00:40:52.367 "name": "Nvme0n1", 00:40:52.367 "aliases": [ 00:40:52.367 "6a0d34b5-b38f-46ac-b506-b7ae8d1f5a05" 00:40:52.367 ], 00:40:52.367 "product_name": "NVMe disk", 00:40:52.367 "block_size": 4096, 00:40:52.367 "num_blocks": 38912, 00:40:52.367 "uuid": "6a0d34b5-b38f-46ac-b506-b7ae8d1f5a05", 00:40:52.367 "numa_id": 0, 00:40:52.367 "assigned_rate_limits": { 00:40:52.367 "rw_ios_per_sec": 0, 00:40:52.367 "rw_mbytes_per_sec": 0, 00:40:52.367 "r_mbytes_per_sec": 0, 00:40:52.367 "w_mbytes_per_sec": 0 00:40:52.367 }, 00:40:52.367 "claimed": false, 00:40:52.367 "zoned": false, 00:40:52.367 "supported_io_types": { 00:40:52.367 "read": true, 00:40:52.367 "write": true, 00:40:52.367 "unmap": true, 00:40:52.367 "flush": true, 00:40:52.367 "reset": true, 00:40:52.367 "nvme_admin": true, 00:40:52.367 "nvme_io": true, 00:40:52.367 "nvme_io_md": false, 00:40:52.367 "write_zeroes": true, 00:40:52.367 "zcopy": false, 00:40:52.367 "get_zone_info": false, 00:40:52.367 "zone_management": false, 00:40:52.367 "zone_append": false, 00:40:52.367 "compare": true, 00:40:52.367 "compare_and_write": true, 00:40:52.367 "abort": true, 00:40:52.367 "seek_hole": false, 00:40:52.367 "seek_data": false, 00:40:52.367 "copy": true, 00:40:52.367 "nvme_iov_md": false 00:40:52.367 }, 00:40:52.367 "memory_domains": [ 00:40:52.367 { 00:40:52.367 "dma_device_id": "system", 00:40:52.367 "dma_device_type": 1 00:40:52.367 } 00:40:52.367 ], 00:40:52.367 "driver_specific": { 00:40:52.367 "nvme": [ 00:40:52.367 { 00:40:52.367 "trid": { 00:40:52.367 "trtype": "TCP", 00:40:52.367 "adrfam": "IPv4", 00:40:52.367 "traddr": "10.0.0.2", 00:40:52.367 "trsvcid": "4420", 00:40:52.367 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:40:52.367 }, 00:40:52.367 "ctrlr_data": { 00:40:52.367 "cntlid": 1, 00:40:52.367 "vendor_id": "0x8086", 00:40:52.367 "model_number": "SPDK bdev Controller", 00:40:52.367 "serial_number": "SPDK0", 00:40:52.367 "firmware_revision": "25.01", 00:40:52.367 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:40:52.367 "oacs": { 00:40:52.367 "security": 0, 00:40:52.367 "format": 0, 00:40:52.367 "firmware": 0, 00:40:52.367 "ns_manage": 0 00:40:52.367 }, 00:40:52.367 "multi_ctrlr": true, 00:40:52.367 "ana_reporting": false 00:40:52.367 }, 00:40:52.367 "vs": { 00:40:52.367 "nvme_version": "1.3" 00:40:52.367 }, 00:40:52.367 "ns_data": { 00:40:52.367 "id": 1, 00:40:52.367 "can_share": true 00:40:52.367 } 00:40:52.367 } 00:40:52.367 ], 00:40:52.367 "mp_policy": "active_passive" 00:40:52.367 } 00:40:52.367 } 00:40:52.367 ] 00:40:52.627 15:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:40:52.627 15:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3452900 00:40:52.627 15:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:40:52.627 Running I/O for 10 seconds... 
00:40:53.567 Latency(us) 00:40:53.567 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:53.567 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:53.567 Nvme0n1 : 1.00 17077.00 66.71 0.00 0.00 0.00 0.00 0.00 00:40:53.567 =================================================================================================================== 00:40:53.567 Total : 17077.00 66.71 0.00 0.00 0.00 0.00 0.00 00:40:53.567 00:40:54.509 15:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7d7dde11-64e3-4749-b5c9-1994930d578f 00:40:54.509 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:54.509 Nvme0n1 : 2.00 17498.00 68.35 0.00 0.00 0.00 0.00 0.00 00:40:54.509 =================================================================================================================== 00:40:54.509 Total : 17498.00 68.35 0.00 0.00 0.00 0.00 0.00 00:40:54.509 00:40:54.771 true 00:40:54.771 15:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7d7dde11-64e3-4749-b5c9-1994930d578f 00:40:54.771 15:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:40:55.032 15:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:40:55.032 15:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:40:55.032 15:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3452900 00:40:55.604 Job: Nvme0n1 (Core Mask 0x2, 
workload: randwrite, depth: 128, IO size: 4096) 00:40:55.604 Nvme0n1 : 3.00 17382.67 67.90 0.00 0.00 0.00 0.00 0.00 00:40:55.604 =================================================================================================================== 00:40:55.604 Total : 17382.67 67.90 0.00 0.00 0.00 0.00 0.00 00:40:55.604 00:40:56.543 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:56.543 Nvme0n1 : 4.00 17517.00 68.43 0.00 0.00 0.00 0.00 0.00 00:40:56.543 =================================================================================================================== 00:40:56.543 Total : 17517.00 68.43 0.00 0.00 0.00 0.00 0.00 00:40:56.543 00:40:57.482 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:57.482 Nvme0n1 : 5.00 18903.20 73.84 0.00 0.00 0.00 0.00 0.00 00:40:57.482 =================================================================================================================== 00:40:57.482 Total : 18903.20 73.84 0.00 0.00 0.00 0.00 0.00 00:40:57.482 00:40:58.864 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:58.864 Nvme0n1 : 6.00 19987.67 78.08 0.00 0.00 0.00 0.00 0.00 00:40:58.864 =================================================================================================================== 00:40:58.864 Total : 19987.67 78.08 0.00 0.00 0.00 0.00 0.00 00:40:58.864 00:40:59.810 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:59.810 Nvme0n1 : 7.00 20760.57 81.10 0.00 0.00 0.00 0.00 0.00 00:40:59.810 =================================================================================================================== 00:40:59.810 Total : 20760.57 81.10 0.00 0.00 0.00 0.00 0.00 00:40:59.810 00:41:00.753 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:00.753 Nvme0n1 : 8.00 21341.38 83.36 0.00 0.00 0.00 0.00 0.00 00:41:00.753 
=================================================================================================================== 00:41:00.753 Total : 21341.38 83.36 0.00 0.00 0.00 0.00 0.00 00:41:00.753 00:41:01.698 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:01.698 Nvme0n1 : 9.00 21791.00 85.12 0.00 0.00 0.00 0.00 0.00 00:41:01.698 =================================================================================================================== 00:41:01.698 Total : 21791.00 85.12 0.00 0.00 0.00 0.00 0.00 00:41:01.698 00:41:02.641 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:02.641 Nvme0n1 : 10.00 22159.10 86.56 0.00 0.00 0.00 0.00 0.00 00:41:02.641 =================================================================================================================== 00:41:02.641 Total : 22159.10 86.56 0.00 0.00 0.00 0.00 0.00 00:41:02.641 00:41:02.641 00:41:02.641 Latency(us) 00:41:02.641 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:02.641 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:02.641 Nvme0n1 : 10.01 22160.62 86.56 0.00 0.00 5772.60 3099.31 31238.83 00:41:02.641 =================================================================================================================== 00:41:02.641 Total : 22160.62 86.56 0.00 0.00 5772.60 3099.31 31238.83 00:41:02.641 { 00:41:02.641 "results": [ 00:41:02.641 { 00:41:02.641 "job": "Nvme0n1", 00:41:02.641 "core_mask": "0x2", 00:41:02.641 "workload": "randwrite", 00:41:02.641 "status": "finished", 00:41:02.641 "queue_depth": 128, 00:41:02.641 "io_size": 4096, 00:41:02.641 "runtime": 10.00509, 00:41:02.641 "iops": 22160.620244295653, 00:41:02.641 "mibps": 86.5649228292799, 00:41:02.641 "io_failed": 0, 00:41:02.641 "io_timeout": 0, 00:41:02.641 "avg_latency_us": 5772.601025261705, 00:41:02.641 "min_latency_us": 3099.306666666667, 00:41:02.641 "max_latency_us": 31238.826666666668 00:41:02.641 } 
00:41:02.641 ], 00:41:02.641 "core_count": 1 00:41:02.641 } 00:41:02.641 15:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3452760 00:41:02.641 15:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 3452760 ']' 00:41:02.641 15:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 3452760 00:41:02.641 15:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:41:02.641 15:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:02.641 15:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3452760 00:41:02.641 15:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:41:02.641 15:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:41:02.641 15:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3452760' 00:41:02.641 killing process with pid 3452760 00:41:02.641 15:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 3452760 00:41:02.641 Received shutdown signal, test time was about 10.000000 seconds 00:41:02.641 00:41:02.641 Latency(us) 00:41:02.641 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:02.641 =================================================================================================================== 00:41:02.641 Total : 0.00 0.00 0.00 0.00 0.00 0.00 
0.00 00:41:02.641 15:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 3452760 00:41:02.902 15:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:02.902 15:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:03.163 15:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7d7dde11-64e3-4749-b5c9-1994930d578f 00:41:03.163 15:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:41:03.424 15:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:41:03.424 15:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:41:03.424 15:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:41:03.424 [2024-10-01 15:58:42.811147] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:41:03.424 15:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7d7dde11-64e3-4749-b5c9-1994930d578f 00:41:03.424 15:58:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:41:03.424 15:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7d7dde11-64e3-4749-b5c9-1994930d578f 00:41:03.425 15:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:03.425 15:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:03.425 15:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:03.425 15:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:03.425 15:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:03.425 15:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:03.425 15:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:03.425 15:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:41:03.425 15:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7d7dde11-64e3-4749-b5c9-1994930d578f 00:41:03.686 request: 00:41:03.686 { 00:41:03.686 "uuid": "7d7dde11-64e3-4749-b5c9-1994930d578f", 00:41:03.686 "method": "bdev_lvol_get_lvstores", 00:41:03.686 "req_id": 1 00:41:03.686 } 00:41:03.686 Got JSON-RPC error response 00:41:03.686 response: 00:41:03.686 { 00:41:03.686 "code": -19, 00:41:03.686 "message": "No such device" 00:41:03.686 } 00:41:03.686 15:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:41:03.686 15:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:41:03.686 15:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:41:03.687 15:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:41:03.687 15:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:41:03.947 aio_bdev 00:41:03.947 15:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6a0d34b5-b38f-46ac-b506-b7ae8d1f5a05 00:41:03.947 15:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=6a0d34b5-b38f-46ac-b506-b7ae8d1f5a05 00:41:03.947 15:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:41:03.947 15:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean 
-- common/autotest_common.sh@901 -- # local i 00:41:03.947 15:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:41:03.947 15:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:41:03.947 15:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:41:03.947 15:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6a0d34b5-b38f-46ac-b506-b7ae8d1f5a05 -t 2000 00:41:04.209 [ 00:41:04.209 { 00:41:04.209 "name": "6a0d34b5-b38f-46ac-b506-b7ae8d1f5a05", 00:41:04.209 "aliases": [ 00:41:04.209 "lvs/lvol" 00:41:04.209 ], 00:41:04.209 "product_name": "Logical Volume", 00:41:04.209 "block_size": 4096, 00:41:04.209 "num_blocks": 38912, 00:41:04.209 "uuid": "6a0d34b5-b38f-46ac-b506-b7ae8d1f5a05", 00:41:04.209 "assigned_rate_limits": { 00:41:04.209 "rw_ios_per_sec": 0, 00:41:04.209 "rw_mbytes_per_sec": 0, 00:41:04.209 "r_mbytes_per_sec": 0, 00:41:04.209 "w_mbytes_per_sec": 0 00:41:04.209 }, 00:41:04.209 "claimed": false, 00:41:04.209 "zoned": false, 00:41:04.209 "supported_io_types": { 00:41:04.209 "read": true, 00:41:04.209 "write": true, 00:41:04.209 "unmap": true, 00:41:04.209 "flush": false, 00:41:04.209 "reset": true, 00:41:04.209 "nvme_admin": false, 00:41:04.209 "nvme_io": false, 00:41:04.209 "nvme_io_md": false, 00:41:04.209 "write_zeroes": true, 00:41:04.209 "zcopy": false, 00:41:04.209 "get_zone_info": false, 00:41:04.209 "zone_management": false, 00:41:04.209 "zone_append": false, 00:41:04.209 "compare": false, 00:41:04.209 "compare_and_write": false, 00:41:04.209 "abort": false, 00:41:04.209 "seek_hole": true, 00:41:04.209 
"seek_data": true, 00:41:04.209 "copy": false, 00:41:04.209 "nvme_iov_md": false 00:41:04.209 }, 00:41:04.209 "driver_specific": { 00:41:04.209 "lvol": { 00:41:04.209 "lvol_store_uuid": "7d7dde11-64e3-4749-b5c9-1994930d578f", 00:41:04.209 "base_bdev": "aio_bdev", 00:41:04.209 "thin_provision": false, 00:41:04.209 "num_allocated_clusters": 38, 00:41:04.209 "snapshot": false, 00:41:04.209 "clone": false, 00:41:04.209 "esnap_clone": false 00:41:04.209 } 00:41:04.209 } 00:41:04.209 } 00:41:04.209 ] 00:41:04.209 15:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:41:04.209 15:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:41:04.209 15:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7d7dde11-64e3-4749-b5c9-1994930d578f 00:41:04.470 15:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:41:04.470 15:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7d7dde11-64e3-4749-b5c9-1994930d578f 00:41:04.470 15:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:41:04.470 15:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:41:04.470 15:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 
6a0d34b5-b38f-46ac-b506-b7ae8d1f5a05 00:41:04.730 15:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7d7dde11-64e3-4749-b5c9-1994930d578f 00:41:04.991 15:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:41:04.991 15:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:04.991 00:41:04.991 real 0m15.078s 00:41:04.991 user 0m14.720s 00:41:04.991 sys 0m1.351s 00:41:04.991 15:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:04.991 15:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:41:04.991 ************************************ 00:41:04.991 END TEST lvs_grow_clean 00:41:04.991 ************************************ 00:41:04.991 15:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:41:04.991 15:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:41:04.991 15:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:04.991 15:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:41:05.251 ************************************ 00:41:05.251 START TEST lvs_grow_dirty 00:41:05.251 ************************************ 00:41:05.251 15:58:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:41:05.251 15:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:41:05.251 15:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:41:05.252 15:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:41:05.252 15:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:41:05.252 15:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:41:05.252 15:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:41:05.252 15:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:05.252 15:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:05.252 15:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:41:05.252 15:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:41:05.252 15:58:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:41:05.512 15:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=050cd599-369f-4443-adcf-9c161491381b 00:41:05.512 15:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 050cd599-369f-4443-adcf-9c161491381b 00:41:05.512 15:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:41:05.773 15:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:41:05.773 15:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:41:05.773 15:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 050cd599-369f-4443-adcf-9c161491381b lvol 150 00:41:05.773 15:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=6149d3c5-6966-470a-abc5-7860b3050219 00:41:05.773 15:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:05.773 15:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan 
aio_bdev 00:41:06.033 [2024-10-01 15:58:45.335097] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:41:06.033 [2024-10-01 15:58:45.335242] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:41:06.033 true 00:41:06.033 15:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 050cd599-369f-4443-adcf-9c161491381b 00:41:06.033 15:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:41:06.294 15:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:41:06.294 15:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:41:06.294 15:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6149d3c5-6966-470a-abc5-7860b3050219 00:41:06.555 15:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:06.814 [2024-10-01 15:58:46.011587] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:06.814 15:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty 
-- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:06.814 15:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:41:06.814 15:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3455638 00:41:06.814 15:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:41:06.814 15:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3455638 /var/tmp/bdevperf.sock 00:41:06.814 15:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 3455638 ']' 00:41:06.814 15:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:41:06.814 15:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:06.814 15:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:41:06.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:41:06.814 15:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:06.814 15:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:41:06.814 [2024-10-01 15:58:46.213779] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:41:06.814 [2024-10-01 15:58:46.213831] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3455638 ] 00:41:06.814 [2024-10-01 15:58:46.244115] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:41:07.074 [2024-10-01 15:58:46.289462] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:07.074 [2024-10-01 15:58:46.317978] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:41:07.074 15:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:07.074 15:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:41:07.074 15:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:41:07.334 Nvme0n1 00:41:07.334 15:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:41:07.334 [ 
00:41:07.334 { 00:41:07.334 "name": "Nvme0n1", 00:41:07.334 "aliases": [ 00:41:07.334 "6149d3c5-6966-470a-abc5-7860b3050219" 00:41:07.334 ], 00:41:07.334 "product_name": "NVMe disk", 00:41:07.334 "block_size": 4096, 00:41:07.334 "num_blocks": 38912, 00:41:07.334 "uuid": "6149d3c5-6966-470a-abc5-7860b3050219", 00:41:07.334 "numa_id": 0, 00:41:07.334 "assigned_rate_limits": { 00:41:07.334 "rw_ios_per_sec": 0, 00:41:07.334 "rw_mbytes_per_sec": 0, 00:41:07.334 "r_mbytes_per_sec": 0, 00:41:07.334 "w_mbytes_per_sec": 0 00:41:07.334 }, 00:41:07.334 "claimed": false, 00:41:07.334 "zoned": false, 00:41:07.334 "supported_io_types": { 00:41:07.334 "read": true, 00:41:07.334 "write": true, 00:41:07.334 "unmap": true, 00:41:07.334 "flush": true, 00:41:07.334 "reset": true, 00:41:07.334 "nvme_admin": true, 00:41:07.334 "nvme_io": true, 00:41:07.334 "nvme_io_md": false, 00:41:07.334 "write_zeroes": true, 00:41:07.334 "zcopy": false, 00:41:07.334 "get_zone_info": false, 00:41:07.334 "zone_management": false, 00:41:07.334 "zone_append": false, 00:41:07.334 "compare": true, 00:41:07.334 "compare_and_write": true, 00:41:07.334 "abort": true, 00:41:07.334 "seek_hole": false, 00:41:07.334 "seek_data": false, 00:41:07.334 "copy": true, 00:41:07.334 "nvme_iov_md": false 00:41:07.334 }, 00:41:07.334 "memory_domains": [ 00:41:07.334 { 00:41:07.334 "dma_device_id": "system", 00:41:07.334 "dma_device_type": 1 00:41:07.334 } 00:41:07.334 ], 00:41:07.334 "driver_specific": { 00:41:07.334 "nvme": [ 00:41:07.334 { 00:41:07.334 "trid": { 00:41:07.334 "trtype": "TCP", 00:41:07.334 "adrfam": "IPv4", 00:41:07.334 "traddr": "10.0.0.2", 00:41:07.334 "trsvcid": "4420", 00:41:07.334 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:41:07.334 }, 00:41:07.334 "ctrlr_data": { 00:41:07.334 "cntlid": 1, 00:41:07.334 "vendor_id": "0x8086", 00:41:07.334 "model_number": "SPDK bdev Controller", 00:41:07.334 "serial_number": "SPDK0", 00:41:07.334 "firmware_revision": "25.01", 00:41:07.335 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:41:07.335 "oacs": { 00:41:07.335 "security": 0, 00:41:07.335 "format": 0, 00:41:07.335 "firmware": 0, 00:41:07.335 "ns_manage": 0 00:41:07.335 }, 00:41:07.335 "multi_ctrlr": true, 00:41:07.335 "ana_reporting": false 00:41:07.335 }, 00:41:07.335 "vs": { 00:41:07.335 "nvme_version": "1.3" 00:41:07.335 }, 00:41:07.335 "ns_data": { 00:41:07.335 "id": 1, 00:41:07.335 "can_share": true 00:41:07.335 } 00:41:07.335 } 00:41:07.335 ], 00:41:07.335 "mp_policy": "active_passive" 00:41:07.335 } 00:41:07.335 } 00:41:07.335 ] 00:41:07.594 15:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3455650 00:41:07.594 15:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:41:07.594 15:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:41:07.594 Running I/O for 10 seconds... 
00:41:08.533 Latency(us) 00:41:08.533 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:08.533 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:08.533 Nvme0n1 : 1.00 24381.00 95.24 0.00 0.00 0.00 0.00 0.00 00:41:08.533 =================================================================================================================== 00:41:08.533 Total : 24381.00 95.24 0.00 0.00 0.00 0.00 0.00 00:41:08.533 00:41:09.474 15:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 050cd599-369f-4443-adcf-9c161491381b 00:41:09.474 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:09.474 Nvme0n1 : 2.00 24831.00 97.00 0.00 0.00 0.00 0.00 0.00 00:41:09.474 =================================================================================================================== 00:41:09.474 Total : 24831.00 97.00 0.00 0.00 0.00 0.00 0.00 00:41:09.474 00:41:09.737 true 00:41:09.737 15:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 050cd599-369f-4443-adcf-9c161491381b 00:41:09.737 15:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:41:09.737 15:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:41:09.737 15:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:41:09.737 15:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3455650 00:41:10.680 Job: Nvme0n1 (Core Mask 0x2, 
workload: randwrite, depth: 128, IO size: 4096) 00:41:10.680 Nvme0n1 : 3.00 24986.00 97.60 0.00 0.00 0.00 0.00 0.00 00:41:10.680 =================================================================================================================== 00:41:10.680 Total : 24986.00 97.60 0.00 0.00 0.00 0.00 0.00 00:41:10.680 00:41:11.621 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:11.621 Nvme0n1 : 4.00 25071.00 97.93 0.00 0.00 0.00 0.00 0.00 00:41:11.621 =================================================================================================================== 00:41:11.621 Total : 25071.00 97.93 0.00 0.00 0.00 0.00 0.00 00:41:11.621 00:41:12.563 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:12.563 Nvme0n1 : 5.00 25138.80 98.20 0.00 0.00 0.00 0.00 0.00 00:41:12.563 =================================================================================================================== 00:41:12.563 Total : 25138.80 98.20 0.00 0.00 0.00 0.00 0.00 00:41:12.563 00:41:13.506 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:13.506 Nvme0n1 : 6.00 25183.17 98.37 0.00 0.00 0.00 0.00 0.00 00:41:13.506 =================================================================================================================== 00:41:13.506 Total : 25183.17 98.37 0.00 0.00 0.00 0.00 0.00 00:41:13.506 00:41:14.451 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:14.451 Nvme0n1 : 7.00 25224.43 98.53 0.00 0.00 0.00 0.00 0.00 00:41:14.451 =================================================================================================================== 00:41:14.451 Total : 25224.43 98.53 0.00 0.00 0.00 0.00 0.00 00:41:14.451 00:41:15.834 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:15.834 Nvme0n1 : 8.00 25252.00 98.64 0.00 0.00 0.00 0.00 0.00 00:41:15.834 
=================================================================================================================== 00:41:15.834 Total : 25252.00 98.64 0.00 0.00 0.00 0.00 0.00 00:41:15.834 00:41:16.775 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:16.775 Nvme0n1 : 9.00 25279.44 98.75 0.00 0.00 0.00 0.00 0.00 00:41:16.775 =================================================================================================================== 00:41:16.775 Total : 25279.44 98.75 0.00 0.00 0.00 0.00 0.00 00:41:16.775 00:41:17.719 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:17.720 Nvme0n1 : 10.00 25298.70 98.82 0.00 0.00 0.00 0.00 0.00 00:41:17.720 =================================================================================================================== 00:41:17.720 Total : 25298.70 98.82 0.00 0.00 0.00 0.00 0.00 00:41:17.720 00:41:17.720 00:41:17.720 Latency(us) 00:41:17.720 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:17.720 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:17.720 Nvme0n1 : 10.00 25294.69 98.81 0.00 0.00 5057.20 3741.01 31675.73 00:41:17.720 =================================================================================================================== 00:41:17.720 Total : 25294.69 98.81 0.00 0.00 5057.20 3741.01 31675.73 00:41:17.720 { 00:41:17.720 "results": [ 00:41:17.720 { 00:41:17.720 "job": "Nvme0n1", 00:41:17.720 "core_mask": "0x2", 00:41:17.720 "workload": "randwrite", 00:41:17.720 "status": "finished", 00:41:17.720 "queue_depth": 128, 00:41:17.720 "io_size": 4096, 00:41:17.720 "runtime": 10.004233, 00:41:17.720 "iops": 25294.69275655615, 00:41:17.720 "mibps": 98.80739358029746, 00:41:17.720 "io_failed": 0, 00:41:17.720 "io_timeout": 0, 00:41:17.720 "avg_latency_us": 5057.198896678179, 00:41:17.720 "min_latency_us": 3741.0133333333333, 00:41:17.720 "max_latency_us": 31675.733333333334 00:41:17.720 
} 00:41:17.720 ], 00:41:17.720 "core_count": 1 00:41:17.720 } 00:41:17.720 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3455638 00:41:17.720 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 3455638 ']' 00:41:17.720 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 3455638 00:41:17.720 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:41:17.720 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:17.720 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3455638 00:41:17.720 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:41:17.720 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:41:17.720 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3455638' 00:41:17.720 killing process with pid 3455638 00:41:17.720 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 3455638 00:41:17.720 Received shutdown signal, test time was about 10.000000 seconds 00:41:17.720 00:41:17.720 Latency(us) 00:41:17.720 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:17.720 =================================================================================================================== 00:41:17.720 Total : 0.00 0.00 0.00 0.00 0.00 
0.00 0.00 00:41:17.720 15:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 3455638 00:41:17.720 15:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:17.980 15:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:18.241 15:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 050cd599-369f-4443-adcf-9c161491381b 00:41:18.241 15:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:41:18.241 15:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:41:18.241 15:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:41:18.241 15:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3452184 00:41:18.241 15:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3452184 00:41:18.502 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3452184 Killed "${NVMF_APP[@]}" "$@" 00:41:18.502 15:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:41:18.502 15:58:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:41:18.502 15:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:41:18.502 15:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:18.502 15:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:41:18.502 15:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # nvmfpid=3457669 00:41:18.502 15:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # waitforlisten 3457669 00:41:18.502 15:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:41:18.502 15:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 3457669 ']' 00:41:18.502 15:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:18.502 15:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:18.502 15:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:18.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:41:18.502 15:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:18.502 15:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:41:18.502 [2024-10-01 15:58:57.758504] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:18.502 [2024-10-01 15:58:57.759508] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:41:18.502 [2024-10-01 15:58:57.759553] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:18.502 [2024-10-01 15:58:57.797288] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:41:18.502 [2024-10-01 15:58:57.842896] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:18.502 [2024-10-01 15:58:57.871189] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:18.502 [2024-10-01 15:58:57.871221] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:18.502 [2024-10-01 15:58:57.871226] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:18.502 [2024-10-01 15:58:57.871231] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:18.502 [2024-10-01 15:58:57.871237] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:41:18.502 [2024-10-01 15:58:57.871257] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:41:18.502 [2024-10-01 15:58:57.915736] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:18.502 [2024-10-01 15:58:57.915925] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:19.445 15:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:19.445 15:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:41:19.445 15:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:41:19.445 15:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:19.445 15:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:41:19.445 15:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:19.445 15:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:41:19.445 [2024-10-01 15:58:58.737209] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:41:19.445 [2024-10-01 15:58:58.737442] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:41:19.445 [2024-10-01 15:58:58.737528] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:41:19.445 15:58:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:41:19.445 15:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 6149d3c5-6966-470a-abc5-7860b3050219 00:41:19.445 15:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=6149d3c5-6966-470a-abc5-7860b3050219 00:41:19.445 15:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:41:19.445 15:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:41:19.445 15:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:41:19.445 15:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:41:19.445 15:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:41:19.712 15:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6149d3c5-6966-470a-abc5-7860b3050219 -t 2000 00:41:19.712 [ 00:41:19.712 { 00:41:19.712 "name": "6149d3c5-6966-470a-abc5-7860b3050219", 00:41:19.712 "aliases": [ 00:41:19.712 "lvs/lvol" 00:41:19.712 ], 00:41:19.712 "product_name": "Logical Volume", 00:41:19.712 "block_size": 4096, 00:41:19.712 "num_blocks": 38912, 00:41:19.712 "uuid": "6149d3c5-6966-470a-abc5-7860b3050219", 00:41:19.712 "assigned_rate_limits": { 00:41:19.712 "rw_ios_per_sec": 0, 00:41:19.712 "rw_mbytes_per_sec": 0, 00:41:19.712 
"r_mbytes_per_sec": 0, 00:41:19.712 "w_mbytes_per_sec": 0 00:41:19.712 }, 00:41:19.712 "claimed": false, 00:41:19.712 "zoned": false, 00:41:19.712 "supported_io_types": { 00:41:19.712 "read": true, 00:41:19.712 "write": true, 00:41:19.712 "unmap": true, 00:41:19.712 "flush": false, 00:41:19.712 "reset": true, 00:41:19.712 "nvme_admin": false, 00:41:19.712 "nvme_io": false, 00:41:19.712 "nvme_io_md": false, 00:41:19.712 "write_zeroes": true, 00:41:19.712 "zcopy": false, 00:41:19.712 "get_zone_info": false, 00:41:19.712 "zone_management": false, 00:41:19.712 "zone_append": false, 00:41:19.712 "compare": false, 00:41:19.712 "compare_and_write": false, 00:41:19.712 "abort": false, 00:41:19.712 "seek_hole": true, 00:41:19.712 "seek_data": true, 00:41:19.712 "copy": false, 00:41:19.712 "nvme_iov_md": false 00:41:19.712 }, 00:41:19.712 "driver_specific": { 00:41:19.712 "lvol": { 00:41:19.712 "lvol_store_uuid": "050cd599-369f-4443-adcf-9c161491381b", 00:41:19.712 "base_bdev": "aio_bdev", 00:41:19.712 "thin_provision": false, 00:41:19.712 "num_allocated_clusters": 38, 00:41:19.712 "snapshot": false, 00:41:19.712 "clone": false, 00:41:19.712 "esnap_clone": false 00:41:19.712 } 00:41:19.712 } 00:41:19.712 } 00:41:19.712 ] 00:41:19.712 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:41:19.712 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 050cd599-369f-4443-adcf-9c161491381b 00:41:19.712 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:41:19.973 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:41:19.973 15:58:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 050cd599-369f-4443-adcf-9c161491381b 00:41:19.973 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:41:20.233 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:41:20.233 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:41:20.233 [2024-10-01 15:58:59.595744] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:41:20.233 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 050cd599-369f-4443-adcf-9c161491381b 00:41:20.233 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:41:20.233 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 050cd599-369f-4443-adcf-9c161491381b 00:41:20.233 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:20.233 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:20.233 15:58:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:20.233 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:20.233 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:20.233 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:20.233 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:20.233 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:41:20.233 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 050cd599-369f-4443-adcf-9c161491381b 00:41:20.494 request: 00:41:20.494 { 00:41:20.494 "uuid": "050cd599-369f-4443-adcf-9c161491381b", 00:41:20.494 "method": "bdev_lvol_get_lvstores", 00:41:20.494 "req_id": 1 00:41:20.494 } 00:41:20.494 Got JSON-RPC error response 00:41:20.494 response: 00:41:20.494 { 00:41:20.494 "code": -19, 00:41:20.494 "message": "No such device" 00:41:20.494 } 00:41:20.494 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:41:20.494 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 
00:41:20.494 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:41:20.494 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:41:20.494 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:41:20.755 aio_bdev 00:41:20.755 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6149d3c5-6966-470a-abc5-7860b3050219 00:41:20.755 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=6149d3c5-6966-470a-abc5-7860b3050219 00:41:20.755 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:41:20.755 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:41:20.755 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:41:20.755 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:41:20.755 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:41:20.755 15:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 
6149d3c5-6966-470a-abc5-7860b3050219 -t 2000 00:41:21.016 [ 00:41:21.016 { 00:41:21.016 "name": "6149d3c5-6966-470a-abc5-7860b3050219", 00:41:21.016 "aliases": [ 00:41:21.016 "lvs/lvol" 00:41:21.016 ], 00:41:21.016 "product_name": "Logical Volume", 00:41:21.016 "block_size": 4096, 00:41:21.016 "num_blocks": 38912, 00:41:21.016 "uuid": "6149d3c5-6966-470a-abc5-7860b3050219", 00:41:21.016 "assigned_rate_limits": { 00:41:21.016 "rw_ios_per_sec": 0, 00:41:21.016 "rw_mbytes_per_sec": 0, 00:41:21.016 "r_mbytes_per_sec": 0, 00:41:21.016 "w_mbytes_per_sec": 0 00:41:21.016 }, 00:41:21.016 "claimed": false, 00:41:21.016 "zoned": false, 00:41:21.016 "supported_io_types": { 00:41:21.016 "read": true, 00:41:21.016 "write": true, 00:41:21.016 "unmap": true, 00:41:21.016 "flush": false, 00:41:21.016 "reset": true, 00:41:21.016 "nvme_admin": false, 00:41:21.016 "nvme_io": false, 00:41:21.016 "nvme_io_md": false, 00:41:21.016 "write_zeroes": true, 00:41:21.016 "zcopy": false, 00:41:21.016 "get_zone_info": false, 00:41:21.016 "zone_management": false, 00:41:21.016 "zone_append": false, 00:41:21.016 "compare": false, 00:41:21.016 "compare_and_write": false, 00:41:21.016 "abort": false, 00:41:21.016 "seek_hole": true, 00:41:21.016 "seek_data": true, 00:41:21.016 "copy": false, 00:41:21.016 "nvme_iov_md": false 00:41:21.016 }, 00:41:21.016 "driver_specific": { 00:41:21.016 "lvol": { 00:41:21.016 "lvol_store_uuid": "050cd599-369f-4443-adcf-9c161491381b", 00:41:21.016 "base_bdev": "aio_bdev", 00:41:21.016 "thin_provision": false, 00:41:21.016 "num_allocated_clusters": 38, 00:41:21.016 "snapshot": false, 00:41:21.016 "clone": false, 00:41:21.016 "esnap_clone": false 00:41:21.016 } 00:41:21.016 } 00:41:21.016 } 00:41:21.016 ] 00:41:21.016 15:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:41:21.016 15:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 050cd599-369f-4443-adcf-9c161491381b 00:41:21.016 15:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:41:21.277 15:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:41:21.277 15:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 050cd599-369f-4443-adcf-9c161491381b 00:41:21.277 15:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:41:21.277 15:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:41:21.277 15:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6149d3c5-6966-470a-abc5-7860b3050219 00:41:21.539 15:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 050cd599-369f-4443-adcf-9c161491381b 00:41:21.799 15:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:41:21.799 15:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:21.799 00:41:21.799 real 0m16.764s 00:41:21.799 user 
0m33.315s 00:41:21.799 sys 0m4.238s 00:41:21.799 15:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:21.799 15:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:41:21.799 ************************************ 00:41:21.799 END TEST lvs_grow_dirty 00:41:21.799 ************************************ 00:41:22.061 15:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:41:22.061 15:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:41:22.061 15:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:41:22.061 15:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:41:22.061 15:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:41:22.061 15:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:41:22.061 15:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:41:22.061 15:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:41:22.061 15:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:41:22.061 nvmf_trace.0 00:41:22.061 15:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:41:22.061 15:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:41:22.061 15:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # nvmfcleanup 00:41:22.061 15:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:41:22.061 15:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:22.061 15:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:41:22.061 15:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:22.061 15:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:22.061 rmmod nvme_tcp 00:41:22.061 rmmod nvme_fabrics 00:41:22.061 rmmod nvme_keyring 00:41:22.061 15:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:22.061 15:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:41:22.061 15:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:41:22.061 15:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@513 -- # '[' -n 3457669 ']' 00:41:22.061 15:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@514 -- # killprocess 3457669 00:41:22.061 15:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 3457669 ']' 00:41:22.061 15:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 3457669 00:41:22.061 15:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:41:22.061 15:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:22.061 15:59:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3457669 00:41:22.061 15:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:41:22.061 15:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:41:22.061 15:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3457669' 00:41:22.061 killing process with pid 3457669 00:41:22.061 15:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 3457669 00:41:22.061 15:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 3457669 00:41:22.323 15:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:41:22.323 15:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:41:22.323 15:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:41:22.323 15:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:41:22.323 15:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-save 00:41:22.323 15:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:41:22.323 15:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-restore 00:41:22.323 15:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:22.323 15:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:22.323 15:59:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:22.323 15:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:22.323 15:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:24.240 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:24.240 00:41:24.240 real 0m43.356s 00:41:24.240 user 0m50.979s 00:41:24.240 sys 0m11.852s 00:41:24.240 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:24.240 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:41:24.240 ************************************ 00:41:24.240 END TEST nvmf_lvs_grow 00:41:24.240 ************************************ 00:41:24.502 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:41:24.502 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:41:24.503 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:24.503 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:24.503 ************************************ 00:41:24.503 START TEST nvmf_bdev_io_wait 00:41:24.503 ************************************ 00:41:24.503 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:41:24.503 * Looking 
for test storage... 00:41:24.503 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:24.503 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:41:24.503 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:41:24.503 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:41:24.765 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:41:24.765 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:24.765 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:24.765 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:24.765 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:41:24.765 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:41:24.765 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:41:24.765 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:41:24.765 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:41:24.765 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:41:24.765 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:41:24.765 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:24.765 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:41:24.765 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:41:24.765 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:24.765 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:24.765 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:41:24.765 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:41:24.765 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:24.765 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:41:24.765 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:41:24.765 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:41:24.765 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:41:24.765 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:24.766 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:41:24.766 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:41:24.766 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:24.766 15:59:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:24.766 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:41:24.766 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:24.766 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:41:24.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:24.766 --rc genhtml_branch_coverage=1 00:41:24.766 --rc genhtml_function_coverage=1 00:41:24.766 --rc genhtml_legend=1 00:41:24.766 --rc geninfo_all_blocks=1 00:41:24.766 --rc geninfo_unexecuted_blocks=1 00:41:24.766 00:41:24.766 ' 00:41:24.766 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:41:24.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:24.766 --rc genhtml_branch_coverage=1 00:41:24.766 --rc genhtml_function_coverage=1 00:41:24.766 --rc genhtml_legend=1 00:41:24.766 --rc geninfo_all_blocks=1 00:41:24.766 --rc geninfo_unexecuted_blocks=1 00:41:24.766 00:41:24.766 ' 00:41:24.766 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:41:24.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:24.766 --rc genhtml_branch_coverage=1 00:41:24.766 --rc genhtml_function_coverage=1 00:41:24.766 --rc genhtml_legend=1 00:41:24.766 --rc geninfo_all_blocks=1 00:41:24.766 --rc geninfo_unexecuted_blocks=1 00:41:24.766 00:41:24.766 ' 00:41:24.766 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:41:24.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:24.766 --rc 
genhtml_branch_coverage=1 00:41:24.766 --rc genhtml_function_coverage=1 00:41:24.766 --rc genhtml_legend=1 00:41:24.766 --rc geninfo_all_blocks=1 00:41:24.766 --rc geninfo_unexecuted_blocks=1 00:41:24.766 00:41:24.766 ' 00:41:24.766 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:24.766 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:41:24.766 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:24.766 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:24.766 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:24.766 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:24.766 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:24.766 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:24.766 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:24.766 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:24.766 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:24.766 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:24.766 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:41:24.766 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:41:24.766 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:24.766 15:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:24.766 15:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:24.766 15:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:24.766 15:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:24.766 15:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:41:24.766 15:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:24.766 15:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:24.766 15:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:24.766 15:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:24.766 15:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:24.766 15:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:24.766 15:59:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:41:24.766 15:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:24.766 15:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:41:24.766 15:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:24.766 15:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:24.766 15:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:24.766 15:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:24.766 15:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:24.766 15:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:24.766 15:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:24.766 15:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:24.766 15:59:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:24.766 15:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:24.766 15:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:24.766 15:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:24.766 15:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:41:24.766 15:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:41:24.766 15:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:24.766 15:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # prepare_net_devs 00:41:24.766 15:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@434 -- # local -g is_hw=no 00:41:24.766 15:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # remove_spdk_ns 00:41:24.766 15:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:24.766 15:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:24.766 15:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:24.766 15:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:41:24.766 15:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:41:24.766 15:59:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:41:24.766 15:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:31.728 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:31.728 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:41:31.728 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:31.728 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:31.728 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:31.728 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:31.728 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:31.728 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:41:31.728 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:31.728 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:41:31.728 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:41:31.728 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:41:31.728 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:41:31.728 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:41:31.728 15:59:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:41:31.728 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:31.728 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:31.728 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:31.728 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:41:31.729 Found 0000:31:00.0 (0x8086 - 0x159b) 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:41:31.729 Found 0000:31:00.1 (0x8086 - 0x159b) 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # [[ ice 
== unknown ]] 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ up == up ]] 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net 
devices under 0000:31:00.0: cvl_0_0' 00:41:31.729 Found net devices under 0000:31:00.0: cvl_0_0 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ up == up ]] 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:41:31.729 Found net devices under 0000:31:00.1: cvl_0_1 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # is_hw=yes 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:31.729 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:31.989 15:59:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:31.989 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:31.989 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:31.989 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:31.989 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:31.989 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:31.990 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:31.990 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:31.990 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:32.250 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:32.250 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:41:32.250 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.569 ms 00:41:32.250 00:41:32.250 --- 10.0.0.2 ping statistics --- 00:41:32.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:32.250 rtt min/avg/max/mdev = 0.569/0.569/0.569/0.000 ms 00:41:32.250 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:32.250 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:32.250 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:41:32.250 00:41:32.250 --- 10.0.0.1 ping statistics --- 00:41:32.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:32.250 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:41:32.250 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:32.250 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # return 0 00:41:32.250 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:41:32.250 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:32.250 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:41:32.250 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:41:32.250 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:32.250 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:41:32.250 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:41:32.250 15:59:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:41:32.250 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:41:32.250 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:32.250 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:32.250 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # nvmfpid=3462780 00:41:32.250 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # waitforlisten 3462780 00:41:32.250 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:41:32.250 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 3462780 ']' 00:41:32.250 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:32.250 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:32.250 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:32.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:41:32.250 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:32.250 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:32.250 [2024-10-01 15:59:11.572080] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:32.250 [2024-10-01 15:59:11.573069] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:41:32.250 [2024-10-01 15:59:11.573106] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:32.250 [2024-10-01 15:59:11.609424] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:41:32.250 [2024-10-01 15:59:11.657222] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:32.250 [2024-10-01 15:59:11.690850] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:32.250 [2024-10-01 15:59:11.690884] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:32.250 [2024-10-01 15:59:11.690892] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:32.250 [2024-10-01 15:59:11.690925] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:32.250 [2024-10-01 15:59:11.690931] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:41:32.250 [2024-10-01 15:59:11.691002] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:41:32.250 [2024-10-01 15:59:11.691154] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:41:32.250 [2024-10-01 15:59:11.691304] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:41:32.250 [2024-10-01 15:59:11.691305] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:41:32.250 [2024-10-01 15:59:11.691616] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:33.193 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:33.193 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:41:33.193 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:41:33.193 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:33.193 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:33.193 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:33.193 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:41:33.193 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:33.193 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:33.193 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:33.193 15:59:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:41:33.193 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:33.193 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:33.193 [2024-10-01 15:59:12.479722] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:33.193 [2024-10-01 15:59:12.480108] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:41:33.193 [2024-10-01 15:59:12.480212] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:33.193 [2024-10-01 15:59:12.480416] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:41:33.193 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:33.193 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:33.193 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:33.193 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:33.193 [2024-10-01 15:59:12.492157] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:33.193 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:33.193 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:41:33.193 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:33.193 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:33.193 Malloc0 00:41:33.193 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:33.193 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:41:33.193 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:33.193 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:33.193 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:33.193 15:59:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:33.193 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:33.193 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:33.193 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:33.193 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:33.193 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:33.193 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:33.193 [2024-10-01 15:59:12.580435] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:33.193 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:33.193 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3462833 00:41:33.193 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3462835 00:41:33.193 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:41:33.193 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:41:33.193 15:59:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:41:33.193 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:41:33.193 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:41:33.193 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:41:33.193 { 00:41:33.193 "params": { 00:41:33.193 "name": "Nvme$subsystem", 00:41:33.193 "trtype": "$TEST_TRANSPORT", 00:41:33.193 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:33.193 "adrfam": "ipv4", 00:41:33.194 "trsvcid": "$NVMF_PORT", 00:41:33.194 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:33.194 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:33.194 "hdgst": ${hdgst:-false}, 00:41:33.194 "ddgst": ${ddgst:-false} 00:41:33.194 }, 00:41:33.194 "method": "bdev_nvme_attach_controller" 00:41:33.194 } 00:41:33.194 EOF 00:41:33.194 )") 00:41:33.194 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3462838 00:41:33.194 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:41:33.194 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:41:33.194 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:41:33.194 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:41:33.194 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:41:33.194 15:59:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:41:33.194 { 00:41:33.194 "params": { 00:41:33.194 "name": "Nvme$subsystem", 00:41:33.194 "trtype": "$TEST_TRANSPORT", 00:41:33.194 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:33.194 "adrfam": "ipv4", 00:41:33.194 "trsvcid": "$NVMF_PORT", 00:41:33.194 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:33.194 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:33.194 "hdgst": ${hdgst:-false}, 00:41:33.194 "ddgst": ${ddgst:-false} 00:41:33.194 }, 00:41:33.194 "method": "bdev_nvme_attach_controller" 00:41:33.194 } 00:41:33.194 EOF 00:41:33.194 )") 00:41:33.194 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3462841 00:41:33.194 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:41:33.194 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:41:33.194 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:41:33.194 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:41:33.194 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:41:33.194 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:41:33.194 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:41:33.194 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:41:33.194 { 00:41:33.194 "params": { 00:41:33.194 "name": 
"Nvme$subsystem", 00:41:33.194 "trtype": "$TEST_TRANSPORT", 00:41:33.194 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:33.194 "adrfam": "ipv4", 00:41:33.194 "trsvcid": "$NVMF_PORT", 00:41:33.194 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:33.194 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:33.194 "hdgst": ${hdgst:-false}, 00:41:33.194 "ddgst": ${ddgst:-false} 00:41:33.194 }, 00:41:33.194 "method": "bdev_nvme_attach_controller" 00:41:33.194 } 00:41:33.194 EOF 00:41:33.194 )") 00:41:33.194 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:41:33.194 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:41:33.194 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:41:33.194 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:41:33.194 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:41:33.194 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:41:33.194 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:41:33.194 { 00:41:33.194 "params": { 00:41:33.194 "name": "Nvme$subsystem", 00:41:33.194 "trtype": "$TEST_TRANSPORT", 00:41:33.194 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:33.194 "adrfam": "ipv4", 00:41:33.194 "trsvcid": "$NVMF_PORT", 00:41:33.194 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:33.194 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:33.194 "hdgst": ${hdgst:-false}, 00:41:33.194 "ddgst": ${ddgst:-false} 00:41:33.194 }, 00:41:33.194 "method": 
"bdev_nvme_attach_controller" 00:41:33.194 } 00:41:33.194 EOF 00:41:33.194 )") 00:41:33.194 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:41:33.194 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3462833 00:41:33.194 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:41:33.194 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:41:33.194 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:41:33.194 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:41:33.194 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:41:33.194 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:41:33.194 "params": { 00:41:33.194 "name": "Nvme1", 00:41:33.194 "trtype": "tcp", 00:41:33.194 "traddr": "10.0.0.2", 00:41:33.194 "adrfam": "ipv4", 00:41:33.194 "trsvcid": "4420", 00:41:33.194 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:33.194 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:33.194 "hdgst": false, 00:41:33.194 "ddgst": false 00:41:33.194 }, 00:41:33.194 "method": "bdev_nvme_attach_controller" 00:41:33.194 }' 00:41:33.194 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 
00:41:33.194 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:41:33.194 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:41:33.194 "params": { 00:41:33.194 "name": "Nvme1", 00:41:33.194 "trtype": "tcp", 00:41:33.194 "traddr": "10.0.0.2", 00:41:33.194 "adrfam": "ipv4", 00:41:33.194 "trsvcid": "4420", 00:41:33.194 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:33.194 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:33.194 "hdgst": false, 00:41:33.194 "ddgst": false 00:41:33.194 }, 00:41:33.194 "method": "bdev_nvme_attach_controller" 00:41:33.194 }' 00:41:33.194 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:41:33.194 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:41:33.194 "params": { 00:41:33.194 "name": "Nvme1", 00:41:33.194 "trtype": "tcp", 00:41:33.194 "traddr": "10.0.0.2", 00:41:33.194 "adrfam": "ipv4", 00:41:33.194 "trsvcid": "4420", 00:41:33.194 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:33.194 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:33.194 "hdgst": false, 00:41:33.194 "ddgst": false 00:41:33.194 }, 00:41:33.194 "method": "bdev_nvme_attach_controller" 00:41:33.194 }' 00:41:33.194 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:41:33.194 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:41:33.194 "params": { 00:41:33.194 "name": "Nvme1", 00:41:33.194 "trtype": "tcp", 00:41:33.194 "traddr": "10.0.0.2", 00:41:33.194 "adrfam": "ipv4", 00:41:33.194 "trsvcid": "4420", 00:41:33.194 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:33.194 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:33.194 "hdgst": false, 00:41:33.194 "ddgst": false 00:41:33.194 }, 00:41:33.194 "method": "bdev_nvme_attach_controller" 
00:41:33.194 }' 00:41:33.194 [2024-10-01 15:59:12.638744] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:41:33.194 [2024-10-01 15:59:12.638816] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:41:33.194 [2024-10-01 15:59:12.640116] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:41:33.194 [2024-10-01 15:59:12.640116] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:41:33.194 [2024-10-01 15:59:12.640192] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:41:33.194 [2024-10-01 15:59:12.640193] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:41:33.194 [2024-10-01 15:59:12.642577] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:41:33.194 [2024-10-01 15:59:12.642636] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:41:33.456 [2024-10-01 15:59:12.782692] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:41:33.456 [2024-10-01 15:59:12.832591] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:33.456 [2024-10-01 15:59:12.847289] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:41:33.456 [2024-10-01 15:59:12.859562] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:41:33.717 [2024-10-01 15:59:12.916085] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:41:33.717 [2024-10-01 15:59:12.920115] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:33.717 [2024-10-01 15:59:12.947397] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:41:33.717 [2024-10-01 15:59:12.966801] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:33.717 [2024-10-01 15:59:12.994530] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:41:33.717 [2024-10-01 15:59:13.006478] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:41:33.717 [2024-10-01 15:59:13.057601] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:33.717 [2024-10-01 15:59:13.088824] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:41:33.977 Running I/O for 1 seconds... 00:41:33.977 Running I/O for 1 seconds... 00:41:33.977 Running I/O for 1 seconds... 00:41:34.237 Running I/O for 1 seconds... 
00:41:34.808 188808.00 IOPS, 737.53 MiB/s 00:41:34.808 Latency(us) 00:41:34.808 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:34.808 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:41:34.808 Nvme1n1 : 1.00 188433.48 736.07 0.00 0.00 675.67 308.91 1979.73 00:41:34.808 =================================================================================================================== 00:41:34.808 Total : 188433.48 736.07 0.00 0.00 675.67 308.91 1979.73 00:41:35.069 13954.00 IOPS, 54.51 MiB/s 00:41:35.069 Latency(us) 00:41:35.069 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:35.069 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:41:35.069 Nvme1n1 : 1.01 14015.88 54.75 0.00 0.00 9103.39 3140.27 11250.35 00:41:35.069 =================================================================================================================== 00:41:35.069 Total : 14015.88 54.75 0.00 0.00 9103.39 3140.27 11250.35 00:41:35.069 11276.00 IOPS, 44.05 MiB/s 00:41:35.069 Latency(us) 00:41:35.069 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:35.069 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:41:35.069 Nvme1n1 : 1.01 11327.14 44.25 0.00 0.00 11259.93 4887.89 15073.28 00:41:35.069 =================================================================================================================== 00:41:35.069 Total : 11327.14 44.25 0.00 0.00 11259.93 4887.89 15073.28 00:41:35.069 11389.00 IOPS, 44.49 MiB/s 00:41:35.069 Latency(us) 00:41:35.069 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:35.069 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:41:35.069 Nvme1n1 : 1.01 11465.78 44.79 0.00 0.00 11126.90 2116.27 18131.63 00:41:35.069 =================================================================================================================== 00:41:35.069 
Total : 11465.78 44.79 0.00 0.00 11126.90 2116.27 18131.63 00:41:35.331 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3462835 00:41:35.331 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3462838 00:41:35.331 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3462841 00:41:35.331 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:35.331 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:35.331 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:35.331 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:35.332 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:41:35.332 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:41:35.332 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # nvmfcleanup 00:41:35.332 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:41:35.332 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:35.332 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:41:35.332 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:35.332 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r 
nvme-tcp 00:41:35.332 rmmod nvme_tcp 00:41:35.332 rmmod nvme_fabrics 00:41:35.332 rmmod nvme_keyring 00:41:35.332 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:35.332 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:41:35.332 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:41:35.332 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@513 -- # '[' -n 3462780 ']' 00:41:35.332 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # killprocess 3462780 00:41:35.332 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 3462780 ']' 00:41:35.332 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 3462780 00:41:35.332 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:41:35.332 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:35.332 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3462780 00:41:35.332 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:41:35.332 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:41:35.332 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3462780' 00:41:35.332 killing process with pid 3462780 00:41:35.332 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@969 -- # kill 3462780 00:41:35.332 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 3462780 00:41:35.594 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:41:35.594 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:41:35.594 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:41:35.594 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:41:35.594 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-save 00:41:35.594 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:41:35.594 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-restore 00:41:35.594 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:35.594 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:35.594 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:35.594 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:35.594 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:38.142 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:38.142 00:41:38.142 real 0m13.238s 00:41:38.142 user 0m16.065s 00:41:38.142 sys 0m7.967s 
00:41:38.142 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:38.142 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:38.142 ************************************ 00:41:38.142 END TEST nvmf_bdev_io_wait 00:41:38.142 ************************************ 00:41:38.142 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:41:38.142 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:41:38.142 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:38.142 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:38.142 ************************************ 00:41:38.142 START TEST nvmf_queue_depth 00:41:38.142 ************************************ 00:41:38.142 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:41:38.142 * Looking for test storage... 
00:41:38.142 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:38.142 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:41:38.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:38.143 --rc genhtml_branch_coverage=1 00:41:38.143 --rc genhtml_function_coverage=1 00:41:38.143 --rc genhtml_legend=1 00:41:38.143 --rc geninfo_all_blocks=1 00:41:38.143 --rc geninfo_unexecuted_blocks=1 00:41:38.143 00:41:38.143 ' 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:41:38.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:38.143 --rc genhtml_branch_coverage=1 00:41:38.143 --rc genhtml_function_coverage=1 00:41:38.143 --rc genhtml_legend=1 00:41:38.143 --rc geninfo_all_blocks=1 00:41:38.143 --rc geninfo_unexecuted_blocks=1 00:41:38.143 00:41:38.143 ' 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:41:38.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:38.143 --rc genhtml_branch_coverage=1 00:41:38.143 --rc genhtml_function_coverage=1 00:41:38.143 --rc genhtml_legend=1 00:41:38.143 --rc geninfo_all_blocks=1 00:41:38.143 --rc geninfo_unexecuted_blocks=1 00:41:38.143 00:41:38.143 ' 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:41:38.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:38.143 --rc genhtml_branch_coverage=1 00:41:38.143 --rc genhtml_function_coverage=1 00:41:38.143 --rc genhtml_legend=1 00:41:38.143 --rc 
geninfo_all_blocks=1 00:41:38.143 --rc geninfo_unexecuted_blocks=1 00:41:38.143 00:41:38.143 ' 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:38.143 15:59:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:38.143 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:38.144 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:38.144 15:59:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:38.144 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:38.144 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:41:38.144 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:41:38.144 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:41:38.144 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:41:38.144 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:41:38.144 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:38.144 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@472 -- # prepare_net_devs 00:41:38.144 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@434 -- # local -g is_hw=no 00:41:38.144 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@436 -- # remove_spdk_ns 00:41:38.144 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:38.144 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:38.144 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:38.144 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:41:38.144 15:59:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:41:38.144 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:41:38.144 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:46.282 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:46.282 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:41:46.282 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:46.282 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:46.282 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:46.282 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:46.282 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:46.282 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:41:46.282 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:46.282 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:41:46.282 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:41:46.282 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:41:46.282 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:41:46.282 
15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:41:46.282 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:41:46.282 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:46.282 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:46.282 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:46.282 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
pci_devs+=("${e810[@]}") 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:41:46.283 Found 0000:31:00.0 (0x8086 - 0x159b) 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:41:46.283 Found 0000:31:00.1 (0x8086 - 0x159b) 00:41:46.283 15:59:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ up == up ]] 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:46.283 15:59:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:41:46.283 Found net devices under 0000:31:00.0: cvl_0_0 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ up == up ]] 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:41:46.283 Found net devices under 0000:31:00.1: cvl_0_1 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # is_hw=yes 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ yes == 
yes ]] 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:46.283 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:46.284 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:46.284 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:46.284 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:46.284 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:46.284 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:46.284 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:41:46.284 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.601 ms 00:41:46.284 00:41:46.284 --- 10.0.0.2 ping statistics --- 00:41:46.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:46.284 rtt min/avg/max/mdev = 0.601/0.601/0.601/0.000 ms 00:41:46.284 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:46.284 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:46.284 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.254 ms 00:41:46.284 00:41:46.284 --- 10.0.0.1 ping statistics --- 00:41:46.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:46.284 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:41:46.284 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:46.284 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # return 0 00:41:46.284 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:41:46.284 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:46.284 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:41:46.284 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:41:46.284 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:46.284 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:41:46.284 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:41:46.284 15:59:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:41:46.284 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:41:46.284 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:46.284 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:46.284 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@505 -- # nvmfpid=3467575 00:41:46.284 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@506 -- # waitforlisten 3467575 00:41:46.284 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:41:46.284 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 3467575 ']' 00:41:46.284 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:46.284 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:46.284 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:46.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:41:46.284 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:46.284 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:46.284 [2024-10-01 15:59:24.770948] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:46.284 [2024-10-01 15:59:24.771924] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:41:46.284 [2024-10-01 15:59:24.771961] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:46.284 [2024-10-01 15:59:24.810340] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:41:46.284 [2024-10-01 15:59:24.860068] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:46.284 [2024-10-01 15:59:24.890825] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:46.284 [2024-10-01 15:59:24.890861] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:46.284 [2024-10-01 15:59:24.890872] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:46.284 [2024-10-01 15:59:24.890881] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:46.284 [2024-10-01 15:59:24.890888] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:41:46.284 [2024-10-01 15:59:24.890923] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:41:46.284 [2024-10-01 15:59:24.938566] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:46.284 [2024-10-01 15:59:24.938845] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:46.284 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:46.284 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:41:46.284 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:41:46.284 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:46.284 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:46.284 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:46.284 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:46.284 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:46.284 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:46.284 [2024-10-01 15:59:25.619759] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:46.284 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:46.284 15:59:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:41:46.284 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:46.284 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:46.284 Malloc0 00:41:46.284 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:46.284 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:41:46.284 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:46.284 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:46.284 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:46.284 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:46.284 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:46.284 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:46.284 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:46.284 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:46.284 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:41:46.284 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:46.284 [2024-10-01 15:59:25.707808] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:46.284 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:46.284 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3467701 00:41:46.284 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:41:46.284 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:41:46.285 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3467701 /var/tmp/bdevperf.sock 00:41:46.285 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 3467701 ']' 00:41:46.285 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:41:46.285 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:46.285 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:41:46.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:41:46.285 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:46.285 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:46.546 [2024-10-01 15:59:25.765034] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:41:46.546 [2024-10-01 15:59:25.765100] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3467701 ] 00:41:46.546 [2024-10-01 15:59:25.799761] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:41:46.546 [2024-10-01 15:59:25.847696] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:46.546 [2024-10-01 15:59:25.895359] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:41:47.118 15:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:47.118 15:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:41:47.118 15:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:41:47.118 15:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:47.118 15:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:47.380 NVMe0n1 00:41:47.380 15:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:41:47.380 15:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:41:47.642 Running I/O for 10 seconds... 00:41:57.515 9216.00 IOPS, 36.00 MiB/s 9216.00 IOPS, 36.00 MiB/s 9602.33 IOPS, 37.51 MiB/s 10753.00 IOPS, 42.00 MiB/s 11351.40 IOPS, 44.34 MiB/s 11755.33 IOPS, 45.92 MiB/s 12047.14 IOPS, 47.06 MiB/s 12292.50 IOPS, 48.02 MiB/s 12518.78 IOPS, 48.90 MiB/s 12654.60 IOPS, 49.43 MiB/s 00:41:57.515 Latency(us) 00:41:57.515 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:57.515 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:41:57.515 Verification LBA range: start 0x0 length 0x4000 00:41:57.515 NVMe0n1 : 10.05 12688.26 49.56 0.00 0.00 80396.37 14854.83 66409.81 00:41:57.515 =================================================================================================================== 00:41:57.515 Total : 12688.26 49.56 0.00 0.00 80396.37 14854.83 66409.81 00:41:57.515 { 00:41:57.515 "results": [ 00:41:57.515 { 00:41:57.515 "job": "NVMe0n1", 00:41:57.515 "core_mask": "0x1", 00:41:57.515 "workload": "verify", 00:41:57.515 "status": "finished", 00:41:57.515 "verify_range": { 00:41:57.515 "start": 0, 00:41:57.515 "length": 16384 00:41:57.515 }, 00:41:57.515 "queue_depth": 1024, 00:41:57.515 "io_size": 4096, 00:41:57.515 "runtime": 10.051493, 00:41:57.515 "iops": 12688.264320534272, 00:41:57.515 "mibps": 49.563532502087, 00:41:57.515 "io_failed": 0, 00:41:57.515 "io_timeout": 0, 00:41:57.515 "avg_latency_us": 80396.37167398486, 00:41:57.515 "min_latency_us": 14854.826666666666, 00:41:57.515 "max_latency_us": 66409.81333333334 00:41:57.515 } 00:41:57.515 ], 00:41:57.515 "core_count": 1 00:41:57.515 } 00:41:57.515 15:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3467701 
00:41:57.515 15:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 3467701 ']' 00:41:57.515 15:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 3467701 00:41:57.515 15:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:41:57.515 15:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:57.515 15:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3467701 00:41:57.776 15:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:41:57.776 15:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:41:57.776 15:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3467701' 00:41:57.776 killing process with pid 3467701 00:41:57.776 15:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 3467701 00:41:57.776 Received shutdown signal, test time was about 10.000000 seconds 00:41:57.776 00:41:57.776 Latency(us) 00:41:57.776 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:57.776 =================================================================================================================== 00:41:57.776 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:41:57.776 15:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 3467701 00:41:57.776 15:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:41:57.776 15:59:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:41:57.776 15:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # nvmfcleanup 00:41:57.776 15:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:41:57.776 15:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:57.776 15:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:41:57.776 15:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:57.776 15:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:57.776 rmmod nvme_tcp 00:41:57.776 rmmod nvme_fabrics 00:41:57.776 rmmod nvme_keyring 00:41:57.776 15:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:57.776 15:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:41:57.776 15:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:41:57.776 15:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@513 -- # '[' -n 3467575 ']' 00:41:57.776 15:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@514 -- # killprocess 3467575 00:41:57.776 15:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 3467575 ']' 00:41:57.776 15:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 3467575 00:41:57.776 15:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:41:57.776 15:59:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:57.776 15:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3467575 00:41:58.036 15:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:41:58.036 15:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:41:58.036 15:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3467575' 00:41:58.036 killing process with pid 3467575 00:41:58.036 15:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 3467575 00:41:58.036 15:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 3467575 00:41:58.036 15:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:41:58.036 15:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:41:58.036 15:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:41:58.036 15:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:41:58.037 15:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-save 00:41:58.037 15:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:41:58.037 15:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-restore 00:41:58.037 15:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:58.037 15:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:58.037 15:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:58.037 15:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:58.037 15:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:00.581 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:00.581 00:42:00.581 real 0m22.382s 00:42:00.581 user 0m24.675s 00:42:00.581 sys 0m7.282s 00:42:00.581 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:00.581 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:00.581 ************************************ 00:42:00.581 END TEST nvmf_queue_depth 00:42:00.581 ************************************ 00:42:00.581 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:42:00.581 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:42:00.581 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:00.581 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:00.581 ************************************ 00:42:00.581 START TEST nvmf_target_multipath 00:42:00.581 ************************************ 00:42:00.581 15:59:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:42:00.581 * Looking for test storage... 00:42:00.581 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:00.581 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:42:00.581 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:42:00.581 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:42:00.581 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:42:00.581 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:00.581 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:00.581 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:00.581 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:42:00.581 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:42:00.581 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:42:00.581 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:42:00.581 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:42:00.581 15:59:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:42:00.581 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:42:00.581 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:00.581 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:42:00.581 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:42:00.581 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:00.581 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:00.581 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:42:00.581 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:42:00.581 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:00.581 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:42:00.581 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:42:00.581 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:42:00.581 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:42:00.581 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:00.581 15:59:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:42:00.581 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:42:00.581 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:00.581 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:00.581 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:42:00.581 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:00.581 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:42:00.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:00.581 --rc genhtml_branch_coverage=1 00:42:00.581 --rc genhtml_function_coverage=1 00:42:00.581 --rc genhtml_legend=1 00:42:00.581 --rc geninfo_all_blocks=1 00:42:00.581 --rc geninfo_unexecuted_blocks=1 00:42:00.581 00:42:00.581 ' 00:42:00.581 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:42:00.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:00.581 --rc genhtml_branch_coverage=1 00:42:00.581 --rc genhtml_function_coverage=1 00:42:00.581 --rc genhtml_legend=1 00:42:00.581 --rc geninfo_all_blocks=1 00:42:00.581 --rc geninfo_unexecuted_blocks=1 00:42:00.581 00:42:00.581 ' 00:42:00.581 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:42:00.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:00.581 --rc genhtml_branch_coverage=1 00:42:00.581 --rc 
genhtml_function_coverage=1 00:42:00.581 --rc genhtml_legend=1 00:42:00.581 --rc geninfo_all_blocks=1 00:42:00.581 --rc geninfo_unexecuted_blocks=1 00:42:00.581 00:42:00.581 ' 00:42:00.581 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:42:00.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:00.581 --rc genhtml_branch_coverage=1 00:42:00.581 --rc genhtml_function_coverage=1 00:42:00.581 --rc genhtml_legend=1 00:42:00.581 --rc geninfo_all_blocks=1 00:42:00.581 --rc geninfo_unexecuted_blocks=1 00:42:00.581 00:42:00.581 ' 00:42:00.581 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:00.581 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:42:00.581 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:00.581 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:00.581 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:00.581 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:00.581 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:00.581 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:00.581 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:00.581 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 
00:42:00.581 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:00.581 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:00.582 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:00.582 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:00.582 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:00.582 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:00.582 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:00.582 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:00.582 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:00.582 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:42:00.582 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:00.582 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:00.582 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:42:00.582 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:00.582 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:00.582 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:00.582 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:42:00.582 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:00.582 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:42:00.582 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:00.582 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:00.582 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:00.582 15:59:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:00.582 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:00.582 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:00.582 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:00.582 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:00.582 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:00.582 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:00.582 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:42:00.582 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:42:00.582 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:42:00.582 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:42:00.582 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:42:00.582 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:42:00.582 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:00.582 15:59:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@472 -- # prepare_net_devs 00:42:00.582 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@434 -- # local -g is_hw=no 00:42:00.582 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@436 -- # remove_spdk_ns 00:42:00.582 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:00.582 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:00.582 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:00.582 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:42:00.582 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:42:00.582 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:42:00.582 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local 
-a pci_net_devs 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:42:08.723 Found 0000:31:00.0 (0x8086 - 0x159b) 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:42:08.723 Found 0000:31:00.1 (0x8086 - 0x159b) 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:42:08.723 15:59:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ up == up ]] 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:42:08.723 Found net devices under 0000:31:00.0: cvl_0_0 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@412 
-- # [[ tcp == tcp ]] 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ up == up ]] 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:42:08.723 Found net devices under 0000:31:00.1: cvl_0_1 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # is_hw=yes 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:42:08.723 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:42:08.724 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:08.724 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:08.724 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:42:08.724 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:08.724 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:08.724 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:08.724 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:08.724 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:08.724 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:08.724 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:08.724 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:08.724 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:08.724 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:08.724 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:08.724 15:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:08.724 15:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:08.724 15:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:08.724 15:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:08.724 15:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:08.724 15:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:08.724 15:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:08.724 15:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:08.724 15:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:08.724 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:08.724 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.596 ms 00:42:08.724 00:42:08.724 --- 10.0.0.2 ping statistics --- 00:42:08.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:08.724 rtt min/avg/max/mdev = 0.596/0.596/0.596/0.000 ms 00:42:08.724 15:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:08.724 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:08.724 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:42:08.724 00:42:08.724 --- 10.0.0.1 ping statistics --- 00:42:08.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:08.724 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:42:08.724 15:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:08.724 15:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # return 0 00:42:08.724 15:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:42:08.724 15:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:08.724 15:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:42:08.724 15:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:42:08.724 15:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:08.724 15:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:42:08.724 15:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:42:08.724 15:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:42:08.724 15:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:42:08.724 only one NIC for nvmf test 00:42:08.724 15:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:42:08.724 15:59:47 
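The setup sequence logged above (flush both NICs, create a network namespace, move the target interface into it, assign 10.0.0.1/24 and 10.0.0.2/24, bring the links up, open TCP port 4420 via iptables, then ping in both directions) is what `nvmf/common.sh` does to isolate target and initiator on one host. A minimal sketch of the same steps, using the interface names from this run (`cvl_0_0`, `cvl_0_1`); it is shown in dry-run form (commands are printed, not executed) because the real sequence requires root and physical NICs:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns setup performed by nvmf/common.sh in the log above.
# Interface names and addresses are taken from this run; run() only echoes each
# command, so the sketch is safe to execute anywhere. To apply for real (as
# root), change run() to: run() { "$@"; }
set -euo pipefail

TARGET_IF=cvl_0_0        # moved into the namespace, gets 10.0.0.2
INITIATOR_IF=cvl_0_1     # stays in the root namespace, gets 10.0.0.1
NS=cvl_0_0_ns_spdk

run() { echo "+ $*"; }

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                      # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator
```

Note the teardown later in the log restores the firewall with `iptables-save | grep -v SPDK_NVMF | iptables-restore`: because each rule was inserted with a `SPDK_NVMF` comment, filtering on that comment removes exactly the rules the test added.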
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:42:08.724 15:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:42:08.724 15:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:08.724 15:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:42:08.724 15:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:08.724 15:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:08.724 rmmod nvme_tcp 00:42:08.724 rmmod nvme_fabrics 00:42:08.724 rmmod nvme_keyring 00:42:08.724 15:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:08.724 15:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:42:08.724 15:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:42:08.724 15:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:42:08.724 15:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:42:08.724 15:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:42:08.724 15:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:42:08.724 15:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:42:08.724 15:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:42:08.724 15:59:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:42:08.724 15:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:42:08.724 15:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:08.724 15:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:08.724 15:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:08.724 15:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:08.724 15:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:10.109 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:10.109 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:42:10.109 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:42:10.109 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:42:10.109 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:42:10.109 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:10.109 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:42:10.109 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:42:10.109 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:10.109 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:10.109 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:42:10.109 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:42:10.109 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:42:10.109 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:42:10.109 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:42:10.109 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:42:10.109 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:42:10.109 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:42:10.109 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:42:10.109 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:42:10.109 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:10.109 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:10.109 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:10.109 
15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:10.109 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:10.109 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:10.109 00:42:10.109 real 0m9.841s 00:42:10.109 user 0m2.148s 00:42:10.109 sys 0m5.621s 00:42:10.109 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:10.109 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:42:10.109 ************************************ 00:42:10.109 END TEST nvmf_target_multipath 00:42:10.109 ************************************ 00:42:10.109 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:42:10.109 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:42:10.109 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:10.109 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:10.109 ************************************ 00:42:10.109 START TEST nvmf_zcopy 00:42:10.109 ************************************ 00:42:10.109 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:42:10.370 * Looking for test storage... 
00:42:10.370 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:10.370 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:42:10.370 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:42:10.370 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:42:10.370 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:42:10.370 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:10.370 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:10.370 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:10.370 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:42:10.370 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:42:10.370 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:42:10.370 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:42:10.370 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:42:10.370 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:42:10.370 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:42:10.370 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:10.370 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:42:10.370 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:42:10.370 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:10.370 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:10.370 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:42:10.370 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:42:10.370 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:10.370 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:42:10.370 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:42:10.370 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:42:10.370 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:42:10.370 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:10.370 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:42:10.370 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:42:10.370 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:10.370 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:10.370 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:42:10.370 15:59:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:10.370 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:42:10.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:10.370 --rc genhtml_branch_coverage=1 00:42:10.370 --rc genhtml_function_coverage=1 00:42:10.370 --rc genhtml_legend=1 00:42:10.370 --rc geninfo_all_blocks=1 00:42:10.370 --rc geninfo_unexecuted_blocks=1 00:42:10.370 00:42:10.370 ' 00:42:10.370 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:42:10.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:10.370 --rc genhtml_branch_coverage=1 00:42:10.370 --rc genhtml_function_coverage=1 00:42:10.370 --rc genhtml_legend=1 00:42:10.370 --rc geninfo_all_blocks=1 00:42:10.370 --rc geninfo_unexecuted_blocks=1 00:42:10.370 00:42:10.370 ' 00:42:10.370 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:42:10.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:10.370 --rc genhtml_branch_coverage=1 00:42:10.370 --rc genhtml_function_coverage=1 00:42:10.370 --rc genhtml_legend=1 00:42:10.370 --rc geninfo_all_blocks=1 00:42:10.370 --rc geninfo_unexecuted_blocks=1 00:42:10.370 00:42:10.370 ' 00:42:10.370 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:42:10.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:10.370 --rc genhtml_branch_coverage=1 00:42:10.370 --rc genhtml_function_coverage=1 00:42:10.370 --rc genhtml_legend=1 00:42:10.371 --rc geninfo_all_blocks=1 00:42:10.371 --rc geninfo_unexecuted_blocks=1 00:42:10.371 00:42:10.371 ' 00:42:10.371 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:10.371 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:42:10.371 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:10.371 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:10.371 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:10.371 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:10.371 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:10.371 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:10.371 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:10.371 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:10.371 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:10.371 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:10.371 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:10.371 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:10.371 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:10.371 15:59:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:10.371 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:10.371 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:10.371 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:10.371 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:42:10.371 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:10.371 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:10.371 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:10.371 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:10.371 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:10.371 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:10.371 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:42:10.371 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:10.371 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:42:10.371 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:10.371 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:10.371 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:10.371 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:10.371 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:10.371 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:10.371 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:10.371 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:10.371 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:10.371 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:10.371 15:59:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:42:10.371 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:42:10.371 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:10.371 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@472 -- # prepare_net_devs 00:42:10.371 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@434 -- # local -g is_hw=no 00:42:10.371 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@436 -- # remove_spdk_ns 00:42:10.371 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:10.371 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:10.371 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:10.371 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:42:10.371 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:42:10.371 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:42:10.371 15:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:18.507 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:18.507 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:42:18.507 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:18.507 
15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:18.507 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:18.507 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:18.507 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:18.507 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:42:18.507 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:18.508 15:59:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 
00:42:18.508 Found 0000:31:00.0 (0x8086 - 0x159b) 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:42:18.508 Found 0000:31:00.1 (0x8086 - 0x159b) 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:42:18.508 15:59:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ up == up ]] 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:42:18.508 Found net devices under 0000:31:00.0: cvl_0_0 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ up == up ]] 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # (( 1 == 0 )) 
00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:42:18.508 Found net devices under 0000:31:00.1: cvl_0_1 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # is_hw=yes 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:18.508 15:59:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:18.508 15:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:18.508 15:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:18.508 15:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:18.508 15:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:18.508 15:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:18.508 15:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:18.508 15:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:18.508 15:59:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:18.508 15:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:18.508 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:18.508 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:42:18.508 00:42:18.508 --- 10.0.0.2 ping statistics --- 00:42:18.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:18.508 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:42:18.508 15:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:18.508 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:18.508 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:42:18.508 00:42:18.508 --- 10.0.0.1 ping statistics --- 00:42:18.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:18.508 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:42:18.508 15:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:18.508 15:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # return 0 00:42:18.508 15:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:42:18.508 15:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:18.508 15:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:42:18.508 15:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:42:18.508 15:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@489 
-- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:18.508 15:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:42:18.508 15:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:42:18.508 15:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:42:18.508 15:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:42:18.508 15:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:18.508 15:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:18.508 15:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@505 -- # nvmfpid=3478315 00:42:18.508 15:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@506 -- # waitforlisten 3478315 00:42:18.508 15:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:42:18.508 15:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 3478315 ']' 00:42:18.508 15:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:18.508 15:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:18.508 15:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:18.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:42:18.508 15:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:18.508 15:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:18.508 [2024-10-01 15:59:57.334347] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:18.508 [2024-10-01 15:59:57.335485] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:42:18.508 [2024-10-01 15:59:57.335540] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:18.508 [2024-10-01 15:59:57.376637] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:42:18.508 [2024-10-01 15:59:57.425144] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:18.508 [2024-10-01 15:59:57.471918] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:18.508 [2024-10-01 15:59:57.471969] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:18.508 [2024-10-01 15:59:57.471981] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:18.508 [2024-10-01 15:59:57.471990] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:18.508 [2024-10-01 15:59:57.471999] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:42:18.508 [2024-10-01 15:59:57.472030] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:42:18.508 [2024-10-01 15:59:57.532875] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:18.508 [2024-10-01 15:59:57.533211] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:42:18.769 15:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:18.769 15:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:42:18.769 15:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:42:18.769 15:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:18.769 15:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:18.769 15:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:18.769 15:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:42:18.769 15:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:42:18.769 15:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:18.769 15:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:18.769 [2024-10-01 15:59:58.204951] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:18.769 15:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:18.769 
15:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:42:18.769 15:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:18.769 15:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:19.029 15:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:19.029 15:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:19.029 15:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:19.029 15:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:19.029 [2024-10-01 15:59:58.233254] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:19.029 15:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:19.029 15:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:42:19.029 15:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:19.029 15:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:19.029 15:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:19.029 15:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:42:19.029 15:59:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:19.029 15:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:19.029 malloc0 00:42:19.029 15:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:19.029 15:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:42:19.029 15:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:19.029 15:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:19.029 15:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:19.029 15:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:42:19.029 15:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:42:19.029 15:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:42:19.029 15:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:42:19.029 15:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:42:19.029 15:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:42:19.029 { 00:42:19.029 "params": { 00:42:19.029 "name": "Nvme$subsystem", 00:42:19.029 "trtype": "$TEST_TRANSPORT", 00:42:19.029 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:19.029 "adrfam": "ipv4", 00:42:19.029 "trsvcid": "$NVMF_PORT", 00:42:19.029 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:19.029 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:19.029 "hdgst": ${hdgst:-false}, 00:42:19.029 "ddgst": ${ddgst:-false} 00:42:19.029 }, 00:42:19.029 "method": "bdev_nvme_attach_controller" 00:42:19.029 } 00:42:19.029 EOF 00:42:19.029 )") 00:42:19.029 15:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:42:19.029 15:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 00:42:19.029 15:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:42:19.029 15:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:42:19.029 "params": { 00:42:19.029 "name": "Nvme1", 00:42:19.029 "trtype": "tcp", 00:42:19.029 "traddr": "10.0.0.2", 00:42:19.029 "adrfam": "ipv4", 00:42:19.029 "trsvcid": "4420", 00:42:19.029 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:19.029 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:19.029 "hdgst": false, 00:42:19.029 "ddgst": false 00:42:19.029 }, 00:42:19.029 "method": "bdev_nvme_attach_controller" 00:42:19.029 }' 00:42:19.029 [2024-10-01 15:59:58.354883] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:42:19.029 [2024-10-01 15:59:58.354939] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3478418 ] 00:42:19.029 [2024-10-01 15:59:58.385376] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:42:19.029 [2024-10-01 15:59:58.434303] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:19.029 [2024-10-01 15:59:58.466354] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:42:19.601 Running I/O for 10 seconds... 00:42:29.542 6543.00 IOPS, 51.12 MiB/s 6501.50 IOPS, 50.79 MiB/s 6529.00 IOPS, 51.01 MiB/s 6541.75 IOPS, 51.11 MiB/s 6949.40 IOPS, 54.29 MiB/s 7382.67 IOPS, 57.68 MiB/s 7690.57 IOPS, 60.08 MiB/s 7922.62 IOPS, 61.90 MiB/s 8102.78 IOPS, 63.30 MiB/s 8248.30 IOPS, 64.44 MiB/s 00:42:29.542 Latency(us) 00:42:29.542 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:29.542 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:42:29.542 Verification LBA range: start 0x0 length 0x1000 00:42:29.542 Nvme1n1 : 10.01 8252.30 64.47 0.00 0.00 15465.63 2048.00 28180.48 00:42:29.542 =================================================================================================================== 00:42:29.542 Total : 8252.30 64.47 0.00 0.00 15465.63 2048.00 28180.48 00:42:29.542 16:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3480672 00:42:29.542 16:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:42:29.542 16:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:29.542 16:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:42:29.542 16:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:42:29.542 16:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:42:29.542 16:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 
00:42:29.542 16:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:42:29.542 16:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:42:29.542 { 00:42:29.542 "params": { 00:42:29.542 "name": "Nvme$subsystem", 00:42:29.542 "trtype": "$TEST_TRANSPORT", 00:42:29.542 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:29.542 "adrfam": "ipv4", 00:42:29.542 "trsvcid": "$NVMF_PORT", 00:42:29.542 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:29.542 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:29.542 "hdgst": ${hdgst:-false}, 00:42:29.542 "ddgst": ${ddgst:-false} 00:42:29.542 }, 00:42:29.542 "method": "bdev_nvme_attach_controller" 00:42:29.542 } 00:42:29.542 EOF 00:42:29.542 )") 00:42:29.542 [2024-10-01 16:00:08.912486] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.542 [2024-10-01 16:00:08.912515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.542 16:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:42:29.542 16:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 
00:42:29.542 16:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:42:29.542 16:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:42:29.542 "params": { 00:42:29.542 "name": "Nvme1", 00:42:29.542 "trtype": "tcp", 00:42:29.542 "traddr": "10.0.0.2", 00:42:29.542 "adrfam": "ipv4", 00:42:29.542 "trsvcid": "4420", 00:42:29.542 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:29.542 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:29.542 "hdgst": false, 00:42:29.542 "ddgst": false 00:42:29.542 }, 00:42:29.542 "method": "bdev_nvme_attach_controller" 00:42:29.542 }' 00:42:29.542 [2024-10-01 16:00:08.924465] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.542 [2024-10-01 16:00:08.924480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.542 [2024-10-01 16:00:08.936455] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.542 [2024-10-01 16:00:08.936464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.542 [2024-10-01 16:00:08.948454] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.542 [2024-10-01 16:00:08.948463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.542 [2024-10-01 16:00:08.954724] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
00:42:29.542 [2024-10-01 16:00:08.954773] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3480672 ] 00:42:29.542 [2024-10-01 16:00:08.960459] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.542 [2024-10-01 16:00:08.960472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.542 [2024-10-01 16:00:08.972454] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.542 [2024-10-01 16:00:08.972462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.542 [2024-10-01 16:00:08.984454] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.542 [2024-10-01 16:00:08.984462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.542 [2024-10-01 16:00:08.984967] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:42:29.804 [2024-10-01 16:00:08.996454] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.804 [2024-10-01 16:00:08.996463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.804 [2024-10-01 16:00:09.008453] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.804 [2024-10-01 16:00:09.008461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.804 [2024-10-01 16:00:09.020454] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.804 [2024-10-01 16:00:09.020466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.804 [2024-10-01 16:00:09.031411] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:29.804 [2024-10-01 16:00:09.032455] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.804 [2024-10-01 16:00:09.032463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.804 [2024-10-01 16:00:09.044455] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.804 [2024-10-01 16:00:09.044468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.804 [2024-10-01 16:00:09.056456] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.804 [2024-10-01 16:00:09.056471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.804 [2024-10-01 16:00:09.059583] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:42:29.804 [2024-10-01 16:00:09.068454] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.804 [2024-10-01 16:00:09.068463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.804 [2024-10-01 16:00:09.080461] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.804 [2024-10-01 16:00:09.080479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.804 [2024-10-01 16:00:09.092458] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.804 [2024-10-01 16:00:09.092467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.804 [2024-10-01 16:00:09.104456] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.804 [2024-10-01 16:00:09.104465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.804 [2024-10-01 16:00:09.116454] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.804 [2024-10-01 16:00:09.116462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.804 [2024-10-01 16:00:09.128461] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.804 [2024-10-01 16:00:09.128478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.804 [2024-10-01 16:00:09.140457] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.804 [2024-10-01 16:00:09.140469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.804 [2024-10-01 16:00:09.152456] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.804 [2024-10-01 16:00:09.152468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.805 [2024-10-01 16:00:09.164456] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.805 [2024-10-01 16:00:09.164468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.805 [2024-10-01 16:00:09.176461] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:42:29.805 [2024-10-01 16:00:09.176477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.805 Running I/O for 5 seconds... 00:42:29.805 [2024-10-01 16:00:09.188458] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.805 [2024-10-01 16:00:09.188472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.805 [2024-10-01 16:00:09.203596] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.805 [2024-10-01 16:00:09.203613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.805 [2024-10-01 16:00:09.216540] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.805 [2024-10-01 16:00:09.216557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.805 [2024-10-01 16:00:09.229084] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.805 [2024-10-01 16:00:09.229100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.805 [2024-10-01 16:00:09.244020] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.805 [2024-10-01 16:00:09.244042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.805 [2024-10-01 16:00:09.257022] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.805 [2024-10-01 16:00:09.257038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.066 [2024-10-01 16:00:09.271738] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.066 [2024-10-01 16:00:09.271755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.066 [2024-10-01 16:00:09.284974] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:42:30.066 [2024-10-01 16:00:09.284990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.066 [2024-10-01 16:00:09.299534] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.066 [2024-10-01 16:00:09.299550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.066 [2024-10-01 16:00:09.312054] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.066 [2024-10-01 16:00:09.312070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.066 [2024-10-01 16:00:09.324568] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.066 [2024-10-01 16:00:09.324584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.066 [2024-10-01 16:00:09.337217] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.066 [2024-10-01 16:00:09.337232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.066 [2024-10-01 16:00:09.351507] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.066 [2024-10-01 16:00:09.351522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.066 [2024-10-01 16:00:09.364181] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.066 [2024-10-01 16:00:09.364197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.066 [2024-10-01 16:00:09.376260] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.066 [2024-10-01 16:00:09.376276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.066 [2024-10-01 16:00:09.389124] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.066 
[2024-10-01 16:00:09.389140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.066 [2024-10-01 16:00:09.403857] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.066 [2024-10-01 16:00:09.403873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.066 [2024-10-01 16:00:09.416707] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.066 [2024-10-01 16:00:09.416722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.066 [2024-10-01 16:00:09.431551] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.066 [2024-10-01 16:00:09.431567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.066 [2024-10-01 16:00:09.444300] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.066 [2024-10-01 16:00:09.444316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.066 [2024-10-01 16:00:09.457203] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.066 [2024-10-01 16:00:09.457219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.066 [2024-10-01 16:00:09.471306] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.066 [2024-10-01 16:00:09.471322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.066 [2024-10-01 16:00:09.483958] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.066 [2024-10-01 16:00:09.483974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.066 [2024-10-01 16:00:09.496747] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.066 [2024-10-01 16:00:09.496766] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.066 [2024-10-01 16:00:09.511684] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.066 [2024-10-01 16:00:09.511700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.328 [2024-10-01 16:00:09.524508] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.328 [2024-10-01 16:00:09.524524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.328 [2024-10-01 16:00:09.536411] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.328 [2024-10-01 16:00:09.536428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.328 [2024-10-01 16:00:09.549270] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.328 [2024-10-01 16:00:09.549286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.328 [2024-10-01 16:00:09.563521] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.328 [2024-10-01 16:00:09.563537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.328 [2024-10-01 16:00:09.576467] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.328 [2024-10-01 16:00:09.576483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.328 [2024-10-01 16:00:09.588678] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.328 [2024-10-01 16:00:09.588694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.328 [2024-10-01 16:00:09.603465] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.328 [2024-10-01 16:00:09.603481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:42:30.328 [2024-10-01 16:00:09.616558] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.328 [2024-10-01 16:00:09.616574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.328 [2024-10-01 16:00:09.628550] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.328 [2024-10-01 16:00:09.628566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.328 [2024-10-01 16:00:09.640979] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.328 [2024-10-01 16:00:09.640995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.328 [2024-10-01 16:00:09.655529] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.328 [2024-10-01 16:00:09.655545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.328 [2024-10-01 16:00:09.668518] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.328 [2024-10-01 16:00:09.668534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.328 [2024-10-01 16:00:09.681278] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.328 [2024-10-01 16:00:09.681293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.328 [2024-10-01 16:00:09.695694] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.328 [2024-10-01 16:00:09.695709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.328 [2024-10-01 16:00:09.708733] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.328 [2024-10-01 16:00:09.708749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.328 [2024-10-01 16:00:09.723205] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.328 [2024-10-01 16:00:09.723220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.328 [2024-10-01 16:00:09.736010] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.328 [2024-10-01 16:00:09.736026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.328 [2024-10-01 16:00:09.749068] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.328 [2024-10-01 16:00:09.749083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.328 [2024-10-01 16:00:09.763414] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.328 [2024-10-01 16:00:09.763430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.328 [2024-10-01 16:00:09.777024] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.328 [2024-10-01 16:00:09.777040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.589 [2024-10-01 16:00:09.791652] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.589 [2024-10-01 16:00:09.791669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.589 [2024-10-01 16:00:09.804825] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.589 [2024-10-01 16:00:09.804841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.589 [2024-10-01 16:00:09.819926] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.589 [2024-10-01 16:00:09.819942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.589 [2024-10-01 16:00:09.833186] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:42:30.589 [2024-10-01 16:00:09.833201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.589 [2024-10-01 16:00:09.848209] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.589 [2024-10-01 16:00:09.848225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.589 [2024-10-01 16:00:09.861140] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.589 [2024-10-01 16:00:09.861156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.589 [2024-10-01 16:00:09.875580] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.589 [2024-10-01 16:00:09.875597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.589 [2024-10-01 16:00:09.888120] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.589 [2024-10-01 16:00:09.888136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.589 [2024-10-01 16:00:09.900780] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.589 [2024-10-01 16:00:09.900795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.589 [2024-10-01 16:00:09.915962] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.589 [2024-10-01 16:00:09.915978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.589 [2024-10-01 16:00:09.928929] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.589 [2024-10-01 16:00:09.928947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.589 [2024-10-01 16:00:09.944244] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.589 
[2024-10-01 16:00:09.944261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.589 [2024-10-01 16:00:09.957082] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.589 [2024-10-01 16:00:09.957099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.589 [2024-10-01 16:00:09.972119] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.589 [2024-10-01 16:00:09.972135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.589 [2024-10-01 16:00:09.985231] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.589 [2024-10-01 16:00:09.985246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.589 [2024-10-01 16:00:10.000132] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.589 [2024-10-01 16:00:10.000148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.589 [2024-10-01 16:00:10.012655] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.589 [2024-10-01 16:00:10.012671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.589 [2024-10-01 16:00:10.027346] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.589 [2024-10-01 16:00:10.027362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.589 [2024-10-01 16:00:10.039993] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.589 [2024-10-01 16:00:10.040009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.851 [2024-10-01 16:00:10.052719] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.851 [2024-10-01 16:00:10.052736] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.851 [2024-10-01 16:00:10.065208] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.851 [2024-10-01 16:00:10.065223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.851 [2024-10-01 16:00:10.080270] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.851 [2024-10-01 16:00:10.080286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.851 [2024-10-01 16:00:10.093324] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.851 [2024-10-01 16:00:10.093340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.851 [2024-10-01 16:00:10.107924] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.851 [2024-10-01 16:00:10.107940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.851 [2024-10-01 16:00:10.120793] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.851 [2024-10-01 16:00:10.120809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.851 [2024-10-01 16:00:10.135937] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.851 [2024-10-01 16:00:10.135954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.851 [2024-10-01 16:00:10.148843] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.851 [2024-10-01 16:00:10.148858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.851 [2024-10-01 16:00:10.163912] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.851 [2024-10-01 16:00:10.163929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:42:30.851 [2024-10-01 16:00:10.176984] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.851 [2024-10-01 16:00:10.177000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.851 [2024-10-01 16:00:10.191354] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.851 [2024-10-01 16:00:10.191370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.851 18815.00 IOPS, 146.99 MiB/s [2024-10-01 16:00:10.204558] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.851 [2024-10-01 16:00:10.204575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.851 [2024-10-01 16:00:10.217236] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.851 [2024-10-01 16:00:10.217251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.851 [2024-10-01 16:00:10.232236] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.851 [2024-10-01 16:00:10.232252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.851 [2024-10-01 16:00:10.244950] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.851 [2024-10-01 16:00:10.244965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.851 [2024-10-01 16:00:10.259552] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.851 [2024-10-01 16:00:10.259573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.851 [2024-10-01 16:00:10.272503] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.851 [2024-10-01 16:00:10.272519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.851 
[2024-10-01 16:00:10.285074] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.851 [2024-10-01 16:00:10.285089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.851 [2024-10-01 16:00:10.299874] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.851 [2024-10-01 16:00:10.299890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.112 [2024-10-01 16:00:10.312560] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.112 [2024-10-01 16:00:10.312576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.112 [2024-10-01 16:00:10.325244] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.112 [2024-10-01 16:00:10.325260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.112 [2024-10-01 16:00:10.339871] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.112 [2024-10-01 16:00:10.339887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.113 [2024-10-01 16:00:10.352862] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.113 [2024-10-01 16:00:10.352876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.113 [2024-10-01 16:00:10.367957] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.113 [2024-10-01 16:00:10.367973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.113 [2024-10-01 16:00:10.380810] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.113 [2024-10-01 16:00:10.380825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.113 [2024-10-01 16:00:10.395879] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.113 [2024-10-01 16:00:10.395899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.113 [2024-10-01 16:00:10.408811] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.113 [2024-10-01 16:00:10.408825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.113 [2024-10-01 16:00:10.423898] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.113 [2024-10-01 16:00:10.423915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.113 [2024-10-01 16:00:10.436632] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.113 [2024-10-01 16:00:10.436648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.113 [2024-10-01 16:00:10.449209] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.113 [2024-10-01 16:00:10.449224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.113 [2024-10-01 16:00:10.464163] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.113 [2024-10-01 16:00:10.464180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.113 [2024-10-01 16:00:10.477014] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.113 [2024-10-01 16:00:10.477029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.113 [2024-10-01 16:00:10.491733] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.113 [2024-10-01 16:00:10.491749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.113 [2024-10-01 16:00:10.504376] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:42:31.113 [2024-10-01 16:00:10.504392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.113 [2024-10-01 16:00:10.517426] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.113 [2024-10-01 16:00:10.517447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.113 [2024-10-01 16:00:10.531420] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.113 [2024-10-01 16:00:10.531435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.113 [2024-10-01 16:00:10.544291] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.113 [2024-10-01 16:00:10.544307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.113 [2024-10-01 16:00:10.556968] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.113 [2024-10-01 16:00:10.556984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.374 [2024-10-01 16:00:10.571951] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.374 [2024-10-01 16:00:10.571968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.374 [2024-10-01 16:00:10.584666] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.374 [2024-10-01 16:00:10.584681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.374 [2024-10-01 16:00:10.599506] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.374 [2024-10-01 16:00:10.599522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.374 [2024-10-01 16:00:10.611975] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.374 
[2024-10-01 16:00:10.611991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.375 [2024-10-01 16:00:10.624948] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.375 [2024-10-01 16:00:10.624963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.375 [2024-10-01 16:00:10.639123] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.375 [2024-10-01 16:00:10.639139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.375 [2024-10-01 16:00:10.652151] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.375 [2024-10-01 16:00:10.652167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.375 [2024-10-01 16:00:10.664669] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.375 [2024-10-01 16:00:10.664685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.375 [2024-10-01 16:00:10.679754] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.375 [2024-10-01 16:00:10.679769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.375 [2024-10-01 16:00:10.692712] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.375 [2024-10-01 16:00:10.692727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.375 [2024-10-01 16:00:10.707392] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.375 [2024-10-01 16:00:10.707407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.375 [2024-10-01 16:00:10.720643] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.375 [2024-10-01 16:00:10.720659] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.375 [2024-10-01 16:00:10.733467] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.375 [2024-10-01 16:00:10.733482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.375 [2024-10-01 16:00:10.747811] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.375 [2024-10-01 16:00:10.747827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.375 [2024-10-01 16:00:10.760688] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.375 [2024-10-01 16:00:10.760704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.375 [2024-10-01 16:00:10.773610] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.375 [2024-10-01 16:00:10.773630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.375 [2024-10-01 16:00:10.787689] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.375 [2024-10-01 16:00:10.787705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.375 [2024-10-01 16:00:10.800954] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.375 [2024-10-01 16:00:10.800969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.375 [2024-10-01 16:00:10.815924] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.375 [2024-10-01 16:00:10.815939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.375 [2024-10-01 16:00:10.828586] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.375 [2024-10-01 16:00:10.828601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:42:31.636 [2024-10-01 16:00:10.841125] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.636 [2024-10-01 16:00:10.841141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.636 [2024-10-01 16:00:10.855928] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.636 [2024-10-01 16:00:10.855944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.636 [2024-10-01 16:00:10.868858] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.636 [2024-10-01 16:00:10.868874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.636 [2024-10-01 16:00:10.883557] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.636 [2024-10-01 16:00:10.883573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.636 [2024-10-01 16:00:10.896593] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.636 [2024-10-01 16:00:10.896609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.636 [2024-10-01 16:00:10.908841] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.636 [2024-10-01 16:00:10.908857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.636 [2024-10-01 16:00:10.923585] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.636 [2024-10-01 16:00:10.923600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.636 [2024-10-01 16:00:10.936674] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.636 [2024-10-01 16:00:10.936692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.636 [2024-10-01 16:00:10.951369] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.636 [2024-10-01 16:00:10.951386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.636 [2024-10-01 16:00:10.964018] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.636 [2024-10-01 16:00:10.964033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.636 [2024-10-01 16:00:10.976658] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.636 [2024-10-01 16:00:10.976673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.636 [2024-10-01 16:00:10.991714] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.636 [2024-10-01 16:00:10.991730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.636 [2024-10-01 16:00:11.004946] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.636 [2024-10-01 16:00:11.004961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.636 [2024-10-01 16:00:11.019940] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.636 [2024-10-01 16:00:11.019956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.636 [2024-10-01 16:00:11.032233] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.636 [2024-10-01 16:00:11.032253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.636 [2024-10-01 16:00:11.045024] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.636 [2024-10-01 16:00:11.045039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.636 [2024-10-01 16:00:11.060182] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:42:31.636 [2024-10-01 16:00:11.060198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.636 [2024-10-01 16:00:11.072739] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.636 [2024-10-01 16:00:11.072755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.636 [2024-10-01 16:00:11.087291] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.636 [2024-10-01 16:00:11.087307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.896 [2024-10-01 16:00:11.100474] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.897 [2024-10-01 16:00:11.100491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.897 [2024-10-01 16:00:11.113163] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.897 [2024-10-01 16:00:11.113178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.897 [2024-10-01 16:00:11.127766] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.897 [2024-10-01 16:00:11.127782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.897 [2024-10-01 16:00:11.141028] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.897 [2024-10-01 16:00:11.141043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.897 [2024-10-01 16:00:11.155874] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.897 [2024-10-01 16:00:11.155891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.897 [2024-10-01 16:00:11.168476] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.897 
[2024-10-01 16:00:11.168492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.897 [2024-10-01 16:00:11.180737] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.897 [2024-10-01 16:00:11.180752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.897 18843.00 IOPS, 147.21 MiB/s [2024-10-01 16:00:11.195562] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.897 [2024-10-01 16:00:11.195578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.897 [2024-10-01 16:00:11.208289] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.897 [2024-10-01 16:00:11.208305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.897 [2024-10-01 16:00:11.221070] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.897 [2024-10-01 16:00:11.221085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.897 [2024-10-01 16:00:11.236298] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.897 [2024-10-01 16:00:11.236317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.897 [2024-10-01 16:00:11.249083] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.897 [2024-10-01 16:00:11.249099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.897 [2024-10-01 16:00:11.263463] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.897 [2024-10-01 16:00:11.263479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.897 [2024-10-01 16:00:11.276110] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.897 [2024-10-01 
16:00:11.276126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.897 [2024-10-01 16:00:11.288815] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.897 [2024-10-01 16:00:11.288830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.897 [2024-10-01 16:00:11.303961] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.897 [2024-10-01 16:00:11.303977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.897 [2024-10-01 16:00:11.317085] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.897 [2024-10-01 16:00:11.317100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.897 [2024-10-01 16:00:11.331129] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.897 [2024-10-01 16:00:11.331144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.897 [2024-10-01 16:00:11.344362] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.897 [2024-10-01 16:00:11.344378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.157 [2024-10-01 16:00:11.357302] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.157 [2024-10-01 16:00:11.357318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.157 [2024-10-01 16:00:11.371872] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.157 [2024-10-01 16:00:11.371888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.157 [2024-10-01 16:00:11.385185] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.157 [2024-10-01 16:00:11.385200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:42:32.158 [2024-10-01 16:00:11.400071] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.158 [2024-10-01 16:00:11.400087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.158 [2024-10-01 16:00:11.412904] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.158 [2024-10-01 16:00:11.412920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.158 [2024-10-01 16:00:11.427532] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.158 [2024-10-01 16:00:11.427548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.158 [2024-10-01 16:00:11.440027] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.158 [2024-10-01 16:00:11.440043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.158 [2024-10-01 16:00:11.452740] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.158 [2024-10-01 16:00:11.452755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.158 [2024-10-01 16:00:11.467667] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.158 [2024-10-01 16:00:11.467683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.158 [2024-10-01 16:00:11.480328] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.158 [2024-10-01 16:00:11.480344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.158 [2024-10-01 16:00:11.492736] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.158 [2024-10-01 16:00:11.492752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.158 
[2024-10-01 16:00:11.507477] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.158 [2024-10-01 16:00:11.507493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.158 [2024-10-01 16:00:11.520344] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.158 [2024-10-01 16:00:11.520359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.158 [2024-10-01 16:00:11.533364] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.158 [2024-10-01 16:00:11.533380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.158 [2024-10-01 16:00:11.547758] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.158 [2024-10-01 16:00:11.547774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.158 [2024-10-01 16:00:11.561274] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.158 [2024-10-01 16:00:11.561289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.158 [2024-10-01 16:00:11.575276] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.158 [2024-10-01 16:00:11.575292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.158 [2024-10-01 16:00:11.587968] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.158 [2024-10-01 16:00:11.587984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.158 [2024-10-01 16:00:11.600496] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.158 [2024-10-01 16:00:11.600513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.420 [2024-10-01 16:00:11.612768] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.420 [2024-10-01 16:00:11.612783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.420 [2024-10-01 16:00:11.627975] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.420 [2024-10-01 16:00:11.627991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.420 [2024-10-01 16:00:11.640765] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.420 [2024-10-01 16:00:11.640781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.420 [2024-10-01 16:00:11.655517] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.420 [2024-10-01 16:00:11.655533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.420 [2024-10-01 16:00:11.668712] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.420 [2024-10-01 16:00:11.668728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.420 [2024-10-01 16:00:11.683729] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.420 [2024-10-01 16:00:11.683744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.420 [2024-10-01 16:00:11.697017] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.420 [2024-10-01 16:00:11.697032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.420 [2024-10-01 16:00:11.711654] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.420 [2024-10-01 16:00:11.711670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.420 [2024-10-01 16:00:11.724455] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:42:32.420 [2024-10-01 16:00:11.724471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.420 [2024-10-01 16:00:11.736162] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.420 [2024-10-01 16:00:11.736178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.420 [2024-10-01 16:00:11.749342] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.420 [2024-10-01 16:00:11.749357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.420 [2024-10-01 16:00:11.764316] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.420 [2024-10-01 16:00:11.764332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.420 [2024-10-01 16:00:11.776935] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.420 [2024-10-01 16:00:11.776950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.420 [2024-10-01 16:00:11.791251] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.420 [2024-10-01 16:00:11.791267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.420 [2024-10-01 16:00:11.804323] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.420 [2024-10-01 16:00:11.804339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.420 [2024-10-01 16:00:11.816938] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.420 [2024-10-01 16:00:11.816953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.420 [2024-10-01 16:00:11.831684] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.420 
[2024-10-01 16:00:11.831701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.420 [2024-10-01 16:00:11.844749] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.420 [2024-10-01 16:00:11.844765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.420 [2024-10-01 16:00:11.859574] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.420 [2024-10-01 16:00:11.859590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.420 [2024-10-01 16:00:11.872259] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.420 [2024-10-01 16:00:11.872275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.681 [2024-10-01 16:00:11.884736] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.681 [2024-10-01 16:00:11.884752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.681 [2024-10-01 16:00:11.899639] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.681 [2024-10-01 16:00:11.899655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.681 [2024-10-01 16:00:11.912433] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.681 [2024-10-01 16:00:11.912449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.681 [2024-10-01 16:00:11.925239] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.681 [2024-10-01 16:00:11.925254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.681 [2024-10-01 16:00:11.940301] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.681 [2024-10-01 16:00:11.940319] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.681 [2024-10-01 16:00:11.953014] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.681 [2024-10-01 16:00:11.953030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.681 [2024-10-01 16:00:11.967432] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.681 [2024-10-01 16:00:11.967448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.681 [2024-10-01 16:00:11.980581] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.681 [2024-10-01 16:00:11.980597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.681 [2024-10-01 16:00:11.993249] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.681 [2024-10-01 16:00:11.993265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.681 [2024-10-01 16:00:12.007584] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.681 [2024-10-01 16:00:12.007600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.681 [2024-10-01 16:00:12.020600] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.681 [2024-10-01 16:00:12.020615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.681 [2024-10-01 16:00:12.033254] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.681 [2024-10-01 16:00:12.033270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.681 [2024-10-01 16:00:12.048035] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.681 [2024-10-01 16:00:12.048054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:42:32.681 [2024-10-01 16:00:12.060979] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.681 [2024-10-01 16:00:12.060994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.681 [2024-10-01 16:00:12.075595] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.681 [2024-10-01 16:00:12.075611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.681 [2024-10-01 16:00:12.088687] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.681 [2024-10-01 16:00:12.088702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.681 [2024-10-01 16:00:12.101466] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.681 [2024-10-01 16:00:12.101481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.681 [2024-10-01 16:00:12.115078] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.681 [2024-10-01 16:00:12.115094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.681 [2024-10-01 16:00:12.128535] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.681 [2024-10-01 16:00:12.128550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.943 [2024-10-01 16:00:12.140986] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.943 [2024-10-01 16:00:12.141001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.943 [2024-10-01 16:00:12.155539] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.943 [2024-10-01 16:00:12.155554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.943 [2024-10-01 16:00:12.168169] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.943 [2024-10-01 16:00:12.168185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.943 [2024-10-01 16:00:12.180985] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.943 [2024-10-01 16:00:12.181000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.943 [2024-10-01 16:00:12.195773] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.943 [2024-10-01 16:00:12.195789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.943 18863.67 IOPS, 147.37 MiB/s [2024-10-01 16:00:12.209083] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.943 [2024-10-01 16:00:12.209098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.943 [2024-10-01 16:00:12.223798] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.943 [2024-10-01 16:00:12.223814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.943 [2024-10-01 16:00:12.236829] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.943 [2024-10-01 16:00:12.236844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.943 [2024-10-01 16:00:12.251957] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.943 [2024-10-01 16:00:12.251972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.943 [2024-10-01 16:00:12.265255] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.943 [2024-10-01 16:00:12.265271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.943 [2024-10-01 16:00:12.280313] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.943 [2024-10-01 16:00:12.280329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.943 [2024-10-01 16:00:12.293417] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.943 [2024-10-01 16:00:12.293433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.943 [2024-10-01 16:00:12.307718] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.943 [2024-10-01 16:00:12.307738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.943 [2024-10-01 16:00:12.321040] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.943 [2024-10-01 16:00:12.321056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.943 [2024-10-01 16:00:12.335685] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.943 [2024-10-01 16:00:12.335701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.943 [2024-10-01 16:00:12.348845] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.943 [2024-10-01 16:00:12.348860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.943 [2024-10-01 16:00:12.363438] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.943 [2024-10-01 16:00:12.363453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.943 [2024-10-01 16:00:12.376576] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.943 [2024-10-01 16:00:12.376591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.943 [2024-10-01 16:00:12.388812] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:42:32.943 [2024-10-01 16:00:12.388828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:33.204 [2024-10-01 16:00:12.403790] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:33.204 [2024-10-01 16:00:12.403806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:33.204 [2024-10-01 16:00:12.416983] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:33.204 [2024-10-01 16:00:12.416999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:33.204 [2024-10-01 16:00:12.431844] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:33.204 [2024-10-01 16:00:12.431860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:33.204 [2024-10-01 16:00:12.444910] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:33.204 [2024-10-01 16:00:12.444925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:33.204 [2024-10-01 16:00:12.459654] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:33.204 [2024-10-01 16:00:12.459669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:33.204 [2024-10-01 16:00:12.472546] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:33.204 [2024-10-01 16:00:12.472561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:33.204 [2024-10-01 16:00:12.484384] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:33.204 [2024-10-01 16:00:12.484399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:33.204 [2024-10-01 16:00:12.497313] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:33.204 
[2024-10-01 16:00:12.497328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:33.204 [2024-10-01 16:00:12.511487] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:33.204 [2024-10-01 16:00:12.511502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:33.204 [2024-10-01 16:00:12.524184] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:33.204 [2024-10-01 16:00:12.524200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:33.204 [2024-10-01 16:00:12.536589] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:33.204 [2024-10-01 16:00:12.536605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:33.204 [2024-10-01 16:00:12.548295] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:33.204 [2024-10-01 16:00:12.548310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:33.204 [2024-10-01 16:00:12.561075] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:33.204 [2024-10-01 16:00:12.561093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:33.204 [2024-10-01 16:00:12.576133] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:33.204 [2024-10-01 16:00:12.576149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:33.204 [2024-10-01 16:00:12.588866] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:33.204 [2024-10-01 16:00:12.588880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:33.204 [2024-10-01 16:00:12.604172] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:33.204 [2024-10-01 16:00:12.604187] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:33.204 [2024-10-01 16:00:12.616659] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:33.204 [2024-10-01 16:00:12.616674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:33.204 [2024-10-01 16:00:12.628687] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:33.204 [2024-10-01 16:00:12.628702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:33.204 [2024-10-01 16:00:12.641123] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:33.204 [2024-10-01 16:00:12.641138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:33.204 [2024-10-01 16:00:12.655818] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:33.204 [2024-10-01 16:00:12.655834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:33.465 [2024-10-01 16:00:12.668411] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:33.465 [2024-10-01 16:00:12.668427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:33.465 [2024-10-01 16:00:12.680605] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:33.465 [2024-10-01 16:00:12.680621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:33.465 [2024-10-01 16:00:12.693536] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:33.465 [2024-10-01 16:00:12.693551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:33.465 [2024-10-01 16:00:12.707349] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:33.465 [2024-10-01 16:00:12.707365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:42:33.465 [2024-10-01 16:00:12.720525] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:33.465 [2024-10-01 16:00:12.720540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:33.465 [2024-10-01 16:00:12.732910] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:33.465 [2024-10-01 16:00:12.732926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:33.465 [2024-10-01 16:00:12.747653] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:33.465 [2024-10-01 16:00:12.747668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:33.465 [2024-10-01 16:00:12.760737] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:33.465 [2024-10-01 16:00:12.760752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:33.465 [2024-10-01 16:00:12.776110] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:33.465 [2024-10-01 16:00:12.776125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:33.465 [2024-10-01 16:00:12.788771] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:33.465 [2024-10-01 16:00:12.788787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:33.465 [2024-10-01 16:00:12.803932] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:33.465 [2024-10-01 16:00:12.803948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:33.465 [2024-10-01 16:00:12.816456] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:33.465 [2024-10-01 16:00:12.816472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:33.465 [2024-10-01 16:00:12.829438] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:33.465 [2024-10-01 16:00:12.829453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:33.465 [2024-10-01 16:00:12.843461] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:33.465 [2024-10-01 16:00:12.843477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:33.465 [2024-10-01 16:00:12.856400] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:33.465 [2024-10-01 16:00:12.856417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:33.465 [2024-10-01 16:00:12.868399] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:33.465 [2024-10-01 16:00:12.868415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:33.465 [2024-10-01 16:00:12.881146] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:33.465 [2024-10-01 16:00:12.881161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:33.465 [2024-10-01 16:00:12.895763] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:33.465 [2024-10-01 16:00:12.895779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:33.465 [2024-10-01 16:00:12.908746] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:33.465 [2024-10-01 16:00:12.908761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:33.727 [2024-10-01 16:00:12.923940] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:33.727 [2024-10-01 16:00:12.923956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:33.727 [2024-10-01 16:00:12.936754] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:42:33.727 [2024-10-01 16:00:12.936769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:33.727 [2024-10-01 16:00:12.951970] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:33.727 [2024-10-01 16:00:12.951987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:33.727 [2024-10-01 16:00:12.964677] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:33.727 [2024-10-01 16:00:12.964693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:33.727 [2024-10-01 16:00:12.976962] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:33.727 [2024-10-01 16:00:12.976977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:33.727 [2024-10-01 16:00:12.991797] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:33.727 [2024-10-01 16:00:12.991812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:33.727 [2024-10-01 16:00:13.004588] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:33.727 [2024-10-01 16:00:13.004604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:33.727 [2024-10-01 16:00:13.016358] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:33.727 [2024-10-01 16:00:13.016374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:33.727 [2024-10-01 16:00:13.029258] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:33.727 [2024-10-01 16:00:13.029273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:33.727 [2024-10-01 16:00:13.043792] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:33.727 
[2024-10-01 16:00:13.043808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:42:33.727 [2024-10-01 16:00:13.056673] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:42:33.727 [2024-10-01 16:00:13.056689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:42:33.727 [preceding two messages repeat, alternating, until 16:00:14.316; repeats omitted]
00:42:33.988 18849.00 IOPS, 147.26 MiB/s
00:42:34.772 18852.00 IOPS, 147.28 MiB/s
00:42:34.772 Latency(us)
00:42:34.772 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:42:34.772 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:42:34.772 Nvme1n1 : 5.00 18860.13 147.34 0.00 0.00 6781.40 2594.13 11414.19
00:42:34.772 ===================================================================================================================
00:42:34.772 Total : 18860.13 147.34 0.00 0.00 6781.40 2594.13 11414.19
00:42:35.033 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3480672) - No such process
00:42:35.033 16:00:14
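The throughput figures in the summary above are internally consistent: with 8 KiB I/Os (the job line reports "IO size: 8192"), MiB/s should equal IOPS × 8192 / 2^20. A quick sketch of that cross-check, using only numbers taken from the log:

```python
# Cross-check the zcopy summary: MiB/s = IOPS * IO size / 1 MiB.
IO_SIZE = 8192  # bytes, from the job line "IO size: 8192"

def mib_per_s(iops: float) -> float:
    """Convert an IOPS figure to MiB/s at the job's fixed IO size."""
    return iops * IO_SIZE / (1 << 20)

# (IOPS, reported MiB/s) pairs: two periodic checkpoints and the final summary.
for iops, reported in [(18849.00, 147.26), (18852.00, 147.28), (18860.13, 147.34)]:
    computed = mib_per_s(iops)
    # Reported values are rounded to two decimals, so allow 0.01 of slack.
    assert abs(computed - reported) < 0.01, (iops, computed, reported)
```

Each reported MiB/s value matches the computation to within rounding, confirming the summary columns are derived from the same IO size.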
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3480672
00:42:35.033 16:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:42:35.033 16:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:42:35.033 16:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:42:35.033 16:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:42:35.033 16:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:42:35.033 16:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:42:35.033 16:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:42:35.033 delay0
00:42:35.033 16:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:42:35.033 16:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:42:35.033 16:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:42:35.033 16:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:42:35.033 16:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:42:35.033 16:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:42:35.033 [2024-10-01 16:00:14.468335] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:42:41.616 Initializing NVMe Controllers
00:42:41.616 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:42:41.616 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:42:41.616 Initialization complete. Launching workers.
00:42:41.616 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 2252
00:42:41.616 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 2534, failed to submit 38
00:42:41.616 success 2354, unsuccessful 180, failed 0
00:42:41.616 16:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:42:41.616 16:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:42:41.616 16:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # nvmfcleanup
00:42:41.616 16:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:42:41.616 16:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:42:41.616 16:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:42:41.616 16:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:42:41.616 16:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:42:41.616 rmmod nvme_tcp
00:42:41.616 rmmod nvme_fabrics
00:42:41.616 rmmod nvme_keyring
00:42:41.616 16:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:42:41.616 16:00:20
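The abort example's final tallies above appear self-consistent: every submitted abort is accounted for as success, unsuccessful, or failed, and the submitted plus unsubmittable aborts together match the target I/O count (completed plus failed). A quick arithmetic check of that accounting, using only the numbers printed by the run (the interpretation of the two balance equations is an assumption, not stated by the tool itself):

```python
# Tallies as reported by the abort example run in the log above.
io_completed, io_failed = 320, 2252
abort_submitted, abort_failed_to_submit = 2534, 38
success, unsuccessful, failed = 2354, 180, 0

# Every submitted abort ends in exactly one bucket: success/unsuccessful/failed.
assert success + unsuccessful + failed == abort_submitted

# Submitted + failed-to-submit aborts together cover all target I/Os
# (completed + failed) — consistent with one abort attempted per I/O.
assert abort_submitted + abort_failed_to_submit == io_completed + io_failed
```

Both identities hold for this run (2354 + 180 + 0 = 2534, and 2534 + 38 = 320 + 2252 = 2572).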
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:42:41.616 16:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:42:41.616 16:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@513 -- # '[' -n 3478315 ']'
00:42:41.616 16:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@514 -- # killprocess 3478315
00:42:41.616 16:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 3478315 ']'
00:42:41.616 16:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 3478315
00:42:41.616 16:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname
00:42:41.616 16:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:42:41.616 16:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3478315
00:42:41.616 16:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:42:41.616 16:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:42:41.616 16:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3478315'
00:42:41.616 killing process with pid 3478315
00:42:41.616 16:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 3478315
00:42:41.616 16:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 3478315
00:42:41.877 16:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:42:41.877 16:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
00:42:41.877 16:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # nvmf_tcp_fini
00:42:41.877 16:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:42:41.877 16:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-save
00:42:41.877 16:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF
00:42:41.877 16:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-restore
00:42:41.877 16:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:42:41.877 16:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:42:41.877 16:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:42:41.877 16:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:42:41.877 16:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:42:43.787 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:42:43.787
00:42:43.787 real 0m33.737s
00:42:43.787 user 0m42.493s
00:42:43.787 sys 0m12.448s
00:42:43.787 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable
00:42:43.787 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:42:43.787 ************************************
00:42:43.787 END TEST nvmf_zcopy
00:42:43.787 ************************************
00:42:44.046 16:00:23
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:42:44.046 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:42:44.046 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:44.046 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:44.046 ************************************ 00:42:44.046 START TEST nvmf_nmic 00:42:44.046 ************************************ 00:42:44.046 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:42:44.046 * Looking for test storage... 00:42:44.046 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:44.046 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:42:44.046 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:42:44.046 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:42:44.046 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:42:44.046 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:44.046 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:44.046 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:44.046 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
scripts/common.sh@336 -- # IFS=.-: 00:42:44.046 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:42:44.046 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:42:44.046 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:42:44.046 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:42:44.047 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:42:44.047 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:42:44.047 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:44.047 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:42:44.047 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:42:44.047 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:44.047 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:44.047 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:42:44.047 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:42:44.047 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:44.047 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:42:44.047 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:42:44.047 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:42:44.047 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:42:44.047 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:44.047 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:42:44.047 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:42:44.047 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:44.047 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:44.047 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:42:44.047 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:44.047 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:42:44.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:44.047 --rc genhtml_branch_coverage=1 00:42:44.047 --rc 
genhtml_function_coverage=1 00:42:44.047 --rc genhtml_legend=1 00:42:44.047 --rc geninfo_all_blocks=1 00:42:44.047 --rc geninfo_unexecuted_blocks=1 00:42:44.047 00:42:44.047 ' 00:42:44.047 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:42:44.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:44.047 --rc genhtml_branch_coverage=1 00:42:44.047 --rc genhtml_function_coverage=1 00:42:44.047 --rc genhtml_legend=1 00:42:44.047 --rc geninfo_all_blocks=1 00:42:44.047 --rc geninfo_unexecuted_blocks=1 00:42:44.047 00:42:44.047 ' 00:42:44.047 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:42:44.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:44.047 --rc genhtml_branch_coverage=1 00:42:44.047 --rc genhtml_function_coverage=1 00:42:44.047 --rc genhtml_legend=1 00:42:44.047 --rc geninfo_all_blocks=1 00:42:44.047 --rc geninfo_unexecuted_blocks=1 00:42:44.047 00:42:44.047 ' 00:42:44.047 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:42:44.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:44.047 --rc genhtml_branch_coverage=1 00:42:44.047 --rc genhtml_function_coverage=1 00:42:44.047 --rc genhtml_legend=1 00:42:44.047 --rc geninfo_all_blocks=1 00:42:44.047 --rc geninfo_unexecuted_blocks=1 00:42:44.047 00:42:44.047 ' 00:42:44.047 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:44.047 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:42:44.307 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:44.307 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
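The trace above steps through scripts/common.sh's version comparison: both version strings are split on `.`, `-` and `:` into arrays, then compared component by component, numerically. A minimal standalone re-implementation of that logic is sketched below; the function name `ver_lt` and the missing-component/`:-0` handling are mine, not scripts/common.sh's (the real script also validates each component with a `decimal` regex check, which this sketch omits).

```shell
# Sketch of the version comparison exercised in the trace above
# (scripts/common.sh @336-@368): split on '.', '-', ':' and compare
# each numeric component in turn. Returns 0 (true) iff $1 < $2.
ver_lt() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        # Missing components are treated as 0, so "2" compares equal to "2.0".
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && return 1   # first version is greater: not less-than
        (( a < b )) && return 0   # first version is smaller: less-than
    done
    return 1                      # all components equal: not less-than
}
```

Usage matches what the trace computes for `1` vs `2`: `ver_lt 1 2` succeeds, which is why the trace hits `return 0` at common.sh@368.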
00:42:44.307 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:44.307 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:44.307 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:44.307 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:44.307 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:44.307 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:44.307 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:44.307 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:44.307 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:44.307 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:44.307 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:44.307 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:44.307 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:44.307 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:44.307 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:44.307 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:42:44.307 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:44.307 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:44.307 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:44.307 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:44.307 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:44.307 16:00:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:44.307 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:42:44.307 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:44.307 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:42:44.307 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:44.307 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:44.307 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:44.307 16:00:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:44.307 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:44.307 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:44.307 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:44.307 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:44.307 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:44.307 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:44.307 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:42:44.307 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:42:44.307 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:42:44.307 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:42:44.307 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:44.307 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@472 -- # prepare_net_devs 00:42:44.307 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@434 -- # local -g is_hw=no 00:42:44.307 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@436 -- # remove_spdk_ns 00:42:44.308 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:44.308 16:00:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:44.308 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:44.308 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:42:44.308 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:42:44.308 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:42:44.308 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:52.448 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:52.448 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:42:52.448 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:52.448 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:52.448 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:52.448 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:52.448 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:52.448 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:42:52.448 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:52.448 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:42:52.448 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@320 -- # local -ga e810 00:42:52.448 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:42:52.448 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:42:52.448 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:42:52.448 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:42:52.448 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:52.448 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:52.448 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:52.448 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:52.448 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:52.448 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:52.448 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:52.448 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:52.448 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:52.448 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:52.448 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:42:52.449 Found 0000:31:00.0 (0x8086 - 0x159b) 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 
00:42:52.449 Found 0000:31:00.1 (0x8086 - 0x159b) 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ up == up ]] 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:42:52.449 Found net devices under 0000:31:00.0: cvl_0_0 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ up == up ]] 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:42:52.449 Found net devices under 0000:31:00.1: cvl_0_1 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # is_hw=yes 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:42:52.449 16:00:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:52.449 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:52.449 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.521 ms 00:42:52.449 00:42:52.449 --- 10.0.0.2 ping statistics --- 00:42:52.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:52.449 rtt min/avg/max/mdev = 0.521/0.521/0.521/0.000 ms 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:52.449 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:52.449 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:42:52.449 00:42:52.449 --- 10.0.0.1 ping statistics --- 00:42:52.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:52.449 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # return 0 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@505 -- # nvmfpid=3487379 
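The `nvmf_tcp_init` sequence logged above (common.sh @250-@291) isolates one port of the two-port NIC in a network namespace for the target side, leaves the other port in the root namespace for the initiator, opens TCP port 4420, and ping-checks both directions. A condensed sketch of those steps, wrapped in a function for clarity, is below; it requires root and real net devices, and the function name and argument order are mine, not common.sh's.

```shell
# Sketch of the netns setup from the log above (needs root and two
# physical net devices, e.g. cvl_0_0 / cvl_0_1 as in this run).
setup_nvmf_tcp_netns() {
    local target_if=$1 initiator_if=$2 ns=${3:-cvl_0_0_ns_spdk}

    ip -4 addr flush "$target_if"
    ip -4 addr flush "$initiator_if"

    # Isolate the target-side interface in its own network namespace.
    ip netns add "$ns"
    ip link set "$target_if" netns "$ns"

    # Initiator stays in the root namespace on 10.0.0.1; target gets 10.0.0.2.
    ip addr add 10.0.0.1/24 dev "$initiator_if"
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"

    ip link set "$initiator_if" up
    ip netns exec "$ns" ip link set "$target_if" up
    ip netns exec "$ns" ip link set lo up

    # Allow NVMe/TCP traffic to the default port (the log tags its rule
    # with an SPDK_NVMF comment so it can be cleaned up later).
    iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT

    # Sanity-check connectivity in both directions, as the log does.
    ping -c 1 10.0.0.2
    ip netns exec "$ns" ping -c 1 10.0.0.1
}
```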
00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@506 -- # waitforlisten 3487379 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 3487379 ']' 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:52.449 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:52.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:52.450 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:52.450 16:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:52.450 [2024-10-01 16:00:31.007860] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:52.450 [2024-10-01 16:00:31.008845] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:42:52.450 [2024-10-01 16:00:31.008884] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:52.450 [2024-10-01 16:00:31.045466] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:42:52.450 [2024-10-01 16:00:31.092947] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:52.450 [2024-10-01 16:00:31.126777] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:52.450 [2024-10-01 16:00:31.126813] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:52.450 [2024-10-01 16:00:31.126821] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:52.450 [2024-10-01 16:00:31.126828] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:52.450 [2024-10-01 16:00:31.126834] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:52.450 [2024-10-01 16:00:31.126934] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:42:52.450 [2024-10-01 16:00:31.127024] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:42:52.450 [2024-10-01 16:00:31.127140] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:42:52.450 [2024-10-01 16:00:31.127141] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:42:52.450 [2024-10-01 16:00:31.183314] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:52.450 [2024-10-01 16:00:31.184629] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:42:52.450 [2024-10-01 16:00:31.184919] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:42:52.450 [2024-10-01 16:00:31.185541] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
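The `nvmfappstart -m 0xF` call above launches `nvmf_tgt` inside the target namespace with `--interrupt-mode` and then blocks in `waitforlisten` until the app answers on `/var/tmp/spdk.sock`. A rough sketch of that launch-and-wait pattern follows; the polling loop is an assumption on my part (SPDK's real `waitforlisten` helper in autotest_common.sh is more involved), and the `rpc.py` path and `rpc_get_methods` probe are illustrative.

```shell
# Sketch: start nvmf_tgt in the namespace and poll its JSON-RPC socket.
# Flags mirror the log: -i 0 -e 0xFFFF --interrupt-mode -m 0xF.
start_nvmf_tgt() {
    local ns=$1 spdk=$2 rpc_sock=/var/tmp/spdk.sock
    ip netns exec "$ns" "$spdk/build/bin/nvmf_tgt" \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
    nvmfpid=$!

    # Poll until the target answers an RPC before issuing configuration calls
    # (a simplified stand-in for autotest_common.sh's waitforlisten).
    local i
    for (( i = 0; i < 100; i++ )); do
        "$spdk/scripts/rpc.py" -s "$rpc_sock" rpc_get_methods &> /dev/null && return 0
        sleep 0.5
    done
    echo "nvmf_tgt (pid $nvmfpid) never listened on $rpc_sock" >&2
    return 1
}
```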
00:42:52.450 [2024-10-01 16:00:31.185590] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:42:52.450 16:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:52.450 16:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:42:52.450 16:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:42:52.450 16:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:52.450 16:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:52.450 16:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:52.450 16:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:52.450 16:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:52.450 16:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:52.450 [2024-10-01 16:00:31.856089] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:52.450 16:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:52.450 16:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:42:52.450 16:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:52.450 16:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:52.712 Malloc0 
00:42:52.712 16:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:52.712 16:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:42:52.712 16:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:52.712 16:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:52.712 16:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:52.712 16:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:42:52.712 16:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:52.712 16:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:52.712 16:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:52.712 16:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:52.712 16:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:52.712 16:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:52.712 [2024-10-01 16:00:31.940425] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:52.712 16:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:52.712 16:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # 
echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:42:52.712 test case1: single bdev can't be used in multiple subsystems 00:42:52.712 16:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:42:52.712 16:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:52.712 16:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:52.712 16:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:52.712 16:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:42:52.712 16:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:52.712 16:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:52.712 16:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:52.712 16:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:42:52.712 16:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:42:52.712 16:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:52.712 16:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:52.712 [2024-10-01 16:00:31.975679] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:42:52.712 [2024-10-01 16:00:31.975706] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:42:52.712 [2024-10-01 16:00:31.975716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:52.712 request: 00:42:52.712 { 00:42:52.712 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:42:52.712 "namespace": { 00:42:52.712 "bdev_name": "Malloc0", 00:42:52.712 "no_auto_visible": false 00:42:52.712 }, 00:42:52.712 "method": "nvmf_subsystem_add_ns", 00:42:52.712 "req_id": 1 00:42:52.712 } 00:42:52.712 Got JSON-RPC error response 00:42:52.712 response: 00:42:52.712 { 00:42:52.712 "code": -32602, 00:42:52.712 "message": "Invalid parameters" 00:42:52.712 } 00:42:52.712 16:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:42:52.712 16:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:42:52.712 16:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:42:52.712 16:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:42:52.712 Adding namespace failed - expected result. 
00:42:52.712 16:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:42:52.712 test case2: host connect to nvmf target in multiple paths 00:42:52.712 16:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:42:52.712 16:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:52.712 16:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:52.712 [2024-10-01 16:00:31.987821] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:42:52.712 16:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:52.712 16:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:42:52.973 16:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:42:53.543 16:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:42:53.543 16:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:42:53.543 16:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:42:53.543 16:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:42:53.543 16:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:42:55.457 16:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:42:55.457 16:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:42:55.457 16:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:42:55.457 16:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:42:55.457 16:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:42:55.457 16:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:42:55.457 16:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:42:55.457 [global] 00:42:55.457 thread=1 00:42:55.457 invalidate=1 00:42:55.457 rw=write 00:42:55.457 time_based=1 00:42:55.457 runtime=1 00:42:55.457 ioengine=libaio 00:42:55.457 direct=1 00:42:55.457 bs=4096 00:42:55.457 iodepth=1 00:42:55.457 norandommap=0 00:42:55.457 numjobs=1 00:42:55.457 00:42:55.457 verify_dump=1 00:42:55.457 verify_backlog=512 00:42:55.457 verify_state_save=0 00:42:55.457 do_verify=1 00:42:55.457 verify=crc32c-intel 00:42:55.457 [job0] 00:42:55.457 filename=/dev/nvme0n1 00:42:55.457 Could not set queue depth (nvme0n1) 00:42:55.718 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:55.718 fio-3.35 00:42:55.718 Starting 1 thread 00:42:57.103 00:42:57.104 job0: (groupid=0, jobs=1): err= 0: pid=3488301: Tue Oct 1 
16:00:36 2024 00:42:57.104 read: IOPS=17, BW=69.8KiB/s (71.5kB/s)(72.0KiB/1031msec) 00:42:57.104 slat (nsec): min=24853, max=25971, avg=25218.28, stdev=335.38 00:42:57.104 clat (usec): min=907, max=42962, avg=39835.90, stdev=9733.34 00:42:57.104 lat (usec): min=932, max=42987, avg=39861.12, stdev=9733.25 00:42:57.104 clat percentiles (usec): 00:42:57.104 | 1.00th=[ 906], 5.00th=[ 906], 10.00th=[41157], 20.00th=[42206], 00:42:57.104 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:42:57.104 | 70.00th=[42206], 80.00th=[42730], 90.00th=[42730], 95.00th=[42730], 00:42:57.104 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:42:57.104 | 99.99th=[42730] 00:42:57.104 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets 00:42:57.104 slat (nsec): min=9732, max=64509, avg=27501.59, stdev=10065.50 00:42:57.104 clat (usec): min=174, max=855, avg=577.74, stdev=99.89 00:42:57.104 lat (usec): min=184, max=866, avg=605.24, stdev=104.04 00:42:57.104 clat percentiles (usec): 00:42:57.104 | 1.00th=[ 334], 5.00th=[ 400], 10.00th=[ 453], 20.00th=[ 494], 00:42:57.104 | 30.00th=[ 537], 40.00th=[ 570], 50.00th=[ 586], 60.00th=[ 594], 00:42:57.104 | 70.00th=[ 627], 80.00th=[ 668], 90.00th=[ 701], 95.00th=[ 725], 00:42:57.104 | 99.00th=[ 775], 99.50th=[ 799], 99.90th=[ 857], 99.95th=[ 857], 00:42:57.104 | 99.99th=[ 857] 00:42:57.104 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:42:57.104 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:42:57.104 lat (usec) : 250=0.57%, 500=20.57%, 750=72.83%, 1000=2.83% 00:42:57.104 lat (msec) : 50=3.21% 00:42:57.104 cpu : usr=0.49%, sys=1.55%, ctx=530, majf=0, minf=1 00:42:57.104 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:57.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.104 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:42:57.104 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:57.104 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:57.104 00:42:57.104 Run status group 0 (all jobs): 00:42:57.104 READ: bw=69.8KiB/s (71.5kB/s), 69.8KiB/s-69.8KiB/s (71.5kB/s-71.5kB/s), io=72.0KiB (73.7kB), run=1031-1031msec 00:42:57.104 WRITE: bw=1986KiB/s (2034kB/s), 1986KiB/s-1986KiB/s (2034kB/s-2034kB/s), io=2048KiB (2097kB), run=1031-1031msec 00:42:57.104 00:42:57.104 Disk stats (read/write): 00:42:57.104 nvme0n1: ios=64/512, merge=0/0, ticks=617/292, in_queue=909, util=93.69% 00:42:57.104 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:42:57.104 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:42:57.104 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:42:57.104 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:42:57.104 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:42:57.104 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:57.104 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:42:57.104 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:57.104 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:42:57.104 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:42:57.104 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 
00:42:57.104 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # nvmfcleanup 00:42:57.104 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:42:57.104 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:57.104 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:42:57.104 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:57.104 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:57.365 rmmod nvme_tcp 00:42:57.365 rmmod nvme_fabrics 00:42:57.365 rmmod nvme_keyring 00:42:57.365 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:57.365 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:42:57.365 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:42:57.365 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@513 -- # '[' -n 3487379 ']' 00:42:57.365 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@514 -- # killprocess 3487379 00:42:57.365 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 3487379 ']' 00:42:57.365 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 3487379 00:42:57.365 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:42:57.365 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:57.365 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o 
comm= 3487379 00:42:57.365 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:42:57.365 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:42:57.365 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3487379' 00:42:57.365 killing process with pid 3487379 00:42:57.365 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 3487379 00:42:57.365 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 3487379 00:42:57.626 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:42:57.626 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:42:57.626 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:42:57.626 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:42:57.626 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-save 00:42:57.626 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:42:57.626 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-restore 00:42:57.626 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:57.626 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:57.626 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:57.626 16:00:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:57.626 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:59.541 16:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:59.541 00:42:59.541 real 0m15.613s 00:42:59.541 user 0m38.651s 00:42:59.541 sys 0m7.391s 00:42:59.541 16:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:59.541 16:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:59.541 ************************************ 00:42:59.541 END TEST nvmf_nmic 00:42:59.541 ************************************ 00:42:59.541 16:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:42:59.541 16:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:42:59.541 16:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:59.541 16:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:59.802 ************************************ 00:42:59.802 START TEST nvmf_fio_target 00:42:59.802 ************************************ 00:42:59.802 16:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:42:59.802 * Looking for test storage... 
00:42:59.802 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:59.802 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:42:59.802 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:42:59.802 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:42:59.802 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:42:59.802 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:59.802 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:59.802 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:59.802 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:42:59.802 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:42:59.802 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:42:59.802 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:42:59.802 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:42:59.802 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:42:59.802 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:42:59.802 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:42:59.802 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:42:59.802 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:42:59.802 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:59.802 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:59.802 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:42:59.802 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:42:59.802 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:59.802 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:42:59.802 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:42:59.802 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:42:59.802 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:42:59.802 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:59.802 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:42:59.802 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:42:59.802 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:59.802 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:59.802 
16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:42:59.802 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:59.802 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:42:59.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:59.802 --rc genhtml_branch_coverage=1 00:42:59.802 --rc genhtml_function_coverage=1 00:42:59.802 --rc genhtml_legend=1 00:42:59.802 --rc geninfo_all_blocks=1 00:42:59.802 --rc geninfo_unexecuted_blocks=1 00:42:59.802 00:42:59.802 ' 00:42:59.802 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:42:59.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:59.802 --rc genhtml_branch_coverage=1 00:42:59.802 --rc genhtml_function_coverage=1 00:42:59.802 --rc genhtml_legend=1 00:42:59.802 --rc geninfo_all_blocks=1 00:42:59.802 --rc geninfo_unexecuted_blocks=1 00:42:59.802 00:42:59.802 ' 00:42:59.802 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:42:59.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:59.802 --rc genhtml_branch_coverage=1 00:42:59.802 --rc genhtml_function_coverage=1 00:42:59.802 --rc genhtml_legend=1 00:42:59.802 --rc geninfo_all_blocks=1 00:42:59.802 --rc geninfo_unexecuted_blocks=1 00:42:59.802 00:42:59.802 ' 00:42:59.803 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:42:59.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:59.803 --rc genhtml_branch_coverage=1 00:42:59.803 --rc genhtml_function_coverage=1 00:42:59.803 --rc genhtml_legend=1 00:42:59.803 --rc geninfo_all_blocks=1 
00:42:59.803 --rc geninfo_unexecuted_blocks=1 00:42:59.803 00:42:59.803 ' 00:42:59.803 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:59.803 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:42:59.803 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:59.803 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:59.803 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:59.803 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:59.803 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:59.803 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:59.803 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:59.803 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:59.803 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:59.803 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:59.803 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:59.803 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:59.803 
16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:59.803 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:59.803 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:59.803 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:59.803 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:59.803 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:42:59.803 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:59.803 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:59.803 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:59.803 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:59.803 16:00:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:59.803 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:59.803 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:42:59.803 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:59.803 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:42:59.803 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:59.803 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:59.803 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:59.803 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:59.803 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:59.803 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:59.803 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:59.803 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:59.803 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:59.803 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:59.803 
16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:42:59.803 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:42:59.803 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:42:59.803 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:42:59.803 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:42:59.803 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:59.803 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:42:59.803 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:42:59.803 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:42:59.803 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:59.803 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:59.803 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:59.803 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:42:59.803 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:42:59.803 16:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:42:59.803 16:00:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:43:07.944 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:07.944 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:43:07.944 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:07.944 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:07.944 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:07.944 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:07.944 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:07.944 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:43:07.944 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:07.944 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:43:07.944 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:43:07.944 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:43:07.944 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:43:07.945 16:00:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ e810 == mlx5 
]] 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:43:07.945 Found 0000:31:00.0 (0x8086 - 0x159b) 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:43:07.945 Found 0000:31:00.1 (0x8086 - 0x159b) 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:43:07.945 16:00:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:43:07.945 Found net devices under 0000:31:00.0: cvl_0_0 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 
-- # net_devs+=("${pci_net_devs[@]}") 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:43:07.945 Found net devices under 0000:31:00.1: cvl_0_1 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # is_hw=yes 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:43:07.945 16:00:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip 
link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:07.945 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:07.945 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.633 ms 00:43:07.945 00:43:07.945 --- 10.0.0.2 ping statistics --- 00:43:07.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:07.945 rtt min/avg/max/mdev = 0.633/0.633/0.633/0.000 ms 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:07.945 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:07.945 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:43:07.945 00:43:07.945 --- 10.0.0.1 ping statistics --- 00:43:07.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:07.945 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # return 0 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:43:07.945 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:43:07.946 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:07.946 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:43:07.946 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:43:07.946 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:43:07.946 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:43:07.946 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:07.946 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:43:07.946 16:00:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@505 -- # nvmfpid=3492869 00:43:07.946 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@506 -- # waitforlisten 3492869 00:43:07.946 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:43:07.946 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 3492869 ']' 00:43:07.946 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:07.946 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:07.946 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:07.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:07.946 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:07.946 16:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:43:07.946 [2024-10-01 16:00:46.681766] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:43:07.946 [2024-10-01 16:00:46.682739] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
00:43:07.946 [2024-10-01 16:00:46.682777] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:07.946 [2024-10-01 16:00:46.719497] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:43:07.946 [2024-10-01 16:00:46.765674] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:07.946 [2024-10-01 16:00:46.798367] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:07.946 [2024-10-01 16:00:46.798404] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:07.946 [2024-10-01 16:00:46.798415] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:07.946 [2024-10-01 16:00:46.798424] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:07.946 [2024-10-01 16:00:46.798431] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:07.946 [2024-10-01 16:00:46.798576] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:43:07.946 [2024-10-01 16:00:46.798731] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:43:07.946 [2024-10-01 16:00:46.798884] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:43:07.946 [2024-10-01 16:00:46.798884] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:43:07.946 [2024-10-01 16:00:46.861297] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:43:07.946 [2024-10-01 16:00:46.862572] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:43:07.946 [2024-10-01 16:00:46.862782] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:43:07.946 [2024-10-01 16:00:46.863410] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:43:07.946 [2024-10-01 16:00:46.863456] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:43:08.207 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:08.207 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:43:08.207 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:43:08.207 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:08.207 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:43:08.207 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:08.207 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:43:08.467 [2024-10-01 16:00:47.675830] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:08.467 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:08.729 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:43:08.729 16:00:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:08.729 16:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:43:08.729 16:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:09.031 16:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:43:09.031 16:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:09.292 16:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:43:09.292 16:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:43:09.292 16:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:09.552 16:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:43:09.552 16:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:09.812 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:43:09.812 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:09.812 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:43:09.812 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:43:10.073 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:43:10.334 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:43:10.334 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:43:10.334 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:43:10.334 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:43:10.595 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:10.856 [2024-10-01 16:00:50.091914] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:10.856 16:00:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:43:11.117 16:00:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:43:11.117 16:00:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:43:11.688 16:00:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:43:11.688 16:00:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:43:11.688 16:00:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:43:11.688 16:00:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:43:11.688 16:00:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:43:11.688 16:00:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:43:13.603 16:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:43:13.603 16:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:43:13.603 16:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:43:13.603 16:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:43:13.603 16:00:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:43:13.603 16:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:43:13.603 16:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:43:13.603 [global] 00:43:13.603 thread=1 00:43:13.603 invalidate=1 00:43:13.603 rw=write 00:43:13.603 time_based=1 00:43:13.603 runtime=1 00:43:13.603 ioengine=libaio 00:43:13.603 direct=1 00:43:13.603 bs=4096 00:43:13.603 iodepth=1 00:43:13.603 norandommap=0 00:43:13.603 numjobs=1 00:43:13.603 00:43:13.603 verify_dump=1 00:43:13.603 verify_backlog=512 00:43:13.603 verify_state_save=0 00:43:13.603 do_verify=1 00:43:13.603 verify=crc32c-intel 00:43:13.603 [job0] 00:43:13.603 filename=/dev/nvme0n1 00:43:13.603 [job1] 00:43:13.603 filename=/dev/nvme0n2 00:43:13.603 [job2] 00:43:13.603 filename=/dev/nvme0n3 00:43:13.603 [job3] 00:43:13.603 filename=/dev/nvme0n4 00:43:13.885 Could not set queue depth (nvme0n1) 00:43:13.885 Could not set queue depth (nvme0n2) 00:43:13.885 Could not set queue depth (nvme0n3) 00:43:13.885 Could not set queue depth (nvme0n4) 00:43:14.151 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:14.151 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:14.151 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:14.151 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:14.151 fio-3.35 00:43:14.151 Starting 4 threads 00:43:15.551 00:43:15.551 job0: (groupid=0, jobs=1): err= 0: pid=3494240: Tue Oct 1 16:00:54 2024 00:43:15.551 read: IOPS=640, 
BW=2561KiB/s (2623kB/s)(2564KiB/1001msec) 00:43:15.551 slat (nsec): min=6796, max=44068, avg=22756.16, stdev=7849.74 00:43:15.551 clat (usec): min=543, max=939, avg=776.28, stdev=60.17 00:43:15.551 lat (usec): min=553, max=965, avg=799.04, stdev=62.35 00:43:15.551 clat percentiles (usec): 00:43:15.551 | 1.00th=[ 603], 5.00th=[ 668], 10.00th=[ 685], 20.00th=[ 725], 00:43:15.551 | 30.00th=[ 758], 40.00th=[ 775], 50.00th=[ 791], 60.00th=[ 799], 00:43:15.551 | 70.00th=[ 807], 80.00th=[ 824], 90.00th=[ 840], 95.00th=[ 857], 00:43:15.551 | 99.00th=[ 898], 99.50th=[ 906], 99.90th=[ 938], 99.95th=[ 938], 00:43:15.551 | 99.99th=[ 938] 00:43:15.551 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:43:15.551 slat (usec): min=9, max=111, avg=27.77, stdev=10.73 00:43:15.551 clat (usec): min=200, max=709, avg=437.72, stdev=77.32 00:43:15.551 lat (usec): min=233, max=743, avg=465.48, stdev=82.06 00:43:15.551 clat percentiles (usec): 00:43:15.551 | 1.00th=[ 265], 5.00th=[ 310], 10.00th=[ 334], 20.00th=[ 363], 00:43:15.551 | 30.00th=[ 408], 40.00th=[ 429], 50.00th=[ 449], 60.00th=[ 461], 00:43:15.551 | 70.00th=[ 478], 80.00th=[ 494], 90.00th=[ 529], 95.00th=[ 562], 00:43:15.551 | 99.00th=[ 644], 99.50th=[ 676], 99.90th=[ 701], 99.95th=[ 709], 00:43:15.551 | 99.99th=[ 709] 00:43:15.551 bw ( KiB/s): min= 4096, max= 4096, per=33.70%, avg=4096.00, stdev= 0.00, samples=1 00:43:15.551 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:15.551 lat (usec) : 250=0.30%, 500=51.05%, 750=20.06%, 1000=28.59% 00:43:15.551 cpu : usr=1.70%, sys=4.90%, ctx=1666, majf=0, minf=1 00:43:15.551 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:15.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.551 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.551 issued rwts: total=641,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:15.551 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:43:15.551 job1: (groupid=0, jobs=1): err= 0: pid=3494241: Tue Oct 1 16:00:54 2024 00:43:15.551 read: IOPS=18, BW=75.2KiB/s (77.0kB/s)(76.0KiB/1011msec) 00:43:15.551 slat (nsec): min=25815, max=30166, avg=26435.42, stdev=939.78 00:43:15.551 clat (usec): min=40828, max=41172, avg=40970.94, stdev=92.96 00:43:15.551 lat (usec): min=40855, max=41198, avg=40997.37, stdev=93.05 00:43:15.551 clat percentiles (usec): 00:43:15.551 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:43:15.551 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:43:15.551 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:43:15.551 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:43:15.551 | 99.99th=[41157] 00:43:15.551 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:43:15.551 slat (nsec): min=9391, max=67733, avg=23631.16, stdev=12214.05 00:43:15.551 clat (usec): min=109, max=976, avg=424.23, stdev=147.62 00:43:15.551 lat (usec): min=119, max=1010, avg=447.86, stdev=153.23 00:43:15.551 clat percentiles (usec): 00:43:15.551 | 1.00th=[ 118], 5.00th=[ 125], 10.00th=[ 258], 20.00th=[ 326], 00:43:15.551 | 30.00th=[ 359], 40.00th=[ 400], 50.00th=[ 445], 60.00th=[ 465], 00:43:15.551 | 70.00th=[ 490], 80.00th=[ 519], 90.00th=[ 578], 95.00th=[ 668], 00:43:15.551 | 99.00th=[ 889], 99.50th=[ 898], 99.90th=[ 979], 99.95th=[ 979], 00:43:15.551 | 99.99th=[ 979] 00:43:15.551 bw ( KiB/s): min= 4096, max= 4096, per=33.70%, avg=4096.00, stdev= 0.00, samples=1 00:43:15.551 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:15.551 lat (usec) : 250=9.42%, 500=61.21%, 750=24.11%, 1000=1.69% 00:43:15.551 lat (msec) : 50=3.58% 00:43:15.551 cpu : usr=0.40%, sys=1.39%, ctx=531, majf=0, minf=2 00:43:15.551 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:15.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:43:15.551 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.551 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:15.551 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:15.551 job2: (groupid=0, jobs=1): err= 0: pid=3494246: Tue Oct 1 16:00:54 2024 00:43:15.551 read: IOPS=650, BW=2601KiB/s (2664kB/s)(2604KiB/1001msec) 00:43:15.551 slat (nsec): min=7148, max=45800, avg=22963.20, stdev=8385.18 00:43:15.551 clat (usec): min=458, max=982, avg=785.22, stdev=75.84 00:43:15.551 lat (usec): min=485, max=991, avg=808.18, stdev=78.29 00:43:15.551 clat percentiles (usec): 00:43:15.551 | 1.00th=[ 570], 5.00th=[ 644], 10.00th=[ 685], 20.00th=[ 717], 00:43:15.551 | 30.00th=[ 758], 40.00th=[ 791], 50.00th=[ 799], 60.00th=[ 816], 00:43:15.551 | 70.00th=[ 832], 80.00th=[ 848], 90.00th=[ 873], 95.00th=[ 889], 00:43:15.551 | 99.00th=[ 930], 99.50th=[ 938], 99.90th=[ 979], 99.95th=[ 979], 00:43:15.551 | 99.99th=[ 979] 00:43:15.551 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:43:15.551 slat (nsec): min=9968, max=53356, avg=29218.16, stdev=10638.01 00:43:15.551 clat (usec): min=149, max=753, avg=422.84, stdev=77.99 00:43:15.551 lat (usec): min=159, max=788, avg=452.06, stdev=81.31 00:43:15.551 clat percentiles (usec): 00:43:15.551 | 1.00th=[ 233], 5.00th=[ 285], 10.00th=[ 322], 20.00th=[ 351], 00:43:15.551 | 30.00th=[ 375], 40.00th=[ 420], 50.00th=[ 441], 60.00th=[ 453], 00:43:15.551 | 70.00th=[ 465], 80.00th=[ 482], 90.00th=[ 506], 95.00th=[ 529], 00:43:15.551 | 99.00th=[ 603], 99.50th=[ 627], 99.90th=[ 668], 99.95th=[ 750], 00:43:15.551 | 99.99th=[ 750] 00:43:15.551 bw ( KiB/s): min= 4096, max= 4096, per=33.70%, avg=4096.00, stdev= 0.00, samples=1 00:43:15.551 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:15.551 lat (usec) : 250=1.19%, 500=52.36%, 750=18.99%, 1000=27.46% 00:43:15.551 cpu : usr=2.80%, sys=4.10%, ctx=1679, majf=0, minf=1 00:43:15.551 IO 
depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:15.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.551 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.551 issued rwts: total=651,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:15.551 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:15.552 job3: (groupid=0, jobs=1): err= 0: pid=3494250: Tue Oct 1 16:00:54 2024 00:43:15.552 read: IOPS=15, BW=63.6KiB/s (65.1kB/s)(64.0KiB/1006msec) 00:43:15.552 slat (nsec): min=24703, max=25386, avg=25101.62, stdev=195.42 00:43:15.552 clat (usec): min=41080, max=42054, avg=41907.54, stdev=227.57 00:43:15.552 lat (usec): min=41105, max=42080, avg=41932.64, stdev=227.59 00:43:15.552 clat percentiles (usec): 00:43:15.552 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:43:15.552 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:43:15.552 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:43:15.552 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:43:15.552 | 99.99th=[42206] 00:43:15.552 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:43:15.552 slat (nsec): min=9929, max=68019, avg=28439.21, stdev=9485.35 00:43:15.552 clat (usec): min=250, max=913, avg=619.44, stdev=122.88 00:43:15.552 lat (usec): min=260, max=944, avg=647.87, stdev=127.36 00:43:15.552 clat percentiles (usec): 00:43:15.552 | 1.00th=[ 343], 5.00th=[ 416], 10.00th=[ 461], 20.00th=[ 494], 00:43:15.552 | 30.00th=[ 553], 40.00th=[ 603], 50.00th=[ 627], 60.00th=[ 668], 00:43:15.552 | 70.00th=[ 701], 80.00th=[ 725], 90.00th=[ 766], 95.00th=[ 799], 00:43:15.552 | 99.00th=[ 873], 99.50th=[ 898], 99.90th=[ 914], 99.95th=[ 914], 00:43:15.552 | 99.99th=[ 914] 00:43:15.552 bw ( KiB/s): min= 4096, max= 4096, per=33.70%, avg=4096.00, stdev= 0.00, samples=1 00:43:15.552 iops : min= 1024, max= 1024, avg=1024.00, 
stdev= 0.00, samples=1 00:43:15.552 lat (usec) : 500=21.40%, 750=61.55%, 1000=14.02% 00:43:15.552 lat (msec) : 50=3.03% 00:43:15.552 cpu : usr=0.30%, sys=1.79%, ctx=528, majf=0, minf=1 00:43:15.552 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:15.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.552 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.552 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:15.552 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:15.552 00:43:15.552 Run status group 0 (all jobs): 00:43:15.552 READ: bw=5250KiB/s (5376kB/s), 63.6KiB/s-2601KiB/s (65.1kB/s-2664kB/s), io=5308KiB (5435kB), run=1001-1011msec 00:43:15.552 WRITE: bw=11.9MiB/s (12.4MB/s), 2026KiB/s-4092KiB/s (2074kB/s-4190kB/s), io=12.0MiB (12.6MB), run=1001-1011msec 00:43:15.552 00:43:15.552 Disk stats (read/write): 00:43:15.552 nvme0n1: ios=562/887, merge=0/0, ticks=529/379, in_queue=908, util=91.38% 00:43:15.552 nvme0n2: ios=52/512, merge=0/0, ticks=634/207, in_queue=841, util=87.84% 00:43:15.552 nvme0n3: ios=534/900, merge=0/0, ticks=1315/370, in_queue=1685, util=96.72% 00:43:15.552 nvme0n4: ios=11/512, merge=0/0, ticks=461/315, in_queue=776, util=89.38% 00:43:15.552 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:43:15.552 [global] 00:43:15.552 thread=1 00:43:15.552 invalidate=1 00:43:15.552 rw=randwrite 00:43:15.552 time_based=1 00:43:15.552 runtime=1 00:43:15.552 ioengine=libaio 00:43:15.552 direct=1 00:43:15.552 bs=4096 00:43:15.552 iodepth=1 00:43:15.552 norandommap=0 00:43:15.552 numjobs=1 00:43:15.552 00:43:15.552 verify_dump=1 00:43:15.552 verify_backlog=512 00:43:15.552 verify_state_save=0 00:43:15.552 do_verify=1 00:43:15.552 verify=crc32c-intel 00:43:15.552 [job0] 00:43:15.552 
filename=/dev/nvme0n1 00:43:15.552 [job1] 00:43:15.552 filename=/dev/nvme0n2 00:43:15.552 [job2] 00:43:15.552 filename=/dev/nvme0n3 00:43:15.552 [job3] 00:43:15.552 filename=/dev/nvme0n4 00:43:15.552 Could not set queue depth (nvme0n1) 00:43:15.552 Could not set queue depth (nvme0n2) 00:43:15.552 Could not set queue depth (nvme0n3) 00:43:15.552 Could not set queue depth (nvme0n4) 00:43:15.816 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:15.816 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:15.816 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:15.816 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:15.816 fio-3.35 00:43:15.816 Starting 4 threads 00:43:17.198 00:43:17.198 job0: (groupid=0, jobs=1): err= 0: pid=3494763: Tue Oct 1 16:00:56 2024 00:43:17.198 read: IOPS=18, BW=75.0KiB/s (76.8kB/s)(76.0KiB/1013msec) 00:43:17.198 slat (nsec): min=25566, max=26033, avg=25750.11, stdev=100.80 00:43:17.198 clat (usec): min=40766, max=42040, avg=41083.22, stdev=320.76 00:43:17.198 lat (usec): min=40792, max=42066, avg=41108.97, stdev=320.76 00:43:17.198 clat percentiles (usec): 00:43:17.198 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:43:17.198 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:43:17.198 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:43:17.198 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:43:17.198 | 99.99th=[42206] 00:43:17.198 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:43:17.198 slat (nsec): min=9258, max=69473, avg=25152.77, stdev=10928.09 00:43:17.198 clat (usec): min=205, max=635, avg=421.78, stdev=79.53 00:43:17.198 lat (usec): min=215, max=652, avg=446.93, 
stdev=85.33 00:43:17.198 clat percentiles (usec): 00:43:17.198 | 1.00th=[ 269], 5.00th=[ 285], 10.00th=[ 302], 20.00th=[ 347], 00:43:17.198 | 30.00th=[ 375], 40.00th=[ 400], 50.00th=[ 429], 60.00th=[ 453], 00:43:17.198 | 70.00th=[ 478], 80.00th=[ 494], 90.00th=[ 519], 95.00th=[ 537], 00:43:17.198 | 99.00th=[ 594], 99.50th=[ 619], 99.90th=[ 635], 99.95th=[ 635], 00:43:17.198 | 99.99th=[ 635] 00:43:17.198 bw ( KiB/s): min= 4096, max= 4096, per=41.52%, avg=4096.00, stdev= 0.00, samples=1 00:43:17.198 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:17.198 lat (usec) : 250=0.56%, 500=79.85%, 750=16.01% 00:43:17.198 lat (msec) : 50=3.58% 00:43:17.198 cpu : usr=0.59%, sys=1.38%, ctx=531, majf=0, minf=1 00:43:17.198 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:17.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:17.198 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:17.198 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:17.198 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:17.198 job1: (groupid=0, jobs=1): err= 0: pid=3494764: Tue Oct 1 16:00:56 2024 00:43:17.198 read: IOPS=689, BW=2757KiB/s (2823kB/s)(2760KiB/1001msec) 00:43:17.198 slat (nsec): min=6677, max=58450, avg=22450.95, stdev=8136.79 00:43:17.198 clat (usec): min=362, max=1031, avg=755.23, stdev=85.88 00:43:17.198 lat (usec): min=369, max=1056, avg=777.68, stdev=88.65 00:43:17.198 clat percentiles (usec): 00:43:17.198 | 1.00th=[ 486], 5.00th=[ 594], 10.00th=[ 652], 20.00th=[ 685], 00:43:17.198 | 30.00th=[ 742], 40.00th=[ 766], 50.00th=[ 775], 60.00th=[ 791], 00:43:17.198 | 70.00th=[ 799], 80.00th=[ 816], 90.00th=[ 840], 95.00th=[ 865], 00:43:17.198 | 99.00th=[ 922], 99.50th=[ 930], 99.90th=[ 1029], 99.95th=[ 1029], 00:43:17.198 | 99.99th=[ 1029] 00:43:17.198 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:43:17.198 slat (nsec): 
min=9167, max=64225, avg=25504.69, stdev=10249.99 00:43:17.198 clat (usec): min=188, max=611, avg=416.14, stdev=68.84 00:43:17.198 lat (usec): min=199, max=651, avg=441.64, stdev=74.14 00:43:17.198 clat percentiles (usec): 00:43:17.198 | 1.00th=[ 253], 5.00th=[ 285], 10.00th=[ 322], 20.00th=[ 347], 00:43:17.198 | 30.00th=[ 379], 40.00th=[ 416], 50.00th=[ 433], 60.00th=[ 449], 00:43:17.198 | 70.00th=[ 461], 80.00th=[ 474], 90.00th=[ 490], 95.00th=[ 502], 00:43:17.198 | 99.00th=[ 537], 99.50th=[ 562], 99.90th=[ 611], 99.95th=[ 611], 00:43:17.198 | 99.99th=[ 611] 00:43:17.198 bw ( KiB/s): min= 4096, max= 4096, per=41.52%, avg=4096.00, stdev= 0.00, samples=1 00:43:17.198 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:17.198 lat (usec) : 250=0.53%, 500=56.18%, 750=16.16%, 1000=27.07% 00:43:17.198 lat (msec) : 2=0.06% 00:43:17.198 cpu : usr=1.70%, sys=4.90%, ctx=1714, majf=0, minf=1 00:43:17.199 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:17.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:17.199 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:17.199 issued rwts: total=690,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:17.199 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:17.199 job2: (groupid=0, jobs=1): err= 0: pid=3494765: Tue Oct 1 16:00:56 2024 00:43:17.199 read: IOPS=16, BW=66.5KiB/s (68.1kB/s)(68.0KiB/1023msec) 00:43:17.199 slat (nsec): min=25354, max=26286, avg=25662.47, stdev=220.54 00:43:17.199 clat (usec): min=1270, max=42121, avg=39512.79, stdev=9857.59 00:43:17.199 lat (usec): min=1295, max=42147, avg=39538.45, stdev=9857.61 00:43:17.199 clat percentiles (usec): 00:43:17.199 | 1.00th=[ 1270], 5.00th=[ 1270], 10.00th=[41157], 20.00th=[41681], 00:43:17.199 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:43:17.199 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:43:17.199 | 
99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:43:17.199 | 99.99th=[42206] 00:43:17.199 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:43:17.199 slat (nsec): min=9622, max=51295, avg=29647.29, stdev=7400.81 00:43:17.199 clat (usec): min=289, max=1005, avg=647.97, stdev=120.70 00:43:17.199 lat (usec): min=300, max=1036, avg=677.61, stdev=123.24 00:43:17.199 clat percentiles (usec): 00:43:17.199 | 1.00th=[ 371], 5.00th=[ 445], 10.00th=[ 482], 20.00th=[ 545], 00:43:17.199 | 30.00th=[ 594], 40.00th=[ 619], 50.00th=[ 652], 60.00th=[ 685], 00:43:17.199 | 70.00th=[ 717], 80.00th=[ 750], 90.00th=[ 799], 95.00th=[ 832], 00:43:17.199 | 99.00th=[ 922], 99.50th=[ 938], 99.90th=[ 1004], 99.95th=[ 1004], 00:43:17.199 | 99.99th=[ 1004] 00:43:17.199 bw ( KiB/s): min= 4096, max= 4096, per=41.52%, avg=4096.00, stdev= 0.00, samples=1 00:43:17.199 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:17.199 lat (usec) : 500=12.85%, 750=64.27%, 1000=19.47% 00:43:17.199 lat (msec) : 2=0.38%, 50=3.02% 00:43:17.199 cpu : usr=1.57%, sys=0.78%, ctx=529, majf=0, minf=1 00:43:17.199 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:17.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:17.199 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:17.199 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:17.199 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:17.199 job3: (groupid=0, jobs=1): err= 0: pid=3494766: Tue Oct 1 16:00:56 2024 00:43:17.199 read: IOPS=16, BW=65.5KiB/s (67.1kB/s)(68.0KiB/1038msec) 00:43:17.199 slat (nsec): min=24957, max=25954, avg=25223.59, stdev=223.15 00:43:17.199 clat (usec): min=41108, max=42041, avg=41915.80, stdev=211.11 00:43:17.199 lat (usec): min=41134, max=42067, avg=41941.02, stdev=211.14 00:43:17.199 clat percentiles (usec): 00:43:17.199 | 1.00th=[41157], 5.00th=[41157], 
10.00th=[41681], 20.00th=[41681], 00:43:17.199 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:43:17.199 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:43:17.199 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:43:17.199 | 99.99th=[42206] 00:43:17.199 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:43:17.199 slat (nsec): min=9753, max=62111, avg=29147.52, stdev=7909.07 00:43:17.199 clat (usec): min=165, max=984, avg=598.25, stdev=120.24 00:43:17.199 lat (usec): min=176, max=1015, avg=627.40, stdev=122.26 00:43:17.199 clat percentiles (usec): 00:43:17.199 | 1.00th=[ 330], 5.00th=[ 388], 10.00th=[ 453], 20.00th=[ 498], 00:43:17.199 | 30.00th=[ 537], 40.00th=[ 578], 50.00th=[ 603], 60.00th=[ 635], 00:43:17.199 | 70.00th=[ 660], 80.00th=[ 693], 90.00th=[ 750], 95.00th=[ 791], 00:43:17.199 | 99.00th=[ 857], 99.50th=[ 922], 99.90th=[ 988], 99.95th=[ 988], 00:43:17.199 | 99.99th=[ 988] 00:43:17.199 bw ( KiB/s): min= 4096, max= 4096, per=41.52%, avg=4096.00, stdev= 0.00, samples=1 00:43:17.199 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:17.199 lat (usec) : 250=0.38%, 500=19.66%, 750=67.11%, 1000=9.64% 00:43:17.199 lat (msec) : 50=3.21% 00:43:17.199 cpu : usr=0.77%, sys=1.45%, ctx=529, majf=0, minf=1 00:43:17.199 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:17.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:17.199 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:17.199 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:17.199 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:17.199 00:43:17.199 Run status group 0 (all jobs): 00:43:17.199 READ: bw=2863KiB/s (2932kB/s), 65.5KiB/s-2757KiB/s (67.1kB/s-2823kB/s), io=2972KiB (3043kB), run=1001-1038msec 00:43:17.199 WRITE: bw=9865KiB/s (10.1MB/s), 1973KiB/s-4092KiB/s 
(2020kB/s-4190kB/s), io=10.0MiB (10.5MB), run=1001-1038msec 00:43:17.199 00:43:17.199 Disk stats (read/write): 00:43:17.199 nvme0n1: ios=64/512, merge=0/0, ticks=625/215, in_queue=840, util=87.37% 00:43:17.199 nvme0n2: ios=551/966, merge=0/0, ticks=449/402, in_queue=851, util=88.69% 00:43:17.199 nvme0n3: ios=12/512, merge=0/0, ticks=462/320, in_queue=782, util=88.40% 00:43:17.199 nvme0n4: ios=12/512, merge=0/0, ticks=503/294, in_queue=797, util=89.53% 00:43:17.199 16:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:43:17.199 [global] 00:43:17.199 thread=1 00:43:17.199 invalidate=1 00:43:17.199 rw=write 00:43:17.199 time_based=1 00:43:17.199 runtime=1 00:43:17.199 ioengine=libaio 00:43:17.199 direct=1 00:43:17.199 bs=4096 00:43:17.199 iodepth=128 00:43:17.199 norandommap=0 00:43:17.199 numjobs=1 00:43:17.199 00:43:17.199 verify_dump=1 00:43:17.199 verify_backlog=512 00:43:17.199 verify_state_save=0 00:43:17.199 do_verify=1 00:43:17.199 verify=crc32c-intel 00:43:17.199 [job0] 00:43:17.199 filename=/dev/nvme0n1 00:43:17.199 [job1] 00:43:17.199 filename=/dev/nvme0n2 00:43:17.199 [job2] 00:43:17.199 filename=/dev/nvme0n3 00:43:17.199 [job3] 00:43:17.199 filename=/dev/nvme0n4 00:43:17.199 Could not set queue depth (nvme0n1) 00:43:17.199 Could not set queue depth (nvme0n2) 00:43:17.199 Could not set queue depth (nvme0n3) 00:43:17.199 Could not set queue depth (nvme0n4) 00:43:17.469 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:17.469 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:17.469 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:17.469 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:43:17.469 fio-3.35 00:43:17.469 Starting 4 threads 00:43:18.870 00:43:18.870 job0: (groupid=0, jobs=1): err= 0: pid=3495288: Tue Oct 1 16:00:58 2024 00:43:18.870 read: IOPS=6616, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1006msec) 00:43:18.870 slat (nsec): min=936, max=9079.0k, avg=68953.41, stdev=490080.97 00:43:18.870 clat (usec): min=2675, max=23685, avg=9452.29, stdev=2834.64 00:43:18.870 lat (usec): min=2684, max=23687, avg=9521.24, stdev=2858.31 00:43:18.870 clat percentiles (usec): 00:43:18.870 | 1.00th=[ 4178], 5.00th=[ 5932], 10.00th=[ 6456], 20.00th=[ 7242], 00:43:18.870 | 30.00th=[ 7767], 40.00th=[ 8356], 50.00th=[ 8717], 60.00th=[ 9503], 00:43:18.870 | 70.00th=[10683], 80.00th=[11469], 90.00th=[13042], 95.00th=[14877], 00:43:18.870 | 99.00th=[18220], 99.50th=[20317], 99.90th=[23462], 99.95th=[23725], 00:43:18.870 | 99.99th=[23725] 00:43:18.870 write: IOPS=6773, BW=26.5MiB/s (27.7MB/s)(26.6MiB/1006msec); 0 zone resets 00:43:18.870 slat (nsec): min=1586, max=13585k, avg=68942.07, stdev=427110.02 00:43:18.870 clat (usec): min=603, max=23685, avg=9502.31, stdev=4473.91 00:43:18.870 lat (usec): min=965, max=23688, avg=9571.25, stdev=4498.74 00:43:18.870 clat percentiles (usec): 00:43:18.870 | 1.00th=[ 1958], 5.00th=[ 3851], 10.00th=[ 5080], 20.00th=[ 5997], 00:43:18.870 | 30.00th=[ 6783], 40.00th=[ 7570], 50.00th=[ 8225], 60.00th=[ 9503], 00:43:18.870 | 70.00th=[10683], 80.00th=[12387], 90.00th=[16581], 95.00th=[19268], 00:43:18.870 | 99.00th=[21890], 99.50th=[22676], 99.90th=[22938], 99.95th=[22938], 00:43:18.870 | 99.99th=[23725] 00:43:18.870 bw ( KiB/s): min=26288, max=27240, per=26.52%, avg=26764.00, stdev=673.17, samples=2 00:43:18.870 iops : min= 6572, max= 6810, avg=6691.00, stdev=168.29, samples=2 00:43:18.870 lat (usec) : 750=0.01%, 1000=0.08% 00:43:18.870 lat (msec) : 2=0.53%, 4=2.55%, 10=59.62%, 20=35.30%, 50=1.92% 00:43:18.870 cpu : usr=4.88%, sys=7.06%, ctx=587, majf=0, minf=1 00:43:18.870 IO depths : 1=0.1%, 2=0.1%, 
4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:43:18.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:18.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:18.871 issued rwts: total=6656,6814,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:18.871 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:18.871 job1: (groupid=0, jobs=1): err= 0: pid=3495289: Tue Oct 1 16:00:58 2024 00:43:18.871 read: IOPS=4778, BW=18.7MiB/s (19.6MB/s)(18.7MiB/1004msec) 00:43:18.871 slat (nsec): min=898, max=10821k, avg=112363.40, stdev=695333.62 00:43:18.871 clat (usec): min=2677, max=43675, avg=14240.61, stdev=8723.80 00:43:18.871 lat (usec): min=4814, max=43704, avg=14352.97, stdev=8790.42 00:43:18.871 clat percentiles (usec): 00:43:18.871 | 1.00th=[ 5407], 5.00th=[ 6259], 10.00th=[ 7242], 20.00th=[ 7701], 00:43:18.871 | 30.00th=[ 8160], 40.00th=[ 9503], 50.00th=[10552], 60.00th=[12387], 00:43:18.871 | 70.00th=[15008], 80.00th=[19792], 90.00th=[29754], 95.00th=[33162], 00:43:18.871 | 99.00th=[39584], 99.50th=[39584], 99.90th=[42206], 99.95th=[42206], 00:43:18.871 | 99.99th=[43779] 00:43:18.871 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:43:18.871 slat (nsec): min=1554, max=13165k, avg=85091.86, stdev=573429.43 00:43:18.871 clat (usec): min=1463, max=46143, avg=11481.70, stdev=6779.62 00:43:18.871 lat (usec): min=1473, max=46176, avg=11566.79, stdev=6827.74 00:43:18.871 clat percentiles (usec): 00:43:18.871 | 1.00th=[ 4178], 5.00th=[ 5211], 10.00th=[ 6456], 20.00th=[ 7177], 00:43:18.871 | 30.00th=[ 7373], 40.00th=[ 8291], 50.00th=[ 9110], 60.00th=[10683], 00:43:18.871 | 70.00th=[12125], 80.00th=[14222], 90.00th=[18220], 95.00th=[28181], 00:43:18.871 | 99.00th=[36439], 99.50th=[38011], 99.90th=[38011], 99.95th=[40109], 00:43:18.871 | 99.99th=[46400] 00:43:18.871 bw ( KiB/s): min=16384, max=24576, per=20.29%, avg=20480.00, stdev=5792.62, samples=2 00:43:18.871 iops : min= 4096, max= 
6144, avg=5120.00, stdev=1448.15, samples=2 00:43:18.871 lat (msec) : 2=0.15%, 4=0.21%, 10=50.70%, 20=34.81%, 50=14.14% 00:43:18.871 cpu : usr=2.09%, sys=5.18%, ctx=489, majf=0, minf=2 00:43:18.871 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:43:18.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:18.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:18.871 issued rwts: total=4798,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:18.871 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:18.871 job2: (groupid=0, jobs=1): err= 0: pid=3495290: Tue Oct 1 16:00:58 2024 00:43:18.871 read: IOPS=5955, BW=23.3MiB/s (24.4MB/s)(23.8MiB/1021msec) 00:43:18.871 slat (nsec): min=927, max=8566.4k, avg=84738.44, stdev=484201.43 00:43:18.871 clat (usec): min=2595, max=46667, avg=11300.05, stdev=5449.80 00:43:18.871 lat (usec): min=2597, max=49131, avg=11384.79, stdev=5479.45 00:43:18.871 clat percentiles (usec): 00:43:18.871 | 1.00th=[ 4293], 5.00th=[ 6128], 10.00th=[ 7373], 20.00th=[ 7963], 00:43:18.871 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 9503], 60.00th=[10290], 00:43:18.871 | 70.00th=[11863], 80.00th=[13698], 90.00th=[18482], 95.00th=[21365], 00:43:18.871 | 99.00th=[35390], 99.50th=[38011], 99.90th=[38536], 99.95th=[46400], 00:43:18.871 | 99.99th=[46924] 00:43:18.871 write: IOPS=6017, BW=23.5MiB/s (24.6MB/s)(24.0MiB/1021msec); 0 zone resets 00:43:18.871 slat (nsec): min=1560, max=8811.7k, avg=73091.89, stdev=426620.44 00:43:18.871 clat (usec): min=1493, max=32893, avg=9859.09, stdev=3515.58 00:43:18.871 lat (usec): min=1496, max=32903, avg=9932.18, stdev=3549.21 00:43:18.871 clat percentiles (usec): 00:43:18.871 | 1.00th=[ 4113], 5.00th=[ 6718], 10.00th=[ 7701], 20.00th=[ 8225], 00:43:18.871 | 30.00th=[ 8455], 40.00th=[ 8455], 50.00th=[ 8717], 60.00th=[ 9110], 00:43:18.871 | 70.00th=[ 9634], 80.00th=[10814], 90.00th=[13566], 95.00th=[17695], 00:43:18.871 | 
99.00th=[23987], 99.50th=[25560], 99.90th=[26346], 99.95th=[26346], 00:43:18.871 | 99.99th=[32900] 00:43:18.871 bw ( KiB/s): min=24576, max=24576, per=24.35%, avg=24576.00, stdev= 0.00, samples=2 00:43:18.871 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:43:18.871 lat (msec) : 2=0.18%, 4=0.70%, 10=64.48%, 20=30.02%, 50=4.62% 00:43:18.871 cpu : usr=3.33%, sys=4.12%, ctx=631, majf=0, minf=1 00:43:18.871 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:43:18.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:18.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:18.871 issued rwts: total=6081,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:18.871 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:18.871 job3: (groupid=0, jobs=1): err= 0: pid=3495291: Tue Oct 1 16:00:58 2024 00:43:18.871 read: IOPS=7363, BW=28.8MiB/s (30.2MB/s)(29.0MiB/1007msec) 00:43:18.871 slat (nsec): min=939, max=7285.2k, avg=59213.62, stdev=461333.70 00:43:18.871 clat (usec): min=3154, max=20810, avg=8603.83, stdev=2186.95 00:43:18.871 lat (usec): min=3156, max=20814, avg=8663.04, stdev=2216.80 00:43:18.871 clat percentiles (usec): 00:43:18.871 | 1.00th=[ 4490], 5.00th=[ 5473], 10.00th=[ 6128], 20.00th=[ 6849], 00:43:18.871 | 30.00th=[ 7635], 40.00th=[ 7963], 50.00th=[ 8225], 60.00th=[ 8586], 00:43:18.871 | 70.00th=[ 9372], 80.00th=[10290], 90.00th=[11731], 95.00th=[13042], 00:43:18.871 | 99.00th=[14746], 99.50th=[16909], 99.90th=[17695], 99.95th=[17695], 00:43:18.871 | 99.99th=[20841] 00:43:18.871 write: IOPS=7626, BW=29.8MiB/s (31.2MB/s)(30.0MiB/1007msec); 0 zone resets 00:43:18.871 slat (nsec): min=1609, max=11390k, avg=59808.10, stdev=436647.92 00:43:18.871 clat (usec): min=586, max=41328, avg=8329.75, stdev=3734.18 00:43:18.871 lat (usec): min=596, max=41330, avg=8389.55, stdev=3746.09 00:43:18.871 clat percentiles (usec): 00:43:18.871 | 1.00th=[ 3195], 5.00th=[ 4359], 
10.00th=[ 5145], 20.00th=[ 5997], 00:43:18.871 | 30.00th=[ 6783], 40.00th=[ 7373], 50.00th=[ 7701], 60.00th=[ 8225], 00:43:18.871 | 70.00th=[ 8717], 80.00th=[ 9896], 90.00th=[11207], 95.00th=[14877], 00:43:18.871 | 99.00th=[20055], 99.50th=[32113], 99.90th=[41157], 99.95th=[41157], 00:43:18.871 | 99.99th=[41157] 00:43:18.871 bw ( KiB/s): min=29368, max=31832, per=30.32%, avg=30600.00, stdev=1742.31, samples=2 00:43:18.871 iops : min= 7342, max= 7958, avg=7650.00, stdev=435.58, samples=2 00:43:18.871 lat (usec) : 750=0.02%, 1000=0.01% 00:43:18.871 lat (msec) : 2=0.06%, 4=1.35%, 10=78.01%, 20=20.03%, 50=0.52% 00:43:18.871 cpu : usr=5.27%, sys=7.65%, ctx=486, majf=0, minf=1 00:43:18.871 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:43:18.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:18.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:18.871 issued rwts: total=7415,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:18.871 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:18.871 00:43:18.871 Run status group 0 (all jobs): 00:43:18.871 READ: bw=95.5MiB/s (100MB/s), 18.7MiB/s-28.8MiB/s (19.6MB/s-30.2MB/s), io=97.5MiB (102MB), run=1004-1021msec 00:43:18.871 WRITE: bw=98.5MiB/s (103MB/s), 19.9MiB/s-29.8MiB/s (20.9MB/s-31.2MB/s), io=101MiB (106MB), run=1004-1021msec 00:43:18.871 00:43:18.871 Disk stats (read/write): 00:43:18.871 nvme0n1: ios=5273/5632, merge=0/0, ticks=46395/50420, in_queue=96815, util=87.47% 00:43:18.871 nvme0n2: ios=4135/4557, merge=0/0, ticks=21829/20904, in_queue=42733, util=92.46% 00:43:18.871 nvme0n3: ios=4871/5120, merge=0/0, ticks=21829/18947, in_queue=40776, util=95.24% 00:43:18.871 nvme0n4: ios=6184/6231, merge=0/0, ticks=50473/47044, in_queue=97517, util=91.76% 00:43:18.871 16:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p 
nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:43:18.871 [global] 00:43:18.871 thread=1 00:43:18.871 invalidate=1 00:43:18.871 rw=randwrite 00:43:18.871 time_based=1 00:43:18.871 runtime=1 00:43:18.871 ioengine=libaio 00:43:18.871 direct=1 00:43:18.871 bs=4096 00:43:18.871 iodepth=128 00:43:18.871 norandommap=0 00:43:18.871 numjobs=1 00:43:18.871 00:43:18.871 verify_dump=1 00:43:18.871 verify_backlog=512 00:43:18.871 verify_state_save=0 00:43:18.871 do_verify=1 00:43:18.871 verify=crc32c-intel 00:43:18.871 [job0] 00:43:18.871 filename=/dev/nvme0n1 00:43:18.871 [job1] 00:43:18.871 filename=/dev/nvme0n2 00:43:18.871 [job2] 00:43:18.871 filename=/dev/nvme0n3 00:43:18.871 [job3] 00:43:18.871 filename=/dev/nvme0n4 00:43:18.871 Could not set queue depth (nvme0n1) 00:43:18.871 Could not set queue depth (nvme0n2) 00:43:18.871 Could not set queue depth (nvme0n3) 00:43:18.871 Could not set queue depth (nvme0n4) 00:43:19.132 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:19.132 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:19.132 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:19.132 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:19.132 fio-3.35 00:43:19.132 Starting 4 threads 00:43:20.519 00:43:20.519 job0: (groupid=0, jobs=1): err= 0: pid=3495814: Tue Oct 1 16:00:59 2024 00:43:20.519 read: IOPS=8159, BW=31.9MiB/s (33.4MB/s)(32.0MiB/1004msec) 00:43:20.519 slat (nsec): min=915, max=6956.1k, avg=61582.90, stdev=440622.53 00:43:20.519 clat (usec): min=2907, max=23276, avg=8020.04, stdev=1929.30 00:43:20.519 lat (usec): min=2910, max=23278, avg=8081.63, stdev=1956.94 00:43:20.519 clat percentiles (usec): 00:43:20.519 | 1.00th=[ 4146], 5.00th=[ 5407], 10.00th=[ 5866], 20.00th=[ 6587], 00:43:20.519 | 30.00th=[ 
6980], 40.00th=[ 7308], 50.00th=[ 7701], 60.00th=[ 8160], 00:43:20.519 | 70.00th=[ 8586], 80.00th=[ 9503], 90.00th=[10552], 95.00th=[11731], 00:43:20.519 | 99.00th=[13304], 99.50th=[13960], 99.90th=[17957], 99.95th=[17957], 00:43:20.519 | 99.99th=[23200] 00:43:20.519 write: IOPS=8185, BW=32.0MiB/s (33.5MB/s)(32.1MiB/1004msec); 0 zone resets 00:43:20.519 slat (nsec): min=1545, max=6805.1k, avg=54221.63, stdev=379892.15 00:43:20.519 clat (usec): min=572, max=40354, avg=7505.00, stdev=2786.52 00:43:20.519 lat (usec): min=580, max=40356, avg=7559.22, stdev=2792.83 00:43:20.519 clat percentiles (usec): 00:43:20.519 | 1.00th=[ 3654], 5.00th=[ 4752], 10.00th=[ 5145], 20.00th=[ 5800], 00:43:20.519 | 30.00th=[ 6325], 40.00th=[ 6980], 50.00th=[ 7373], 60.00th=[ 7701], 00:43:20.519 | 70.00th=[ 7898], 80.00th=[ 8225], 90.00th=[10028], 95.00th=[11076], 00:43:20.519 | 99.00th=[15139], 99.50th=[22152], 99.90th=[38011], 99.95th=[40109], 00:43:20.519 | 99.99th=[40109] 00:43:20.519 bw ( KiB/s): min=32720, max=32816, per=28.52%, avg=32768.00, stdev=67.88, samples=2 00:43:20.519 iops : min= 8180, max= 8204, avg=8192.00, stdev=16.97, samples=2 00:43:20.519 lat (usec) : 750=0.02% 00:43:20.519 lat (msec) : 2=0.05%, 4=1.17%, 10=86.70%, 20=11.78%, 50=0.29% 00:43:20.519 cpu : usr=5.98%, sys=7.88%, ctx=574, majf=0, minf=2 00:43:20.519 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:43:20.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.519 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:20.519 issued rwts: total=8192,8218,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:20.519 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:20.519 job1: (groupid=0, jobs=1): err= 0: pid=3495815: Tue Oct 1 16:00:59 2024 00:43:20.519 read: IOPS=7139, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1004msec) 00:43:20.519 slat (nsec): min=884, max=14467k, avg=66876.22, stdev=455821.57 00:43:20.519 clat (usec): min=725, 
max=30388, avg=9319.42, stdev=2990.86 00:43:20.519 lat (usec): min=731, max=30392, avg=9386.29, stdev=3014.99 00:43:20.519 clat percentiles (usec): 00:43:20.519 | 1.00th=[ 3326], 5.00th=[ 5800], 10.00th=[ 6783], 20.00th=[ 7439], 00:43:20.519 | 30.00th=[ 8225], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9241], 00:43:20.519 | 70.00th=[ 9634], 80.00th=[10290], 90.00th=[12125], 95.00th=[14484], 00:43:20.519 | 99.00th=[22938], 99.50th=[23462], 99.90th=[27919], 99.95th=[28181], 00:43:20.519 | 99.99th=[30278] 00:43:20.520 write: IOPS=7280, BW=28.4MiB/s (29.8MB/s)(28.6MiB/1004msec); 0 zone resets 00:43:20.520 slat (nsec): min=1496, max=14269k, avg=57327.79, stdev=412810.14 00:43:20.520 clat (usec): min=760, max=22868, avg=8279.29, stdev=2592.96 00:43:20.520 lat (usec): min=967, max=27789, avg=8336.62, stdev=2621.97 00:43:20.520 clat percentiles (usec): 00:43:20.520 | 1.00th=[ 1303], 5.00th=[ 4047], 10.00th=[ 5080], 20.00th=[ 6652], 00:43:20.520 | 30.00th=[ 7373], 40.00th=[ 8029], 50.00th=[ 8356], 60.00th=[ 8848], 00:43:20.520 | 70.00th=[ 9110], 80.00th=[ 9372], 90.00th=[10683], 95.00th=[13435], 00:43:20.520 | 99.00th=[16450], 99.50th=[16450], 99.90th=[17171], 99.95th=[17171], 00:43:20.520 | 99.99th=[22938] 00:43:20.520 bw ( KiB/s): min=27336, max=30120, per=25.00%, avg=28728.00, stdev=1968.59, samples=2 00:43:20.520 iops : min= 6834, max= 7530, avg=7182.00, stdev=492.15, samples=2 00:43:20.520 lat (usec) : 750=0.02%, 1000=0.03% 00:43:20.520 lat (msec) : 2=0.85%, 4=2.38%, 10=78.91%, 20=16.88%, 50=0.93% 00:43:20.520 cpu : usr=5.18%, sys=5.78%, ctx=592, majf=0, minf=1 00:43:20.520 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:43:20.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.520 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:20.520 issued rwts: total=7168,7310,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:20.520 latency : target=0, window=0, percentile=100.00%, depth=128 
00:43:20.520 job2: (groupid=0, jobs=1): err= 0: pid=3495816: Tue Oct 1 16:00:59 2024 00:43:20.520 read: IOPS=6893, BW=26.9MiB/s (28.2MB/s)(27.0MiB/1004msec) 00:43:20.520 slat (nsec): min=986, max=8592.7k, avg=73526.55, stdev=538640.40 00:43:20.520 clat (usec): min=1705, max=18935, avg=9516.59, stdev=2355.30 00:43:20.520 lat (usec): min=2929, max=18940, avg=9590.12, stdev=2380.05 00:43:20.520 clat percentiles (usec): 00:43:20.520 | 1.00th=[ 5669], 5.00th=[ 6063], 10.00th=[ 6783], 20.00th=[ 7373], 00:43:20.520 | 30.00th=[ 8356], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[ 9634], 00:43:20.520 | 70.00th=[10421], 80.00th=[11731], 90.00th=[12649], 95.00th=[13566], 00:43:20.520 | 99.00th=[16319], 99.50th=[16909], 99.90th=[17433], 99.95th=[17695], 00:43:20.520 | 99.99th=[19006] 00:43:20.520 write: IOPS=7139, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1004msec); 0 zone resets 00:43:20.520 slat (nsec): min=1614, max=9110.2k, avg=63700.66, stdev=444210.19 00:43:20.520 clat (usec): min=1127, max=17486, avg=8583.10, stdev=1987.27 00:43:20.520 lat (usec): min=1136, max=17488, avg=8646.80, stdev=1995.02 00:43:20.520 clat percentiles (usec): 00:43:20.520 | 1.00th=[ 3851], 5.00th=[ 5407], 10.00th=[ 5932], 20.00th=[ 6915], 00:43:20.520 | 30.00th=[ 7570], 40.00th=[ 8291], 50.00th=[ 8848], 60.00th=[ 9110], 00:43:20.520 | 70.00th=[ 9372], 80.00th=[10028], 90.00th=[11338], 95.00th=[11994], 00:43:20.520 | 99.00th=[13042], 99.50th=[13304], 99.90th=[16319], 99.95th=[16909], 00:43:20.520 | 99.99th=[17433] 00:43:20.520 bw ( KiB/s): min=28672, max=28672, per=24.95%, avg=28672.00, stdev= 0.00, samples=2 00:43:20.520 iops : min= 7168, max= 7168, avg=7168.00, stdev= 0.00, samples=2 00:43:20.520 lat (msec) : 2=0.02%, 4=0.67%, 10=72.35%, 20=26.96% 00:43:20.520 cpu : usr=5.58%, sys=6.18%, ctx=497, majf=0, minf=1 00:43:20.520 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:43:20.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.520 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:20.520 issued rwts: total=6921,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:20.520 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:20.520 job3: (groupid=0, jobs=1): err= 0: pid=3495817: Tue Oct 1 16:00:59 2024 00:43:20.520 read: IOPS=5772, BW=22.5MiB/s (23.6MB/s)(22.6MiB/1003msec) 00:43:20.520 slat (nsec): min=920, max=10931k, avg=80076.99, stdev=558441.73 00:43:20.520 clat (usec): min=1672, max=27071, avg=10375.43, stdev=2510.47 00:43:20.520 lat (usec): min=3678, max=27098, avg=10455.51, stdev=2553.77 00:43:20.520 clat percentiles (usec): 00:43:20.520 | 1.00th=[ 5604], 5.00th=[ 7635], 10.00th=[ 8160], 20.00th=[ 8979], 00:43:20.520 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9634], 60.00th=[10028], 00:43:20.520 | 70.00th=[10552], 80.00th=[11600], 90.00th=[13173], 95.00th=[16188], 00:43:20.520 | 99.00th=[19006], 99.50th=[19268], 99.90th=[22938], 99.95th=[26084], 00:43:20.520 | 99.99th=[27132] 00:43:20.520 write: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec); 0 zone resets 00:43:20.520 slat (nsec): min=1524, max=10170k, avg=83106.90, stdev=520063.65 00:43:20.520 clat (usec): min=1180, max=73793, avg=10938.90, stdev=8139.56 00:43:20.520 lat (usec): min=1189, max=73802, avg=11022.01, stdev=8188.44 00:43:20.520 clat percentiles (usec): 00:43:20.520 | 1.00th=[ 5211], 5.00th=[ 6259], 10.00th=[ 7242], 20.00th=[ 8455], 00:43:20.520 | 30.00th=[ 8979], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9634], 00:43:20.520 | 70.00th=[ 9896], 80.00th=[10552], 90.00th=[13173], 95.00th=[16909], 00:43:20.520 | 99.00th=[63701], 99.50th=[68682], 99.90th=[73925], 99.95th=[73925], 00:43:20.520 | 99.99th=[73925] 00:43:20.520 bw ( KiB/s): min=20528, max=28624, per=21.39%, avg=24576.00, stdev=5724.74, samples=2 00:43:20.520 iops : min= 5132, max= 7156, avg=6144.00, stdev=1431.18, samples=2 00:43:20.520 lat (msec) : 2=0.08%, 4=0.10%, 10=66.70%, 20=30.76%, 50=1.42% 00:43:20.520 lat (msec) : 100=0.93% 
00:43:20.520 cpu : usr=2.89%, sys=6.19%, ctx=532, majf=0, minf=1 00:43:20.520 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:43:20.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.520 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:20.520 issued rwts: total=5790,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:20.520 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:20.520 00:43:20.520 Run status group 0 (all jobs): 00:43:20.520 READ: bw=109MiB/s (115MB/s), 22.5MiB/s-31.9MiB/s (23.6MB/s-33.4MB/s), io=110MiB (115MB), run=1003-1004msec 00:43:20.520 WRITE: bw=112MiB/s (118MB/s), 23.9MiB/s-32.0MiB/s (25.1MB/s-33.5MB/s), io=113MiB (118MB), run=1003-1004msec 00:43:20.520 00:43:20.520 Disk stats (read/write): 00:43:20.520 nvme0n1: ios=6706/7024, merge=0/0, ticks=50023/49585, in_queue=99608, util=87.47% 00:43:20.520 nvme0n2: ios=6020/6144, merge=0/0, ticks=37098/34765, in_queue=71863, util=92.96% 00:43:20.520 nvme0n3: ios=5632/6135, merge=0/0, ticks=50885/50526, in_queue=101411, util=88.37% 00:43:20.520 nvme0n4: ios=4608/5017, merge=0/0, ticks=35164/42934, in_queue=78098, util=89.51% 00:43:20.520 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:43:20.520 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3496113 00:43:20.520 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:43:20.520 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:43:20.520 [global] 00:43:20.520 thread=1 00:43:20.520 invalidate=1 00:43:20.520 rw=read 00:43:20.520 time_based=1 00:43:20.520 runtime=10 00:43:20.520 ioengine=libaio 00:43:20.520 direct=1 00:43:20.520 bs=4096 00:43:20.520 iodepth=1 
00:43:20.520 norandommap=1 00:43:20.520 numjobs=1 00:43:20.520 00:43:20.520 [job0] 00:43:20.520 filename=/dev/nvme0n1 00:43:20.520 [job1] 00:43:20.520 filename=/dev/nvme0n2 00:43:20.520 [job2] 00:43:20.520 filename=/dev/nvme0n3 00:43:20.520 [job3] 00:43:20.520 filename=/dev/nvme0n4 00:43:20.520 Could not set queue depth (nvme0n1) 00:43:20.520 Could not set queue depth (nvme0n2) 00:43:20.520 Could not set queue depth (nvme0n3) 00:43:20.520 Could not set queue depth (nvme0n4) 00:43:20.781 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:20.781 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:20.781 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:20.781 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:20.781 fio-3.35 00:43:20.781 Starting 4 threads 00:43:23.376 16:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:43:23.731 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=6737920, buflen=4096 00:43:23.731 fio: pid=3496337, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:43:23.731 16:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:43:23.731 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=20520960, buflen=4096 00:43:23.731 fio: pid=3496336, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:43:23.731 16:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:23.731 
16:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:43:24.075 16:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:24.075 16:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:43:24.075 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=7413760, buflen=4096 00:43:24.075 fio: pid=3496332, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:43:24.075 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=14352384, buflen=4096 00:43:24.075 fio: pid=3496333, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:43:24.075 16:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:24.075 16:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:43:24.075 00:43:24.075 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3496332: Tue Oct 1 16:01:03 2024 00:43:24.075 read: IOPS=616, BW=2465KiB/s (2524kB/s)(7240KiB/2937msec) 00:43:24.075 slat (usec): min=6, max=221, avg=24.23, stdev= 8.56 00:43:24.075 clat (usec): min=355, max=42088, avg=1580.27, stdev=5510.74 00:43:24.075 lat (usec): min=381, max=42113, avg=1604.51, stdev=5511.44 00:43:24.076 clat percentiles (usec): 00:43:24.076 | 1.00th=[ 611], 5.00th=[ 676], 10.00th=[ 717], 20.00th=[ 766], 00:43:24.076 | 30.00th=[ 791], 40.00th=[ 807], 50.00th=[ 832], 60.00th=[ 848], 
00:43:24.076 | 70.00th=[ 865], 80.00th=[ 881], 90.00th=[ 906], 95.00th=[ 938], 00:43:24.076 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:43:24.076 | 99.99th=[42206] 00:43:24.076 bw ( KiB/s): min= 104, max= 4696, per=18.62%, avg=2880.00, stdev=1716.68, samples=5 00:43:24.076 iops : min= 26, max= 1174, avg=720.00, stdev=429.17, samples=5 00:43:24.076 lat (usec) : 500=0.11%, 750=15.18%, 1000=82.50% 00:43:24.076 lat (msec) : 2=0.28%, 50=1.88% 00:43:24.076 cpu : usr=0.58%, sys=1.77%, ctx=1813, majf=0, minf=1 00:43:24.076 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:24.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.076 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.076 issued rwts: total=1811,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:24.076 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:24.076 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3496333: Tue Oct 1 16:01:03 2024 00:43:24.076 read: IOPS=1132, BW=4527KiB/s (4636kB/s)(13.7MiB/3096msec) 00:43:24.076 slat (usec): min=6, max=16196, avg=48.31, stdev=592.57 00:43:24.076 clat (usec): min=304, max=41698, avg=821.56, stdev=978.08 00:43:24.076 lat (usec): min=311, max=41705, avg=869.88, stdev=1142.66 00:43:24.076 clat percentiles (usec): 00:43:24.076 | 1.00th=[ 486], 5.00th=[ 611], 10.00th=[ 668], 20.00th=[ 742], 00:43:24.076 | 30.00th=[ 775], 40.00th=[ 791], 50.00th=[ 816], 60.00th=[ 832], 00:43:24.076 | 70.00th=[ 848], 80.00th=[ 865], 90.00th=[ 898], 95.00th=[ 922], 00:43:24.076 | 99.00th=[ 971], 99.50th=[ 988], 99.90th=[ 2089], 99.95th=[41157], 00:43:24.076 | 99.99th=[41681] 00:43:24.076 bw ( KiB/s): min= 3992, max= 4944, per=29.55%, avg=4569.67, stdev=355.60, samples=6 00:43:24.076 iops : min= 998, max= 1236, avg=1142.33, stdev=88.96, samples=6 00:43:24.076 lat (usec) : 500=1.17%, 750=21.46%, 1000=77.03% 
00:43:24.076 lat (msec) : 2=0.20%, 4=0.03%, 10=0.03%, 50=0.06% 00:43:24.076 cpu : usr=0.94%, sys=3.33%, ctx=3511, majf=0, minf=2 00:43:24.076 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:24.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.076 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.076 issued rwts: total=3505,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:24.076 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:24.076 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3496336: Tue Oct 1 16:01:03 2024 00:43:24.076 read: IOPS=1844, BW=7376KiB/s (7553kB/s)(19.6MiB/2717msec) 00:43:24.076 slat (usec): min=6, max=16300, avg=29.17, stdev=305.85 00:43:24.076 clat (usec): min=155, max=1882, avg=503.35, stdev=126.67 00:43:24.076 lat (usec): min=164, max=17059, avg=532.53, stdev=336.06 00:43:24.076 clat percentiles (usec): 00:43:24.076 | 1.00th=[ 180], 5.00th=[ 277], 10.00th=[ 306], 20.00th=[ 388], 00:43:24.076 | 30.00th=[ 453], 40.00th=[ 506], 50.00th=[ 537], 60.00th=[ 553], 00:43:24.076 | 70.00th=[ 578], 80.00th=[ 603], 90.00th=[ 635], 95.00th=[ 668], 00:43:24.076 | 99.00th=[ 775], 99.50th=[ 816], 99.90th=[ 873], 99.95th=[ 906], 00:43:24.076 | 99.99th=[ 1876] 00:43:24.076 bw ( KiB/s): min= 7296, max= 7552, per=48.25%, avg=7462.40, stdev=98.21, samples=5 00:43:24.076 iops : min= 1824, max= 1888, avg=1865.60, stdev=24.55, samples=5 00:43:24.076 lat (usec) : 250=2.55%, 500=35.70%, 750=60.33%, 1000=1.38% 00:43:24.076 lat (msec) : 2=0.02% 00:43:24.076 cpu : usr=1.84%, sys=4.93%, ctx=5014, majf=0, minf=2 00:43:24.076 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:24.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.076 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.076 issued rwts: total=5011,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:43:24.076 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:24.076 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3496337: Tue Oct 1 16:01:03 2024 00:43:24.076 read: IOPS=650, BW=2601KiB/s (2663kB/s)(6580KiB/2530msec) 00:43:24.076 slat (nsec): min=7312, max=58622, avg=25612.38, stdev=2964.61 00:43:24.076 clat (usec): min=693, max=42390, avg=1492.07, stdev=3313.97 00:43:24.076 lat (usec): min=723, max=42411, avg=1517.68, stdev=3313.92 00:43:24.076 clat percentiles (usec): 00:43:24.076 | 1.00th=[ 898], 5.00th=[ 1004], 10.00th=[ 1074], 20.00th=[ 1139], 00:43:24.076 | 30.00th=[ 1172], 40.00th=[ 1205], 50.00th=[ 1221], 60.00th=[ 1254], 00:43:24.076 | 70.00th=[ 1287], 80.00th=[ 1319], 90.00th=[ 1352], 95.00th=[ 1401], 00:43:24.076 | 99.00th=[ 1532], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:43:24.076 | 99.99th=[42206] 00:43:24.076 bw ( KiB/s): min= 1136, max= 3184, per=16.98%, avg=2625.60, stdev=865.36, samples=5 00:43:24.076 iops : min= 284, max= 796, avg=656.40, stdev=216.34, samples=5 00:43:24.076 lat (usec) : 750=0.12%, 1000=4.25% 00:43:24.076 lat (msec) : 2=94.90%, 50=0.67% 00:43:24.076 cpu : usr=0.95%, sys=1.70%, ctx=1646, majf=0, minf=2 00:43:24.076 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:24.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.076 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.076 issued rwts: total=1646,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:24.076 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:24.076 00:43:24.076 Run status group 0 (all jobs): 00:43:24.076 READ: bw=15.1MiB/s (15.8MB/s), 2465KiB/s-7376KiB/s (2524kB/s-7553kB/s), io=46.8MiB (49.0MB), run=2530-3096msec 00:43:24.076 00:43:24.076 Disk stats (read/write): 00:43:24.076 nvme0n1: ios=1806/0, merge=0/0, ticks=2639/0, in_queue=2639, util=92.62% 
00:43:24.076 nvme0n2: ios=3449/0, merge=0/0, ticks=2744/0, in_queue=2744, util=91.59% 00:43:24.076 nvme0n3: ios=4739/0, merge=0/0, ticks=2276/0, in_queue=2276, util=95.46% 00:43:24.076 nvme0n4: ios=1646/0, merge=0/0, ticks=2407/0, in_queue=2407, util=96.11% 00:43:24.427 16:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:24.428 16:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:43:24.428 16:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:24.428 16:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:43:24.687 16:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:24.687 16:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:43:24.948 16:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:24.948 16:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:43:24.948 16:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:43:24.948 16:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 3496113 00:43:24.948 
16:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:43:24.948 16:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:43:25.208 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:43:25.208 16:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:43:25.208 16:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:43:25.208 16:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:43:25.208 16:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:43:25.208 16:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:43:25.208 16:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:43:25.208 16:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:43:25.208 16:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:43:25.209 16:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:43:25.209 nvmf hotplug test: fio failed as expected 00:43:25.209 16:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:25.209 16:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 
00:43:25.209 16:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:43:25.209 16:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:43:25.209 16:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:43:25.209 16:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:43:25.209 16:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:43:25.209 16:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:43:25.209 16:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:25.209 16:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:43:25.209 16:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:25.209 16:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:25.470 rmmod nvme_tcp 00:43:25.470 rmmod nvme_fabrics 00:43:25.470 rmmod nvme_keyring 00:43:25.470 16:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:25.470 16:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:43:25.470 16:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:43:25.470 16:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@513 -- # '[' -n 3492869 ']' 00:43:25.470 16:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@514 -- # killprocess 3492869 00:43:25.470 16:01:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 3492869 ']' 00:43:25.470 16:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 3492869 00:43:25.470 16:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:43:25.470 16:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:25.470 16:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3492869 00:43:25.470 16:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:43:25.470 16:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:43:25.470 16:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3492869' 00:43:25.470 killing process with pid 3492869 00:43:25.470 16:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 3492869 00:43:25.470 16:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 3492869 00:43:25.731 16:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:43:25.731 16:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:43:25.731 16:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:43:25.731 16:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:43:25.731 16:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@787 -- # grep -v 
SPDK_NVMF 00:43:25.731 16:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-save 00:43:25.731 16:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-restore 00:43:25.731 16:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:25.731 16:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:25.731 16:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:25.731 16:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:25.731 16:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:27.644 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:27.644 00:43:27.644 real 0m28.012s 00:43:27.644 user 2m16.283s 00:43:27.644 sys 0m12.391s 00:43:27.644 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:27.644 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:43:27.644 ************************************ 00:43:27.644 END TEST nvmf_fio_target 00:43:27.644 ************************************ 00:43:27.644 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:43:27.644 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:43:27.645 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode 
-- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:27.645 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:43:27.645 ************************************ 00:43:27.645 START TEST nvmf_bdevio 00:43:27.645 ************************************ 00:43:27.645 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:43:27.906 * Looking for test storage... 00:43:27.906 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:27.906 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:43:27.906 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:43:27.906 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:43:27.906 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:43:27.906 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:27.906 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:27.906 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:27.906 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:43:27.906 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:43:27.906 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:43:27.906 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 
-- # read -ra ver2 00:43:27.906 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:43:27.906 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:43:27.906 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:43:27.906 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:27.906 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:43:27.906 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:43:27.906 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:27.906 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:43:27.906 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:43:27.906 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:43:27.906 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:27.906 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:43:27.906 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:43:27.906 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:43:27.906 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:43:27.906 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:27.906 16:01:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:43:27.906 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:43:27.906 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:27.906 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:27.906 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:43:27.906 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:27.906 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:43:27.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:27.906 --rc genhtml_branch_coverage=1 00:43:27.906 --rc genhtml_function_coverage=1 00:43:27.906 --rc genhtml_legend=1 00:43:27.906 --rc geninfo_all_blocks=1 00:43:27.906 --rc geninfo_unexecuted_blocks=1 00:43:27.906 00:43:27.906 ' 00:43:27.906 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:43:27.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:27.906 --rc genhtml_branch_coverage=1 00:43:27.906 --rc genhtml_function_coverage=1 00:43:27.906 --rc genhtml_legend=1 00:43:27.906 --rc geninfo_all_blocks=1 00:43:27.906 --rc geninfo_unexecuted_blocks=1 00:43:27.906 00:43:27.906 ' 00:43:27.906 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:43:27.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:27.906 --rc genhtml_branch_coverage=1 00:43:27.906 --rc genhtml_function_coverage=1 00:43:27.906 --rc genhtml_legend=1 00:43:27.907 --rc 
geninfo_all_blocks=1 00:43:27.907 --rc geninfo_unexecuted_blocks=1 00:43:27.907 00:43:27.907 ' 00:43:27.907 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:43:27.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:27.907 --rc genhtml_branch_coverage=1 00:43:27.907 --rc genhtml_function_coverage=1 00:43:27.907 --rc genhtml_legend=1 00:43:27.907 --rc geninfo_all_blocks=1 00:43:27.907 --rc geninfo_unexecuted_blocks=1 00:43:27.907 00:43:27.907 ' 00:43:27.907 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:27.907 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:43:27.907 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:27.907 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:27.907 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:27.907 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:27.907 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:27.907 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:27.907 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:27.907 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:27.907 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:27.907 16:01:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:27.907 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:43:27.907 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:43:27.907 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:27.907 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:27.907 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:27.907 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:27.907 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:27.907 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:43:27.907 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:27.907 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:27.907 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:27.907 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:27.907 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:27.907 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:27.907 16:01:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:43:27.907 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:27.907 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:43:27.907 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:27.907 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:27.907 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:27.907 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:27.907 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:27.907 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:43:27.907 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:43:27.907 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:27.907 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 
-- # '[' 0 -eq 1 ']' 00:43:27.907 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:27.907 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:43:27.907 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:43:27.907 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:43:27.907 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:43:27.907 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:27.907 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@472 -- # prepare_net_devs 00:43:27.907 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@434 -- # local -g is_hw=no 00:43:27.907 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@436 -- # remove_spdk_ns 00:43:27.907 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:27.907 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:27.907 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:27.907 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:43:27.907 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:43:27.907 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:43:27.907 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@10 -- # set +x 00:43:36.043 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:36.043 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:43:36.043 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:36.043 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:36.043 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:36.043 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:36.043 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:36.043 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:43:36.043 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:36.043 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:43:36.043 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:43:36.043 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:43:36.043 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:43:36.043 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:43:36.043 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:43:36.043 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:36.043 16:01:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:36.043 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:36.043 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:36.043 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:36.043 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:36.043 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:36.043 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:36.043 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:36.043 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:36.043 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:36.043 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:43:36.043 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:43:36.043 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:43:36.043 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:43:36.043 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:43:36.043 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:43:36.043 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:43:36.043 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:43:36.043 Found 0000:31:00.0 (0x8086 - 0x159b) 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:43:36.044 Found 0000:31:00.1 (0x8086 - 0x159b) 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:36.044 16:01:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ up == up ]] 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:43:36.044 Found net devices under 0000:31:00.0: cvl_0_0 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ up == up ]] 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:43:36.044 Found net devices under 0000:31:00.1: cvl_0_1 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # is_hw=yes 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:36.044 16:01:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link 
set cvl_0_1 up 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:36.044 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:36.044 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.684 ms 00:43:36.044 00:43:36.044 --- 10.0.0.2 ping statistics --- 00:43:36.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:36.044 rtt min/avg/max/mdev = 0.684/0.684/0.684/0.000 ms 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:36.044 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:36.044 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:43:36.044 00:43:36.044 --- 10.0.0.1 ping statistics --- 00:43:36.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:36.044 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # return 0 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@505 -- # nvmfpid=3501434 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@506 -- # waitforlisten 3501434 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 3501434 ']' 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:36.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:36.044 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:36.044 [2024-10-01 16:01:14.750574] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:43:36.044 [2024-10-01 16:01:14.751694] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
00:43:36.044 [2024-10-01 16:01:14.751747] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:36.044 [2024-10-01 16:01:14.793393] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:43:36.044 [2024-10-01 16:01:14.842193] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:36.045 [2024-10-01 16:01:14.891468] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:36.045 [2024-10-01 16:01:14.891524] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:36.045 [2024-10-01 16:01:14.891532] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:36.045 [2024-10-01 16:01:14.891540] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:36.045 [2024-10-01 16:01:14.891546] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:36.045 [2024-10-01 16:01:14.891710] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:43:36.045 [2024-10-01 16:01:14.891869] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:43:36.045 [2024-10-01 16:01:14.892030] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:43:36.045 [2024-10-01 16:01:14.892175] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:43:36.045 [2024-10-01 16:01:14.973059] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:43:36.045 [2024-10-01 16:01:14.974253] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
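The nvmf_tcp_init steps traced above (flush addresses, create the cvl_0_0_ns_spdk namespace, move the target NIC into it, assign 10.0.0.1/10.0.0.2, open TCP port 4420) follow a standard pattern. The sketch below is a hedged reconstruction, not the actual nvmf/common.sh code; interface and namespace names are taken from this log, and the function is only defined here because running it needs root and real NICs.

```shell
#!/usr/bin/env bash
# Sketch (names assumed from the log above): the initiator keeps one port
# (cvl_0_1, 10.0.0.1) in the default namespace while the target port
# (cvl_0_0, 10.0.0.2) is moved into its own netns, so NVMe/TCP traffic
# between them crosses a real link instead of loopback.
setup_nvmf_tcp_netns() {
    local target_if=$1 initiator_if=$2 ns=${3:-cvl_0_0_ns_spdk}
    ip -4 addr flush "$target_if"
    ip -4 addr flush "$initiator_if"
    ip netns add "$ns"
    ip link set "$target_if" netns "$ns"
    ip addr add 10.0.0.1/24 dev "$initiator_if"
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    ip link set "$initiator_if" up
    ip netns exec "$ns" ip link set "$target_if" up
    ip netns exec "$ns" ip link set lo up
    # open the NVMe/TCP listener port through the host firewall
    iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
}
```

Calling `setup_nvmf_tcp_netns cvl_0_0 cvl_0_1` would reproduce the topology that the two pings in the log then verify (10.0.0.2 from the host side, 10.0.0.1 from inside the namespace).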
00:43:36.045 [2024-10-01 16:01:14.974418] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:43:36.045 [2024-10-01 16:01:14.974874] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:43:36.045 [2024-10-01 16:01:14.974929] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:43:36.305 16:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:36.305 16:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:43:36.305 16:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:43:36.305 16:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:36.305 16:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:36.305 16:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:36.305 16:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:36.305 16:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:36.305 16:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:36.305 [2024-10-01 16:01:15.585030] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:36.305 16:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:36.305 16:01:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:43:36.305 16:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:36.305 16:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:36.305 Malloc0 00:43:36.305 16:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:36.305 16:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:43:36.305 16:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:36.305 16:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:36.305 16:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:36.305 16:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:43:36.305 16:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:36.305 16:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:36.305 16:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:36.305 16:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:36.305 16:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:36.305 16:01:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:36.305 [2024-10-01 16:01:15.665341] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:36.305 16:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:36.305 16:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:43:36.305 16:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:43:36.305 16:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@556 -- # config=() 00:43:36.305 16:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@556 -- # local subsystem config 00:43:36.305 16:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:43:36.305 16:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:43:36.305 { 00:43:36.305 "params": { 00:43:36.305 "name": "Nvme$subsystem", 00:43:36.305 "trtype": "$TEST_TRANSPORT", 00:43:36.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:36.305 "adrfam": "ipv4", 00:43:36.305 "trsvcid": "$NVMF_PORT", 00:43:36.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:36.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:36.305 "hdgst": ${hdgst:-false}, 00:43:36.305 "ddgst": ${ddgst:-false} 00:43:36.305 }, 00:43:36.305 "method": "bdev_nvme_attach_controller" 00:43:36.305 } 00:43:36.305 EOF 00:43:36.305 )") 00:43:36.305 16:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@578 -- # cat 00:43:36.305 16:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # jq . 
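The rpc_cmd calls traced above (target/bdevio.sh lines 18 through 22) amount to a five-step target configuration. The sketch below restates them as plain rpc.py invocations; the script path is an assumption, the parameter values are the ones visible in this log, and it is defined as a function only, since it requires a running nvmf_tgt reachable over the default RPC socket.

```shell
# Assumed restatement of the bdevio.sh RPC sequence from the log above;
# needs a live nvmf_tgt process to succeed.
configure_bdevio_target() {
    local rpc="scripts/rpc.py"   # path assumed, run from an SPDK checkout
    # transport with 8192-byte in-capsule data, as in the log (-o -u 8192)
    $rpc nvmf_create_transport -t tcp -o -u 8192
    # 64 MiB malloc bdev with 512-byte blocks to serve as the namespace
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
}
```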
00:43:36.305 16:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@581 -- # IFS=, 00:43:36.305 16:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:43:36.305 "params": { 00:43:36.305 "name": "Nvme1", 00:43:36.305 "trtype": "tcp", 00:43:36.305 "traddr": "10.0.0.2", 00:43:36.305 "adrfam": "ipv4", 00:43:36.305 "trsvcid": "4420", 00:43:36.305 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:36.305 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:36.305 "hdgst": false, 00:43:36.305 "ddgst": false 00:43:36.305 }, 00:43:36.305 "method": "bdev_nvme_attach_controller" 00:43:36.305 }' 00:43:36.305 [2024-10-01 16:01:15.727805] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:43:36.305 [2024-10-01 16:01:15.727853] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3501542 ] 00:43:36.305 [2024-10-01 16:01:15.758405] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:43:36.567 [2024-10-01 16:01:15.809132] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:43:36.567 [2024-10-01 16:01:15.843232] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:43:36.567 [2024-10-01 16:01:15.843386] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:43:36.567 [2024-10-01 16:01:15.843387] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:43:36.567 I/O targets: 00:43:36.567 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:43:36.567 00:43:36.567 00:43:36.567 CUnit - A unit testing framework for C - Version 2.1-3 00:43:36.567 http://cunit.sourceforge.net/ 00:43:36.567 00:43:36.567 00:43:36.567 Suite: bdevio tests on: Nvme1n1 00:43:36.827 Test: blockdev write read block ...passed 00:43:36.827 Test: blockdev write zeroes read block ...passed 00:43:36.827 Test: blockdev write zeroes read no split ...passed 00:43:36.827 Test: blockdev write zeroes read split ...passed 00:43:36.827 Test: blockdev write zeroes read split partial ...passed 00:43:36.827 Test: blockdev reset ...[2024-10-01 16:01:16.170221] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.827 [2024-10-01 16:01:16.170296] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1074c50 (9): Bad file descriptor 00:43:36.827 [2024-10-01 16:01:16.222237] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:43:36.827 passed 00:43:36.827 Test: blockdev write read 8 blocks ...passed 00:43:36.827 Test: blockdev write read size > 128k ...passed 00:43:36.827 Test: blockdev write read invalid size ...passed 00:43:37.089 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:43:37.089 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:43:37.089 Test: blockdev write read max offset ...passed 00:43:37.089 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:43:37.089 Test: blockdev writev readv 8 blocks ...passed 00:43:37.089 Test: blockdev writev readv 30 x 1block ...passed 00:43:37.089 Test: blockdev writev readv block ...passed 00:43:37.089 Test: blockdev writev readv size > 128k ...passed 00:43:37.089 Test: blockdev writev readv size > 128k in two iovs ...passed 00:43:37.089 Test: blockdev comparev and writev ...[2024-10-01 16:01:16.449853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:37.089 [2024-10-01 16:01:16.449888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:37.089 [2024-10-01 16:01:16.449909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:37.089 [2024-10-01 16:01:16.449918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:43:37.089 [2024-10-01 16:01:16.450502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:37.089 [2024-10-01 16:01:16.450514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:43:37.089 [2024-10-01 16:01:16.450528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:37.089 [2024-10-01 16:01:16.450537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:43:37.089 [2024-10-01 16:01:16.451105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:37.089 [2024-10-01 16:01:16.451117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:43:37.089 [2024-10-01 16:01:16.451131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:37.089 [2024-10-01 16:01:16.451139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:43:37.089 [2024-10-01 16:01:16.451711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:37.089 [2024-10-01 16:01:16.451722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:43:37.089 [2024-10-01 16:01:16.451736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:37.089 [2024-10-01 16:01:16.451744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:43:37.089 passed 00:43:37.089 Test: blockdev nvme passthru rw ...passed 00:43:37.089 Test: blockdev nvme passthru vendor specific ...[2024-10-01 16:01:16.536795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:43:37.089 [2024-10-01 16:01:16.536812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:43:37.089 [2024-10-01 16:01:16.537199] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:43:37.089 [2024-10-01 16:01:16.537210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:43:37.089 [2024-10-01 16:01:16.537558] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:43:37.089 [2024-10-01 16:01:16.537569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:43:37.089 [2024-10-01 16:01:16.537946] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:43:37.089 [2024-10-01 16:01:16.537957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:43:37.089 passed 00:43:37.350 Test: blockdev nvme admin passthru ...passed 00:43:37.350 Test: blockdev copy ...passed 00:43:37.350 00:43:37.350 Run Summary: Type Total Ran Passed Failed Inactive 00:43:37.350 suites 1 1 n/a 0 0 00:43:37.350 tests 23 23 23 0 0 00:43:37.350 asserts 152 152 152 0 n/a 00:43:37.350 00:43:37.350 Elapsed time = 1.162 seconds 00:43:37.350 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:37.350 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:37.350 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:37.350 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:37.350 16:01:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:43:37.350 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:43:37.350 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # nvmfcleanup 00:43:37.350 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:43:37.350 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:37.350 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:43:37.350 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:37.350 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:37.350 rmmod nvme_tcp 00:43:37.350 rmmod nvme_fabrics 00:43:37.350 rmmod nvme_keyring 00:43:37.350 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:37.350 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:43:37.350 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:43:37.350 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@513 -- # '[' -n 3501434 ']' 00:43:37.350 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@514 -- # killprocess 3501434 00:43:37.350 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 3501434 ']' 00:43:37.350 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 3501434 00:43:37.350 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:43:37.350 
16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:37.610 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3501434 00:43:37.610 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:43:37.610 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:43:37.610 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3501434' 00:43:37.610 killing process with pid 3501434 00:43:37.610 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 3501434 00:43:37.610 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 3501434 00:43:37.872 16:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:43:37.872 16:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:43:37.872 16:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:43:37.872 16:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:43:37.872 16:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-save 00:43:37.872 16:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:43:37.872 16:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-restore 00:43:37.872 16:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:37.872 16:01:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:37.872 16:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:37.872 16:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:37.872 16:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:39.784 16:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:39.784 00:43:39.784 real 0m12.068s 00:43:39.784 user 0m9.082s 00:43:39.784 sys 0m6.440s 00:43:39.784 16:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:39.784 16:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:39.784 ************************************ 00:43:39.784 END TEST nvmf_bdevio 00:43:39.784 ************************************ 00:43:39.784 16:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:43:39.784 00:43:39.784 real 4m58.578s 00:43:39.784 user 10m16.054s 00:43:39.784 sys 2m8.419s 00:43:39.784 16:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:39.784 16:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:43:39.784 ************************************ 00:43:39.784 END TEST nvmf_target_core_interrupt_mode 00:43:39.784 ************************************ 00:43:40.046 16:01:19 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:43:40.046 16:01:19 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 
00:43:40.046 16:01:19 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:40.046 16:01:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:40.046 ************************************ 00:43:40.046 START TEST nvmf_interrupt 00:43:40.046 ************************************ 00:43:40.046 16:01:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:43:40.046 * Looking for test storage... 00:43:40.046 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:40.046 16:01:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:43:40.046 16:01:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # lcov --version 00:43:40.046 16:01:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:43:40.046 16:01:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:43:40.046 16:01:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:40.046 16:01:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:40.046 16:01:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:40.046 16:01:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:43:40.046 16:01:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:43:40.046 16:01:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:43:40.046 16:01:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:43:40.046 16:01:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:43:40.046 16:01:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:43:40.046 16:01:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:43:40.046 16:01:19 nvmf_tcp.nvmf_interrupt -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:40.046 16:01:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:43:40.046 16:01:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:43:40.046 16:01:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:40.046 16:01:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:43:40.046 16:01:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:43:40.046 16:01:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:43:40.046 16:01:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:40.046 16:01:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:43:40.046 16:01:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:43:40.046 16:01:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:43:40.046 16:01:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:43:40.046 16:01:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:40.046 16:01:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:43:40.046 16:01:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:43:40.046 16:01:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:40.046 16:01:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:40.046 16:01:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:43:40.046 16:01:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:40.046 16:01:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:43:40.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:40.046 --rc genhtml_branch_coverage=1 00:43:40.046 --rc 
genhtml_function_coverage=1 00:43:40.046 --rc genhtml_legend=1 00:43:40.046 --rc geninfo_all_blocks=1 00:43:40.046 --rc geninfo_unexecuted_blocks=1 00:43:40.046 00:43:40.046 ' 00:43:40.046 16:01:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:43:40.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:40.046 --rc genhtml_branch_coverage=1 00:43:40.046 --rc genhtml_function_coverage=1 00:43:40.046 --rc genhtml_legend=1 00:43:40.046 --rc geninfo_all_blocks=1 00:43:40.046 --rc geninfo_unexecuted_blocks=1 00:43:40.046 00:43:40.046 ' 00:43:40.046 16:01:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:43:40.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:40.046 --rc genhtml_branch_coverage=1 00:43:40.046 --rc genhtml_function_coverage=1 00:43:40.046 --rc genhtml_legend=1 00:43:40.046 --rc geninfo_all_blocks=1 00:43:40.046 --rc geninfo_unexecuted_blocks=1 00:43:40.046 00:43:40.046 ' 00:43:40.046 16:01:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:43:40.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:40.046 --rc genhtml_branch_coverage=1 00:43:40.046 --rc genhtml_function_coverage=1 00:43:40.046 --rc genhtml_legend=1 00:43:40.046 --rc geninfo_all_blocks=1 00:43:40.046 --rc geninfo_unexecuted_blocks=1 00:43:40.046 00:43:40.046 ' 00:43:40.046 16:01:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:40.046 16:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:43:40.047 16:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:40.047 16:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:40.047 16:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:40.047 16:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:43:40.047 16:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:40.047 16:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:40.047 16:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:40.047 16:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:40.047 16:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:40.047 16:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:40.047 16:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:43:40.047 16:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:43:40.047 16:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:40.047 16:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:40.047 16:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:40.047 16:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:40.047 16:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:40.047 16:01:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:43:40.047 16:01:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:40.308 16:01:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:40.308 16:01:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:40.308 16:01:19 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:40.308 16:01:19 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:40.308 16:01:19 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:40.308 16:01:19 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:43:40.308 16:01:19 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:40.308 16:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:43:40.308 16:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:40.308 16:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:40.308 16:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:40.308 16:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:40.308 16:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:40.308 16:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:43:40.308 16:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:43:40.308 16:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:40.308 16:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:40.308 16:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:40.308 16:01:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:43:40.308 16:01:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:43:40.308 16:01:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:43:40.308 16:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:43:40.308 16:01:19 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:40.308 16:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@472 -- # prepare_net_devs 00:43:40.308 16:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@434 -- # local -g is_hw=no 00:43:40.308 16:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@436 -- # remove_spdk_ns 00:43:40.308 16:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:40.308 16:01:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:43:40.308 16:01:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:40.308 16:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:43:40.308 16:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:43:40.308 16:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:43:40.308 16:01:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:48.443 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:48.443 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:43:48.443 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:48.443 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:48.443 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:48.443 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:48.443 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:48.443 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:43:48.443 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:48.443 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:43:48.443 16:01:26 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:43:48.443 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:43:48.443 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:43:48.443 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:43:48.443 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:43:48.443 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:48.443 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:48.443 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:48.443 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:48.443 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:48.443 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:48.443 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:48.443 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:48.443 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:48.443 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:48.443 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:48.443 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:43:48.443 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:43:48.443 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:43:48.443 16:01:26 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:43:48.443 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:43:48.443 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:43:48.443 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:43:48.443 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:43:48.443 Found 0000:31:00.0 (0x8086 - 0x159b) 00:43:48.443 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:43:48.443 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:43:48.443 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:48.443 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:48.443 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:43:48.443 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:43:48.443 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:43:48.443 Found 0000:31:00.1 (0x8086 - 0x159b) 00:43:48.443 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:43:48.443 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:43:48.443 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:48.443 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:48.443 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:43:48.443 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:43:48.443 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:43:48.443 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@396 
-- # [[ tcp == rdma ]] 00:43:48.443 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:43:48.443 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:48.443 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:43:48.443 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:48.443 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ up == up ]] 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:43:48.444 Found net devices under 0000:31:00.0: cvl_0_0 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ up == up ]] 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:43:48.444 Found net devices under 0000:31:00.1: cvl_0_1 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # 
net_devs+=("${pci_net_devs[@]}") 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # is_hw=yes 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip 
link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:48.444 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:48.444 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.621 ms 00:43:48.444 00:43:48.444 --- 10.0.0.2 ping statistics --- 00:43:48.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:48.444 rtt min/avg/max/mdev = 0.621/0.621/0.621/0.000 ms 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:48.444 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:48.444 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:43:48.444 00:43:48.444 --- 10.0.0.1 ping statistics --- 00:43:48.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:48.444 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # return 0 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@505 -- # nvmfpid=3506016 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@506 -- # waitforlisten 3506016 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- 
common/autotest_common.sh@831 -- # '[' -z 3506016 ']' 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:48.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:48.444 16:01:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:48.444 [2024-10-01 16:01:26.969880] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:43:48.444 [2024-10-01 16:01:26.970869] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:43:48.444 [2024-10-01 16:01:26.970912] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:48.444 [2024-10-01 16:01:27.007417] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:43:48.444 [2024-10-01 16:01:27.056111] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:43:48.444 [2024-10-01 16:01:27.087738] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:48.444 [2024-10-01 16:01:27.087773] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:43:48.444 [2024-10-01 16:01:27.087786] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:48.444 [2024-10-01 16:01:27.087794] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:48.444 [2024-10-01 16:01:27.087801] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:48.444 [2024-10-01 16:01:27.087964] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:43:48.444 [2024-10-01 16:01:27.087966] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:43:48.444 [2024-10-01 16:01:27.136043] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:43:48.444 [2024-10-01 16:01:27.136822] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:43:48.444 [2024-10-01 16:01:27.137095] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:43:48.444 16:01:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:48.444 16:01:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # return 0 00:43:48.444 16:01:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:43:48.444 16:01:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:48.444 16:01:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:48.444 16:01:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:48.444 16:01:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:43:48.444 16:01:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:43:48.444 16:01:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:43:48.444 16:01:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:43:48.444 5000+0 records in 00:43:48.444 5000+0 records out 00:43:48.444 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0191151 s, 536 MB/s 00:43:48.444 16:01:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:43:48.444 16:01:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:48.444 16:01:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:48.444 AIO0 00:43:48.444 16:01:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:48.444 16:01:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:43:48.444 16:01:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:48.444 16:01:27 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:48.444 [2024-10-01 16:01:27.872886] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:48.444 16:01:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:48.444 16:01:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:43:48.444 16:01:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:48.444 16:01:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:48.705 16:01:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:48.705 16:01:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:43:48.705 16:01:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:48.705 16:01:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:48.705 16:01:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:48.705 16:01:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:48.705 16:01:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:48.705 16:01:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:48.705 [2024-10-01 16:01:27.921318] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:48.705 16:01:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:48.705 16:01:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:43:48.705 16:01:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3506016 0 00:43:48.705 16:01:27 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3506016 0 idle 00:43:48.705 16:01:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3506016 00:43:48.705 16:01:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:43:48.705 16:01:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:43:48.705 16:01:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:43:48.705 16:01:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:48.705 16:01:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:43:48.705 16:01:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:43:48.705 16:01:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:48.705 16:01:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:48.705 16:01:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:48.705 16:01:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3506016 -w 256 00:43:48.705 16:01:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:43:48.705 16:01:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3506016 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.24 reactor_0' 00:43:48.705 16:01:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3506016 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.24 reactor_0 00:43:48.705 16:01:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:48.705 16:01:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:48.705 16:01:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:43:48.705 16:01:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:43:48.705 16:01:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:43:48.705 
16:01:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:43:48.705 16:01:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:43:48.705 16:01:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:48.705 16:01:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:43:48.705 16:01:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3506016 1 00:43:48.705 16:01:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3506016 1 idle 00:43:48.705 16:01:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3506016 00:43:48.705 16:01:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:43:48.705 16:01:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:43:48.705 16:01:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:43:48.705 16:01:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:48.705 16:01:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:43:48.705 16:01:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:43:48.705 16:01:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:48.705 16:01:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:48.705 16:01:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:48.705 16:01:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3506016 -w 256 00:43:48.705 16:01:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:43:48.967 16:01:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3506054 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1' 00:43:48.967 16:01:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3506054 root 20 0 128.2g 
44928 32256 S 0.0 0.0 0:00.00 reactor_1 00:43:48.967 16:01:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:48.967 16:01:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:48.967 16:01:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:43:48.967 16:01:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:43:48.967 16:01:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:43:48.967 16:01:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:43:48.967 16:01:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:43:48.967 16:01:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:48.967 16:01:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:43:48.967 16:01:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=3506253 00:43:48.967 16:01:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:43:48.967 16:01:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:43:48.967 16:01:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:43:48.967 16:01:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3506016 0 00:43:48.967 16:01:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3506016 0 busy 00:43:48.967 16:01:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3506016 00:43:48.967 16:01:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:43:48.967 16:01:28 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:43:48.967 16:01:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:43:48.967 16:01:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:48.967 16:01:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:43:48.967 16:01:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:48.967 16:01:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:48.967 16:01:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:48.967 16:01:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3506016 -w 256 00:43:48.967 16:01:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:43:49.229 16:01:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3506016 root 20 0 128.2g 44928 32256 R 18.8 0.0 0:00.27 reactor_0' 00:43:49.229 16:01:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3506016 root 20 0 128.2g 44928 32256 R 18.8 0.0 0:00.27 reactor_0 00:43:49.229 16:01:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:49.229 16:01:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:49.229 16:01:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=18.8 00:43:49.229 16:01:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=18 00:43:49.229 16:01:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:43:49.229 16:01:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:43:49.229 16:01:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:43:50.171 16:01:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:43:50.171 16:01:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:50.171 16:01:29 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3506016 -w 256 00:43:50.171 16:01:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:43:50.431 16:01:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3506016 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:02.63 reactor_0' 00:43:50.431 16:01:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3506016 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:02.63 reactor_0 00:43:50.431 16:01:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:50.431 16:01:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:50.431 16:01:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:43:50.432 16:01:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:43:50.432 16:01:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:43:50.432 16:01:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:43:50.432 16:01:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:43:50.432 16:01:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:50.432 16:01:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:43:50.432 16:01:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:43:50.432 16:01:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3506016 1 00:43:50.432 16:01:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3506016 1 busy 00:43:50.432 16:01:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3506016 00:43:50.432 16:01:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:43:50.432 16:01:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:43:50.432 16:01:29 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@13 -- # local busy_threshold=30 00:43:50.432 16:01:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:50.432 16:01:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:43:50.432 16:01:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:50.432 16:01:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:50.432 16:01:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:50.432 16:01:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3506016 -w 256 00:43:50.432 16:01:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:43:50.432 16:01:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3506054 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:01.39 reactor_1' 00:43:50.432 16:01:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3506054 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:01.39 reactor_1 00:43:50.432 16:01:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:50.432 16:01:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:50.432 16:01:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:43:50.432 16:01:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:43:50.432 16:01:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:43:50.432 16:01:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:43:50.432 16:01:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:43:50.432 16:01:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:50.432 16:01:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 3506253 00:44:00.426 Initializing NVMe Controllers 00:44:00.426 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:cnode1 00:44:00.426 Controller IO queue size 256, less than required. 00:44:00.426 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:44:00.426 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:44:00.426 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:44:00.426 Initialization complete. Launching workers. 00:44:00.426 ======================================================== 00:44:00.426 Latency(us) 00:44:00.426 Device Information : IOPS MiB/s Average min max 00:44:00.426 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 20319.50 79.37 12602.73 3918.41 30628.21 00:44:00.426 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 18237.20 71.24 14039.35 8099.60 31837.20 00:44:00.426 ======================================================== 00:44:00.426 Total : 38556.70 150.61 13282.25 3918.41 31837.20 00:44:00.426 00:44:00.426 16:01:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:44:00.426 16:01:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3506016 0 00:44:00.426 16:01:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3506016 0 idle 00:44:00.426 16:01:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3506016 00:44:00.426 16:01:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:44:00.426 16:01:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:44:00.426 16:01:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:44:00.426 16:01:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:44:00.426 16:01:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:44:00.426 16:01:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle 
!= \i\d\l\e ]] 00:44:00.426 16:01:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:44:00.426 16:01:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:44:00.426 16:01:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:44:00.426 16:01:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:44:00.426 16:01:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3506016 -w 256 00:44:00.426 16:01:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3506016 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.22 reactor_0' 00:44:00.426 16:01:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3506016 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.22 reactor_0 00:44:00.426 16:01:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:44:00.426 16:01:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:44:00.426 16:01:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:44:00.426 16:01:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:44:00.426 16:01:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:44:00.426 16:01:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:44:00.426 16:01:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:44:00.426 16:01:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:44:00.426 16:01:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:44:00.426 16:01:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3506016 1 00:44:00.426 16:01:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3506016 1 idle 00:44:00.426 16:01:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3506016 00:44:00.426 16:01:38 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:44:00.426 16:01:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:44:00.426 16:01:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:44:00.426 16:01:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:44:00.426 16:01:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:44:00.426 16:01:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:44:00.426 16:01:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:44:00.426 16:01:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:44:00.426 16:01:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:44:00.426 16:01:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3506016 -w 256 00:44:00.426 16:01:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:44:00.426 16:01:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3506054 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.01 reactor_1' 00:44:00.426 16:01:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3506054 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.01 reactor_1 00:44:00.426 16:01:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:44:00.426 16:01:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:44:00.426 16:01:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:44:00.426 16:01:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:44:00.426 16:01:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:44:00.426 16:01:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:44:00.426 16:01:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:44:00.427 16:01:38 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:44:00.427 16:01:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:44:00.427 16:01:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:44:00.427 16:01:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1198 -- # local i=0 00:44:00.427 16:01:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:44:00.427 16:01:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:44:00.427 16:01:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1205 -- # sleep 2 00:44:02.345 16:01:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:44:02.345 16:01:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:44:02.345 16:01:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:44:02.345 16:01:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:44:02.345 16:01:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:44:02.345 16:01:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # return 0 00:44:02.345 16:01:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:44:02.345 16:01:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3506016 0 00:44:02.345 16:01:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3506016 0 idle 00:44:02.345 16:01:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3506016 00:44:02.345 16:01:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local 
idx=0 00:44:02.345 16:01:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:44:02.345 16:01:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:44:02.345 16:01:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:44:02.345 16:01:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:44:02.345 16:01:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:44:02.345 16:01:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:44:02.345 16:01:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:44:02.345 16:01:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:44:02.345 16:01:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3506016 -w 256 00:44:02.345 16:01:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:44:02.605 16:01:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3506016 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.60 reactor_0' 00:44:02.605 16:01:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3506016 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.60 reactor_0 00:44:02.605 16:01:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:44:02.605 16:01:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:44:02.605 16:01:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:44:02.605 16:01:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:44:02.605 16:01:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:44:02.605 16:01:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:44:02.605 16:01:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:44:02.605 16:01:41 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@35 -- # return 0 00:44:02.605 16:01:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:44:02.605 16:01:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3506016 1 00:44:02.605 16:01:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3506016 1 idle 00:44:02.605 16:01:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3506016 00:44:02.605 16:01:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:44:02.605 16:01:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:44:02.605 16:01:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:44:02.606 16:01:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:44:02.606 16:01:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:44:02.606 16:01:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:44:02.606 16:01:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:44:02.606 16:01:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:44:02.606 16:01:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:44:02.606 16:01:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3506016 -w 256 00:44:02.606 16:01:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:44:02.606 16:01:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3506054 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.14 reactor_1' 00:44:02.606 16:01:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3506054 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.14 reactor_1 00:44:02.606 16:01:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:44:02.606 16:01:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:44:02.606 
16:01:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:44:02.606 16:01:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:44:02.606 16:01:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:44:02.606 16:01:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:44:02.606 16:01:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:44:02.606 16:01:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:44:02.606 16:01:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:44:02.867 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:44:02.867 16:01:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:44:02.867 16:01:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1219 -- # local i=0 00:44:02.867 16:01:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:44:02.867 16:01:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:44:02.867 16:01:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:44:02.867 16:01:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:44:02.867 16:01:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # return 0 00:44:02.867 16:01:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:44:02.867 16:01:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:44:02.867 16:01:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # nvmfcleanup 00:44:02.867 16:01:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:44:02.867 16:01:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:02.867 16:01:42 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:44:02.867 16:01:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:02.867 16:01:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:02.867 rmmod nvme_tcp 00:44:02.867 rmmod nvme_fabrics 00:44:02.867 rmmod nvme_keyring 00:44:02.867 16:01:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:02.867 16:01:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:44:02.867 16:01:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:44:02.867 16:01:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@513 -- # '[' -n 3506016 ']' 00:44:02.867 16:01:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@514 -- # killprocess 3506016 00:44:02.867 16:01:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@950 -- # '[' -z 3506016 ']' 00:44:02.867 16:01:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # kill -0 3506016 00:44:02.867 16:01:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # uname 00:44:02.867 16:01:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:44:02.867 16:01:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3506016 00:44:02.867 16:01:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:44:02.867 16:01:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:44:02.867 16:01:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3506016' 00:44:02.867 killing process with pid 3506016 00:44:02.867 16:01:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@969 -- # kill 3506016 00:44:02.867 16:01:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@974 -- # wait 3506016 00:44:03.127 16:01:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:44:03.127 16:01:42 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:44:03.127 16:01:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:44:03.127 16:01:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:44:03.127 16:01:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@787 -- # iptables-save 00:44:03.127 16:01:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:44:03.127 16:01:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@787 -- # iptables-restore 00:44:03.127 16:01:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:03.127 16:01:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:03.127 16:01:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:03.127 16:01:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:44:03.127 16:01:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:05.671 16:01:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:44:05.671 00:44:05.671 real 0m25.247s 00:44:05.671 user 0m40.156s 00:44:05.671 sys 0m9.733s 00:44:05.671 16:01:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:05.671 16:01:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:44:05.671 ************************************ 00:44:05.671 END TEST nvmf_interrupt 00:44:05.671 ************************************ 00:44:05.671 00:44:05.671 real 38m13.150s 00:44:05.671 user 91m41.441s 00:44:05.671 sys 11m30.317s 00:44:05.671 16:01:44 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:05.671 16:01:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:05.671 ************************************ 00:44:05.671 END TEST nvmf_tcp 00:44:05.671 ************************************ 00:44:05.671 16:01:44 -- 
spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:44:05.671 16:01:44 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:44:05.671 16:01:44 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:44:05.671 16:01:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:44:05.671 16:01:44 -- common/autotest_common.sh@10 -- # set +x 00:44:05.671 ************************************ 00:44:05.671 START TEST spdkcli_nvmf_tcp 00:44:05.671 ************************************ 00:44:05.671 16:01:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:44:05.671 * Looking for test storage... 00:44:05.671 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:44:05.671 16:01:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:44:05.671 16:01:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:44:05.671 16:01:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:44:05.671 16:01:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:44:05.671 16:01:44 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:05.671 16:01:44 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:05.671 16:01:44 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:05.671 16:01:44 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:44:05.671 16:01:44 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:44:05.671 16:01:44 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:44:05.671 16:01:44 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:44:05.671 16:01:44 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:44:05.671 16:01:44 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:44:05.671 
16:01:44 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:44:05.671 16:01:44 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:05.671 16:01:44 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:44:05.671 16:01:44 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:44:05.671 16:01:44 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:05.671 16:01:44 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:44:05.671 16:01:44 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:44:05.671 16:01:44 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:44:05.671 16:01:44 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:05.671 16:01:44 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:44:05.671 16:01:44 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:44:05.671 16:01:44 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:44:05.671 16:01:44 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:44:05.671 16:01:44 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:05.671 16:01:44 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:44:05.671 16:01:44 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:44:05.671 16:01:44 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:05.671 16:01:44 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:05.671 16:01:44 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:44:05.671 16:01:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:05.671 16:01:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:44:05.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:05.671 --rc genhtml_branch_coverage=1 00:44:05.671 --rc genhtml_function_coverage=1 00:44:05.671 
--rc genhtml_legend=1 00:44:05.671 --rc geninfo_all_blocks=1 00:44:05.671 --rc geninfo_unexecuted_blocks=1 00:44:05.671 00:44:05.671 ' 00:44:05.671 16:01:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:44:05.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:05.671 --rc genhtml_branch_coverage=1 00:44:05.671 --rc genhtml_function_coverage=1 00:44:05.671 --rc genhtml_legend=1 00:44:05.671 --rc geninfo_all_blocks=1 00:44:05.671 --rc geninfo_unexecuted_blocks=1 00:44:05.672 00:44:05.672 ' 00:44:05.672 16:01:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:44:05.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:05.672 --rc genhtml_branch_coverage=1 00:44:05.672 --rc genhtml_function_coverage=1 00:44:05.672 --rc genhtml_legend=1 00:44:05.672 --rc geninfo_all_blocks=1 00:44:05.672 --rc geninfo_unexecuted_blocks=1 00:44:05.672 00:44:05.672 ' 00:44:05.672 16:01:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:44:05.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:05.672 --rc genhtml_branch_coverage=1 00:44:05.672 --rc genhtml_function_coverage=1 00:44:05.672 --rc genhtml_legend=1 00:44:05.672 --rc geninfo_all_blocks=1 00:44:05.672 --rc geninfo_unexecuted_blocks=1 00:44:05.672 00:44:05.672 ' 00:44:05.672 16:01:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:44:05.672 16:01:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:44:05.672 16:01:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:44:05.672 16:01:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:05.672 16:01:44 spdkcli_nvmf_tcp -- 
nvmf/common.sh@7 -- # uname -s 00:44:05.672 16:01:44 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:05.672 16:01:44 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:05.672 16:01:44 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:05.672 16:01:44 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:05.672 16:01:44 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:05.672 16:01:44 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:05.672 16:01:44 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:05.672 16:01:44 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:05.672 16:01:44 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:05.672 16:01:44 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:05.672 16:01:44 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:44:05.672 16:01:44 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:44:05.672 16:01:44 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:05.672 16:01:44 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:05.672 16:01:44 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:05.672 16:01:44 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:05.672 16:01:44 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:05.672 16:01:44 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:44:05.672 16:01:44 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:05.672 16:01:44 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:05.672 16:01:44 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:05.672 16:01:44 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:05.672 16:01:44 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:05.672 16:01:44 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:05.672 16:01:44 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:44:05.672 16:01:44 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:05.672 16:01:44 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:44:05.672 16:01:44 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:05.672 16:01:44 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:05.672 16:01:44 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:05.672 16:01:44 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:05.672 16:01:44 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:05.672 16:01:44 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:44:05.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:05.672 16:01:44 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:05.672 16:01:44 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:05.672 16:01:44 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:05.672 16:01:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:44:05.672 16:01:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:44:05.672 16:01:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:44:05.672 16:01:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:44:05.672 16:01:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:44:05.672 16:01:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:05.672 16:01:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 
00:44:05.672 16:01:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3509434 00:44:05.672 16:01:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3509434 00:44:05.672 16:01:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 3509434 ']' 00:44:05.672 16:01:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:05.672 16:01:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:44:05.672 16:01:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:44:05.672 16:01:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:05.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:05.672 16:01:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:44:05.672 16:01:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:05.672 [2024-10-01 16:01:44.935359] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:44:05.672 [2024-10-01 16:01:44.935429] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3509434 ] 00:44:05.672 [2024-10-01 16:01:44.970422] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:44:05.672 [2024-10-01 16:01:45.020264] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:44:05.672 [2024-10-01 16:01:45.067767] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:44:05.672 [2024-10-01 16:01:45.067771] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:44:06.615 16:01:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:44:06.615 16:01:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:44:06.615 16:01:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:44:06.615 16:01:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:06.615 16:01:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:06.615 16:01:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:44:06.615 16:01:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:44:06.615 16:01:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:44:06.615 16:01:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:44:06.615 16:01:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:06.615 16:01:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:44:06.615 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:44:06.615 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:44:06.615 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:44:06.615 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:44:06.615 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:44:06.615 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:44:06.615 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 
N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:44:06.615 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:44:06.616 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:44:06.616 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:44:06.616 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:44:06.616 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:44:06.616 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:44:06.616 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:44:06.616 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:44:06.616 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:44:06.616 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:44:06.616 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:44:06.616 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:44:06.616 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:44:06.616 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:44:06.616 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:44:06.616 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:44:06.616 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:44:06.616 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:44:06.616 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:44:06.616 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:44:06.616 ' 00:44:09.155 [2024-10-01 16:01:48.530954] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:10.539 [2024-10-01 16:01:49.891128] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:44:13.084 [2024-10-01 16:01:52.414250] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:44:15.625 [2024-10-01 16:01:54.640563] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:44:17.007 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:44:17.007 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:44:17.007 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:44:17.007 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:44:17.007 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:44:17.007 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:44:17.007 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:44:17.007 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW 
max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:44:17.007 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:44:17.007 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:44:17.007 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:44:17.007 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:44:17.007 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:44:17.007 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:44:17.007 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:44:17.007 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:44:17.007 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:44:17.007 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:44:17.007 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:44:17.007 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:44:17.007 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:44:17.007 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:44:17.007 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:44:17.007 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:44:17.007 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:44:17.007 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:44:17.007 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:44:17.007 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:44:17.007 16:01:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:44:17.007 16:01:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:17.007 16:01:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:17.007 16:01:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:44:17.007 16:01:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:44:17.007 16:01:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:17.267 16:01:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:44:17.267 16:01:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:44:17.528 16:01:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:44:17.528 16:01:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:44:17.528 16:01:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:44:17.528 16:01:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:17.528 16:01:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:17.528 16:01:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:44:17.528 16:01:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:44:17.528 16:01:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:17.528 16:01:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:44:17.528 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:44:17.528 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:44:17.528 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:44:17.528 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:44:17.528 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:44:17.528 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:44:17.528 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:44:17.528 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:44:17.528 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:44:17.528 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:44:17.528 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:44:17.528 '\''/bdevs/malloc delete Malloc2'\'' 
'\''Malloc2'\'' 00:44:17.528 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:44:17.528 ' 00:44:24.108 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:44:24.108 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:44:24.108 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:44:24.108 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:44:24.108 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:44:24.108 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:44:24.108 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:44:24.108 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:44:24.108 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:44:24.108 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:44:24.108 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:44:24.108 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:44:24.108 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:44:24.108 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:44:24.108 16:02:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:44:24.108 16:02:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:24.108 16:02:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:24.108 16:02:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 
3509434 00:44:24.108 16:02:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 3509434 ']' 00:44:24.108 16:02:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 3509434 00:44:24.108 16:02:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:44:24.108 16:02:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:44:24.108 16:02:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3509434 00:44:24.108 16:02:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:44:24.108 16:02:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:44:24.108 16:02:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3509434' 00:44:24.108 killing process with pid 3509434 00:44:24.108 16:02:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 3509434 00:44:24.108 16:02:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 3509434 00:44:24.108 16:02:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:44:24.108 16:02:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:44:24.108 16:02:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3509434 ']' 00:44:24.108 16:02:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3509434 00:44:24.108 16:02:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 3509434 ']' 00:44:24.108 16:02:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 3509434 00:44:24.108 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3509434) - No such process 00:44:24.108 16:02:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 3509434 is not found' 00:44:24.108 Process with pid 3509434 is not found 00:44:24.108 16:02:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:44:24.108 16:02:02 spdkcli_nvmf_tcp -- 
spdkcli/common.sh@19 -- # '[' -n '' ']' 00:44:24.108 16:02:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:44:24.108 00:44:24.108 real 0m18.185s 00:44:24.108 user 0m40.416s 00:44:24.108 sys 0m0.884s 00:44:24.108 16:02:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:24.108 16:02:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:24.108 ************************************ 00:44:24.109 END TEST spdkcli_nvmf_tcp 00:44:24.109 ************************************ 00:44:24.109 16:02:02 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:44:24.109 16:02:02 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:44:24.109 16:02:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:44:24.109 16:02:02 -- common/autotest_common.sh@10 -- # set +x 00:44:24.109 ************************************ 00:44:24.109 START TEST nvmf_identify_passthru 00:44:24.109 ************************************ 00:44:24.109 16:02:02 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:44:24.109 * Looking for test storage... 
00:44:24.109 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:44:24.109 16:02:03 nvmf_identify_passthru -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:44:24.109 16:02:03 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # lcov --version 00:44:24.109 16:02:03 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:44:24.109 16:02:03 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:44:24.109 16:02:03 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:24.109 16:02:03 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:24.109 16:02:03 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:24.109 16:02:03 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:44:24.109 16:02:03 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:44:24.109 16:02:03 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:44:24.109 16:02:03 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:44:24.109 16:02:03 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:44:24.109 16:02:03 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:44:24.109 16:02:03 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:44:24.109 16:02:03 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:24.109 16:02:03 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:44:24.109 16:02:03 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:44:24.109 16:02:03 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:24.109 16:02:03 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:24.109 16:02:03 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:44:24.109 16:02:03 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:44:24.109 16:02:03 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:24.109 16:02:03 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:44:24.109 16:02:03 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:44:24.109 16:02:03 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:44:24.109 16:02:03 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:44:24.109 16:02:03 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:24.109 16:02:03 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:44:24.109 16:02:03 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:44:24.109 16:02:03 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:24.109 16:02:03 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:24.109 16:02:03 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:44:24.109 16:02:03 nvmf_identify_passthru -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:24.109 16:02:03 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:44:24.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:24.109 --rc genhtml_branch_coverage=1 00:44:24.109 --rc genhtml_function_coverage=1 00:44:24.109 --rc genhtml_legend=1 00:44:24.109 --rc geninfo_all_blocks=1 00:44:24.109 --rc geninfo_unexecuted_blocks=1 00:44:24.109 00:44:24.109 ' 00:44:24.109 16:02:03 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:44:24.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:24.109 --rc genhtml_branch_coverage=1 00:44:24.109 --rc genhtml_function_coverage=1 
00:44:24.109 --rc genhtml_legend=1 00:44:24.109 --rc geninfo_all_blocks=1 00:44:24.109 --rc geninfo_unexecuted_blocks=1 00:44:24.109 00:44:24.109 ' 00:44:24.109 16:02:03 nvmf_identify_passthru -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:44:24.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:24.109 --rc genhtml_branch_coverage=1 00:44:24.109 --rc genhtml_function_coverage=1 00:44:24.109 --rc genhtml_legend=1 00:44:24.109 --rc geninfo_all_blocks=1 00:44:24.109 --rc geninfo_unexecuted_blocks=1 00:44:24.109 00:44:24.109 ' 00:44:24.109 16:02:03 nvmf_identify_passthru -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:44:24.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:24.109 --rc genhtml_branch_coverage=1 00:44:24.109 --rc genhtml_function_coverage=1 00:44:24.109 --rc genhtml_legend=1 00:44:24.109 --rc geninfo_all_blocks=1 00:44:24.109 --rc geninfo_unexecuted_blocks=1 00:44:24.109 00:44:24.109 ' 00:44:24.109 16:02:03 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:24.109 16:02:03 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:44:24.109 16:02:03 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:24.109 16:02:03 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:24.109 16:02:03 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:24.109 16:02:03 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:24.109 16:02:03 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:24.109 16:02:03 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:24.109 16:02:03 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:24.109 16:02:03 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:24.109 16:02:03 nvmf_identify_passthru -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:24.109 16:02:03 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:24.109 16:02:03 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:44:24.109 16:02:03 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:44:24.109 16:02:03 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:24.109 16:02:03 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:24.109 16:02:03 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:24.109 16:02:03 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:24.109 16:02:03 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:24.109 16:02:03 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:44:24.109 16:02:03 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:24.109 16:02:03 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:24.109 16:02:03 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:24.109 16:02:03 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:24.109 16:02:03 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:24.109 16:02:03 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:24.109 16:02:03 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:44:24.109 16:02:03 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:24.109 16:02:03 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:44:24.109 16:02:03 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:24.109 16:02:03 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:24.109 16:02:03 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:24.109 16:02:03 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:24.109 16:02:03 nvmf_identify_passthru -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:44:24.109 16:02:03 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:44:24.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:24.109 16:02:03 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:24.109 16:02:03 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:24.109 16:02:03 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:24.109 16:02:03 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:24.109 16:02:03 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:44:24.109 16:02:03 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:24.109 16:02:03 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:24.109 16:02:03 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:24.110 16:02:03 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:24.110 16:02:03 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:24.110 16:02:03 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:24.110 16:02:03 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:44:24.110 16:02:03 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:24.110 16:02:03 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:44:24.110 16:02:03 nvmf_identify_passthru -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:44:24.110 16:02:03 nvmf_identify_passthru -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:24.110 16:02:03 nvmf_identify_passthru -- nvmf/common.sh@472 -- 
# prepare_net_devs 00:44:24.110 16:02:03 nvmf_identify_passthru -- nvmf/common.sh@434 -- # local -g is_hw=no 00:44:24.110 16:02:03 nvmf_identify_passthru -- nvmf/common.sh@436 -- # remove_spdk_ns 00:44:24.110 16:02:03 nvmf_identify_passthru -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:24.110 16:02:03 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:24.110 16:02:03 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:24.110 16:02:03 nvmf_identify_passthru -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:44:24.110 16:02:03 nvmf_identify_passthru -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:44:24.110 16:02:03 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:44:24.110 16:02:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:32.247 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:32.247 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:44:32.247 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:44:32.247 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:44:32.247 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:44:32.247 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:44:32.247 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:44:32.247 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:44:32.247 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:44:32.247 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:44:32.247 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:44:32.247 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:44:32.247 16:02:10 
nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:44:32.247 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:44:32.247 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:44:32.247 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:32.247 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:32.247 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:32.247 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:32.247 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:32.247 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:32.247 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:32.247 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:32.247 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:44:32.248 16:02:10 
nvmf_identify_passthru -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:44:32.248 Found 0000:31:00.0 (0x8086 - 0x159b) 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:44:32.248 Found 0000:31:00.1 (0x8086 - 0x159b) 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@407 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ up == up ]] 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:44:32.248 Found net devices under 0000:31:00.0: cvl_0_0 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ up == up ]] 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:44:32.248 Found net devices under 0000:31:00.1: cvl_0_1 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@438 -- # is_hw=yes 00:44:32.248 
16:02:10 nvmf_identify_passthru -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:44:32.248 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:32.248 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.510 ms 00:44:32.248 00:44:32.248 --- 10.0.0.2 ping statistics --- 00:44:32.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:32.248 rtt min/avg/max/mdev = 0.510/0.510/0.510/0.000 ms 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:32.248 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:44:32.248 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.437 ms 00:44:32.248 00:44:32.248 --- 10.0.0.1 ping statistics --- 00:44:32.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:32.248 rtt min/avg/max/mdev = 0.437/0.437/0.437/0.000 ms 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@446 -- # return 0 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:44:32.248 16:02:10 nvmf_identify_passthru -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:44:32.248 16:02:10 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:44:32.248 16:02:10 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:44:32.248 16:02:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:32.248 16:02:10 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:44:32.248 16:02:10 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:44:32.248 16:02:10 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:44:32.248 16:02:10 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:44:32.248 16:02:10 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:44:32.248 16:02:10 nvmf_identify_passthru -- 
common/autotest_common.sh@1496 -- # bdfs=() 00:44:32.248 16:02:10 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:44:32.248 16:02:10 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:44:32.248 16:02:10 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:44:32.248 16:02:10 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:44:32.248 16:02:10 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:44:32.248 16:02:10 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:44:32.248 16:02:10 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:65:00.0 00:44:32.248 16:02:10 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:44:32.248 16:02:10 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:44:32.248 16:02:10 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:44:32.248 16:02:10 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:44:32.248 16:02:10 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:44:32.248 16:02:11 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605494 00:44:32.248 16:02:11 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:44:32.248 16:02:11 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:44:32.249 16:02:11 nvmf_identify_passthru -- 
target/identify_passthru.sh@24 -- # awk '{print $3}' 00:44:32.509 16:02:11 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:44:32.509 16:02:11 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:44:32.509 16:02:11 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:32.509 16:02:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:32.509 16:02:11 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:44:32.509 16:02:11 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:44:32.509 16:02:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:32.509 16:02:11 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3516928 00:44:32.509 16:02:11 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:44:32.509 16:02:11 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:44:32.509 16:02:11 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3516928 00:44:32.509 16:02:11 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 3516928 ']' 00:44:32.509 16:02:11 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:32.509 16:02:11 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:44:32.509 16:02:11 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:32.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
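The `nvmf_tcp_init` sequence earlier in this run (nvmf/common.sh@250-291) isolates the target NIC in its own network namespace so target and initiator can talk over real TCP on one host. A dry-run sketch reconstructed from the log, with names and addresses taken verbatim from it; `run` prints each command by default because the real steps need root and the `cvl_0_*` interfaces:

```shell
# Dry-run sketch of the namespace topology nvmf_tcp_init builds.
# Set DRY_RUN=0 to actually execute (requires root and the named NICs).
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "$*"; else "$@"; fi; }

run ip netns add cvl_0_0_ns_spdk                         # target namespace
run ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move target NIC in
run ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

The two `ping -c 1` checks in the log then verify reachability in both directions before the target application is started inside the namespace via `ip netns exec cvl_0_0_ns_spdk`.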
00:44:32.509 16:02:11 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:44:32.509 16:02:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:32.509 [2024-10-01 16:02:11.877439] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:44:32.509 [2024-10-01 16:02:11.877505] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:32.509 [2024-10-01 16:02:11.918509] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:44:32.770 [2024-10-01 16:02:11.966492] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:44:32.770 [2024-10-01 16:02:12.014753] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:32.770 [2024-10-01 16:02:12.014804] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:32.770 [2024-10-01 16:02:12.014816] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:32.770 [2024-10-01 16:02:12.014827] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:32.770 [2024-10-01 16:02:12.014834] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
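The `rpc_cmd nvmf_set_config --passthru-identify-ctrlr` call that follows is a JSON-RPC 2.0 exchange with the target over its UNIX socket. A minimal sketch of the request payload (reproduced from the INFO lines in this log; the delivery mechanism shown in the comment, such as `nc -U` against the default `/var/tmp/spdk.sock`, is an assumption of this sketch and not how `rpc_cmd` itself is implemented):

```shell
# Hedged sketch: the nvmf_set_config request that enables passthrough of
# admin Identify-Controller commands to the underlying NVMe device. In a
# live run it would be sent to the app's RPC socket, e.g.:
#   printf '%s\n' "$req" | nc -U /var/tmp/spdk.sock
req='{"jsonrpc": "2.0", "method": "nvmf_set_config", "id": 1,
      "params": {"admin_cmd_passthru": {"identify_ctrlr": true}}}'
printf '%s\n' "$req"
```

The matching `"result": true` response in the log confirms the setting was applied before `framework_start_init` brings the subsystems up.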
00:44:32.770 [2024-10-01 16:02:12.014998] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:44:32.770 [2024-10-01 16:02:12.015207] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:44:32.770 [2024-10-01 16:02:12.015208] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:44:32.770 [2024-10-01 16:02:12.015049] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:44:33.342 16:02:12 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:44:33.342 16:02:12 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:44:33.342 16:02:12 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:44:33.342 16:02:12 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:33.342 16:02:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:33.342 INFO: Log level set to 20 00:44:33.342 INFO: Requests: 00:44:33.342 { 00:44:33.342 "jsonrpc": "2.0", 00:44:33.342 "method": "nvmf_set_config", 00:44:33.342 "id": 1, 00:44:33.342 "params": { 00:44:33.342 "admin_cmd_passthru": { 00:44:33.342 "identify_ctrlr": true 00:44:33.342 } 00:44:33.342 } 00:44:33.342 } 00:44:33.342 00:44:33.342 INFO: response: 00:44:33.342 { 00:44:33.342 "jsonrpc": "2.0", 00:44:33.342 "id": 1, 00:44:33.342 "result": true 00:44:33.342 } 00:44:33.342 00:44:33.342 16:02:12 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:33.342 16:02:12 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:44:33.342 16:02:12 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:33.342 16:02:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:33.342 INFO: Setting log level to 20 00:44:33.342 INFO: Setting log level to 20 00:44:33.342 INFO: Log level set to 20 00:44:33.342 INFO: Log level set to 20 00:44:33.342 
INFO: Requests: 00:44:33.342 { 00:44:33.342 "jsonrpc": "2.0", 00:44:33.342 "method": "framework_start_init", 00:44:33.342 "id": 1 00:44:33.342 } 00:44:33.342 00:44:33.342 INFO: Requests: 00:44:33.342 { 00:44:33.342 "jsonrpc": "2.0", 00:44:33.342 "method": "framework_start_init", 00:44:33.342 "id": 1 00:44:33.342 } 00:44:33.342 00:44:33.603 [2024-10-01 16:02:12.806842] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:44:33.603 INFO: response: 00:44:33.603 { 00:44:33.603 "jsonrpc": "2.0", 00:44:33.603 "id": 1, 00:44:33.603 "result": true 00:44:33.603 } 00:44:33.603 00:44:33.603 INFO: response: 00:44:33.603 { 00:44:33.603 "jsonrpc": "2.0", 00:44:33.603 "id": 1, 00:44:33.603 "result": true 00:44:33.603 } 00:44:33.603 00:44:33.603 16:02:12 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:33.603 16:02:12 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:44:33.603 16:02:12 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:33.603 16:02:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:33.603 INFO: Setting log level to 40 00:44:33.603 INFO: Setting log level to 40 00:44:33.603 INFO: Setting log level to 40 00:44:33.603 [2024-10-01 16:02:12.820436] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:33.603 16:02:12 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:33.603 16:02:12 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:44:33.603 16:02:12 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:33.603 16:02:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:33.603 16:02:12 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:44:33.603 16:02:12 
nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:33.603 16:02:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:33.864 Nvme0n1 00:44:33.864 16:02:13 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:33.864 16:02:13 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:44:33.864 16:02:13 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:33.864 16:02:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:33.864 16:02:13 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:33.864 16:02:13 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:44:33.864 16:02:13 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:33.864 16:02:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:33.864 16:02:13 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:33.864 16:02:13 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:33.864 16:02:13 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:33.864 16:02:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:33.864 [2024-10-01 16:02:13.209642] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:33.864 16:02:13 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:33.864 16:02:13 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:44:33.864 16:02:13 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:33.864 16:02:13 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:33.864 [ 00:44:33.864 { 00:44:33.864 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:44:33.864 "subtype": "Discovery", 00:44:33.864 "listen_addresses": [], 00:44:33.864 "allow_any_host": true, 00:44:33.864 "hosts": [] 00:44:33.864 }, 00:44:33.864 { 00:44:33.864 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:44:33.864 "subtype": "NVMe", 00:44:33.864 "listen_addresses": [ 00:44:33.864 { 00:44:33.864 "trtype": "TCP", 00:44:33.864 "adrfam": "IPv4", 00:44:33.864 "traddr": "10.0.0.2", 00:44:33.864 "trsvcid": "4420" 00:44:33.864 } 00:44:33.864 ], 00:44:33.864 "allow_any_host": true, 00:44:33.864 "hosts": [], 00:44:33.864 "serial_number": "SPDK00000000000001", 00:44:33.864 "model_number": "SPDK bdev Controller", 00:44:33.864 "max_namespaces": 1, 00:44:33.864 "min_cntlid": 1, 00:44:33.864 "max_cntlid": 65519, 00:44:33.864 "namespaces": [ 00:44:33.864 { 00:44:33.864 "nsid": 1, 00:44:33.864 "bdev_name": "Nvme0n1", 00:44:33.864 "name": "Nvme0n1", 00:44:33.864 "nguid": "3634473052605494002538450000002B", 00:44:33.864 "uuid": "36344730-5260-5494-0025-38450000002b" 00:44:33.864 } 00:44:33.864 ] 00:44:33.864 } 00:44:33.864 ] 00:44:33.864 16:02:13 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:33.864 16:02:13 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:44:33.864 16:02:13 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:44:33.864 16:02:13 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:44:34.125 16:02:13 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605494 00:44:34.125 16:02:13 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:44:34.125 16:02:13 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:44:34.125 16:02:13 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:44:34.386 16:02:13 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:44:34.386 16:02:13 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605494 '!=' S64GNE0R605494 ']' 00:44:34.386 16:02:13 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:44:34.386 16:02:13 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:34.386 16:02:13 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:34.386 16:02:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:34.386 16:02:13 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:34.386 16:02:13 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:44:34.386 16:02:13 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:44:34.386 16:02:13 nvmf_identify_passthru -- nvmf/common.sh@512 -- # nvmfcleanup 00:44:34.386 16:02:13 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:44:34.386 16:02:13 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:34.386 16:02:13 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:44:34.386 16:02:13 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:34.386 16:02:13 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:34.386 rmmod nvme_tcp 00:44:34.386 rmmod nvme_fabrics 00:44:34.386 rmmod nvme_keyring 00:44:34.386 16:02:13 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:34.386 16:02:13 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:44:34.386 16:02:13 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:44:34.386 16:02:13 nvmf_identify_passthru -- nvmf/common.sh@513 -- # '[' -n 3516928 ']' 00:44:34.386 16:02:13 nvmf_identify_passthru -- nvmf/common.sh@514 -- # killprocess 3516928 00:44:34.386 16:02:13 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 3516928 ']' 00:44:34.386 16:02:13 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 3516928 00:44:34.386 16:02:13 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:44:34.386 16:02:13 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:44:34.386 16:02:13 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3516928 00:44:34.386 16:02:13 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:44:34.386 16:02:13 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:44:34.386 16:02:13 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3516928' 00:44:34.386 killing process with pid 3516928 00:44:34.386 16:02:13 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 3516928 00:44:34.386 16:02:13 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 3516928 00:44:34.647 16:02:14 nvmf_identify_passthru -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:44:34.647 16:02:14 nvmf_identify_passthru -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:44:34.647 16:02:14 nvmf_identify_passthru -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:44:34.647 16:02:14 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:44:34.647 16:02:14 nvmf_identify_passthru -- nvmf/common.sh@787 -- # iptables-save 00:44:34.647 16:02:14 nvmf_identify_passthru -- 
nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:44:34.647 16:02:14 nvmf_identify_passthru -- nvmf/common.sh@787 -- # iptables-restore 00:44:34.647 16:02:14 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:34.647 16:02:14 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:34.647 16:02:14 nvmf_identify_passthru -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:34.647 16:02:14 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:34.647 16:02:14 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:37.191 16:02:16 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:44:37.191 00:44:37.191 real 0m13.231s 00:44:37.191 user 0m10.525s 00:44:37.191 sys 0m6.586s 00:44:37.191 16:02:16 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:37.191 16:02:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:37.191 ************************************ 00:44:37.191 END TEST nvmf_identify_passthru 00:44:37.191 ************************************ 00:44:37.191 16:02:16 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:44:37.191 16:02:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:44:37.191 16:02:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:44:37.191 16:02:16 -- common/autotest_common.sh@10 -- # set +x 00:44:37.191 ************************************ 00:44:37.191 START TEST nvmf_dif 00:44:37.191 ************************************ 00:44:37.191 16:02:16 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:44:37.191 * Looking for test storage... 
00:44:37.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:44:37.191 16:02:16 nvmf_dif -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:44:37.191 16:02:16 nvmf_dif -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:44:37.191 16:02:16 nvmf_dif -- common/autotest_common.sh@1681 -- # lcov --version 00:44:37.191 16:02:16 nvmf_dif -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:44:37.191 16:02:16 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:37.191 16:02:16 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:37.191 16:02:16 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:37.191 16:02:16 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:44:37.191 16:02:16 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:44:37.191 16:02:16 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:44:37.191 16:02:16 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:44:37.191 16:02:16 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:44:37.191 16:02:16 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:44:37.191 16:02:16 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:44:37.191 16:02:16 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:37.191 16:02:16 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:44:37.191 16:02:16 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:44:37.191 16:02:16 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:37.191 16:02:16 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:37.191 16:02:16 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:44:37.191 16:02:16 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:44:37.191 16:02:16 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:37.191 16:02:16 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:44:37.191 16:02:16 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:44:37.191 16:02:16 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:44:37.191 16:02:16 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:44:37.191 16:02:16 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:37.191 16:02:16 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:44:37.191 16:02:16 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:44:37.191 16:02:16 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:37.191 16:02:16 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:37.191 16:02:16 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:44:37.191 16:02:16 nvmf_dif -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:37.191 16:02:16 nvmf_dif -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:44:37.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:37.191 --rc genhtml_branch_coverage=1 00:44:37.191 --rc genhtml_function_coverage=1 00:44:37.191 --rc genhtml_legend=1 00:44:37.191 --rc geninfo_all_blocks=1 00:44:37.191 --rc geninfo_unexecuted_blocks=1 00:44:37.191 00:44:37.191 ' 00:44:37.191 16:02:16 nvmf_dif -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:44:37.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:37.191 --rc genhtml_branch_coverage=1 00:44:37.191 --rc genhtml_function_coverage=1 00:44:37.191 --rc genhtml_legend=1 00:44:37.191 --rc geninfo_all_blocks=1 00:44:37.191 --rc geninfo_unexecuted_blocks=1 00:44:37.191 00:44:37.191 ' 00:44:37.191 16:02:16 nvmf_dif -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:44:37.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:37.191 --rc genhtml_branch_coverage=1 00:44:37.191 --rc genhtml_function_coverage=1 00:44:37.191 --rc genhtml_legend=1 00:44:37.191 --rc geninfo_all_blocks=1 00:44:37.191 --rc geninfo_unexecuted_blocks=1 00:44:37.191 00:44:37.191 ' 00:44:37.191 16:02:16 nvmf_dif -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:44:37.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:37.191 --rc genhtml_branch_coverage=1 00:44:37.191 --rc genhtml_function_coverage=1 00:44:37.191 --rc genhtml_legend=1 00:44:37.191 --rc geninfo_all_blocks=1 00:44:37.191 --rc geninfo_unexecuted_blocks=1 00:44:37.191 00:44:37.191 ' 00:44:37.191 16:02:16 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:37.191 16:02:16 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:44:37.191 16:02:16 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:37.191 16:02:16 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:37.191 16:02:16 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:37.191 16:02:16 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:37.191 16:02:16 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:37.191 16:02:16 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:37.191 16:02:16 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:37.191 16:02:16 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:37.191 16:02:16 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:37.191 16:02:16 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:37.191 16:02:16 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:44:37.191 16:02:16 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:44:37.191 16:02:16 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:37.191 16:02:16 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:37.191 16:02:16 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:37.191 16:02:16 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:37.191 16:02:16 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:37.191 16:02:16 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:44:37.191 16:02:16 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:37.191 16:02:16 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:37.191 16:02:16 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:37.192 16:02:16 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:37.192 16:02:16 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:37.192 16:02:16 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:37.192 16:02:16 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:44:37.192 16:02:16 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:37.192 16:02:16 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:44:37.192 16:02:16 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:37.192 16:02:16 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:37.192 16:02:16 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:37.192 16:02:16 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:37.192 16:02:16 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:37.192 16:02:16 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:44:37.192 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:37.192 16:02:16 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:37.192 16:02:16 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:37.192 16:02:16 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:37.192 16:02:16 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:44:37.192 16:02:16 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:44:37.192 16:02:16 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:44:37.192 16:02:16 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:44:37.192 16:02:16 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:44:37.192 16:02:16 nvmf_dif -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:44:37.192 16:02:16 nvmf_dif -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:37.192 16:02:16 nvmf_dif -- nvmf/common.sh@472 -- # prepare_net_devs 00:44:37.192 16:02:16 nvmf_dif -- nvmf/common.sh@434 -- # local -g is_hw=no 00:44:37.192 16:02:16 nvmf_dif -- nvmf/common.sh@436 -- # remove_spdk_ns 00:44:37.192 16:02:16 nvmf_dif -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:37.192 16:02:16 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:37.192 16:02:16 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:37.192 16:02:16 nvmf_dif -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:44:37.192 16:02:16 nvmf_dif -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:44:37.192 16:02:16 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:44:37.192 16:02:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:44:45.328 16:02:23 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:44:45.328 
16:02:23 nvmf_dif -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:44:45.328 Found 0000:31:00.0 (0x8086 - 0x159b) 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:44:45.328 Found 0000:31:00.1 (0x8086 - 0x159b) 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@414 -- # [[ up == up ]] 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@423 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:44:45.328 Found net devices under 0000:31:00.0: cvl_0_0 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@414 -- # [[ up == up ]] 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:44:45.328 Found net devices under 0000:31:00.1: cvl_0_1 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@438 -- # is_hw=yes 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:44:45.328 16:02:23 nvmf_dif -- 
nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:45.328 16:02:23 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:45.329 16:02:23 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:44:45.329 16:02:23 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:45.329 16:02:23 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:45.329 16:02:23 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:45.329 16:02:23 nvmf_dif -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:44:45.329 16:02:23 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:44:45.329 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:44:45.329 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.673 ms 00:44:45.329 00:44:45.329 --- 10.0.0.2 ping statistics --- 00:44:45.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:45.329 rtt min/avg/max/mdev = 0.673/0.673/0.673/0.000 ms 00:44:45.329 16:02:23 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:45.329 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:44:45.329 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:44:45.329 00:44:45.329 --- 10.0.0.1 ping statistics --- 00:44:45.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:45.329 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:44:45.329 16:02:23 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:45.329 16:02:23 nvmf_dif -- nvmf/common.sh@446 -- # return 0 00:44:45.329 16:02:23 nvmf_dif -- nvmf/common.sh@474 -- # '[' iso == iso ']' 00:44:45.329 16:02:23 nvmf_dif -- nvmf/common.sh@475 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:44:47.874 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:44:47.874 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:44:47.874 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:44:47.874 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:44:47.874 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:44:47.875 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:44:47.875 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:44:47.875 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:44:47.875 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:44:47.875 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:44:47.875 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:44:47.875 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:44:47.875 0000:00:01.5 (8086 0b00): Already 
using the vfio-pci driver 00:44:47.875 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:44:47.875 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:44:47.875 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:44:47.875 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:44:48.135 16:02:27 nvmf_dif -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:48.135 16:02:27 nvmf_dif -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:44:48.135 16:02:27 nvmf_dif -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:44:48.135 16:02:27 nvmf_dif -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:48.135 16:02:27 nvmf_dif -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:44:48.135 16:02:27 nvmf_dif -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:44:48.135 16:02:27 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:44:48.135 16:02:27 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:44:48.135 16:02:27 nvmf_dif -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:44:48.135 16:02:27 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:44:48.135 16:02:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:48.135 16:02:27 nvmf_dif -- nvmf/common.sh@505 -- # nvmfpid=3522962 00:44:48.135 16:02:27 nvmf_dif -- nvmf/common.sh@506 -- # waitforlisten 3522962 00:44:48.135 16:02:27 nvmf_dif -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:44:48.135 16:02:27 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 3522962 ']' 00:44:48.135 16:02:27 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:48.135 16:02:27 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:44:48.135 16:02:27 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:44:48.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:48.135 16:02:27 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:44:48.135 16:02:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:48.395 [2024-10-01 16:02:27.596185] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:44:48.395 [2024-10-01 16:02:27.596235] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:48.395 [2024-10-01 16:02:27.633277] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:44:48.395 [2024-10-01 16:02:27.680495] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:48.395 [2024-10-01 16:02:27.712324] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:48.395 [2024-10-01 16:02:27.712360] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:48.395 [2024-10-01 16:02:27.712372] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:48.395 [2024-10-01 16:02:27.712380] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:48.395 [2024-10-01 16:02:27.712388] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:44:48.395 [2024-10-01 16:02:27.712411] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:44:48.965 16:02:28 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:44:48.965 16:02:28 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:44:48.965 16:02:28 nvmf_dif -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:44:48.965 16:02:28 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:48.965 16:02:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:49.225 16:02:28 nvmf_dif -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:49.225 16:02:28 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:44:49.225 16:02:28 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:44:49.225 16:02:28 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:49.225 16:02:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:49.225 [2024-10-01 16:02:28.450127] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:49.225 16:02:28 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:49.225 16:02:28 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:44:49.225 16:02:28 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:44:49.225 16:02:28 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:44:49.225 16:02:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:49.225 ************************************ 00:44:49.225 START TEST fio_dif_1_default 00:44:49.225 ************************************ 00:44:49.225 16:02:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:44:49.225 16:02:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:44:49.225 16:02:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:44:49.225 16:02:28 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:44:49.225 16:02:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:44:49.225 16:02:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:44:49.225 16:02:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:44:49.225 16:02:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:49.225 16:02:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:49.225 bdev_null0 00:44:49.225 16:02:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:49.225 16:02:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:44:49.225 16:02:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:49.225 16:02:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:49.225 16:02:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:49.225 16:02:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:44:49.225 16:02:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:49.225 16:02:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:49.225 16:02:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:49.226 16:02:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:44:49.226 16:02:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:49.226 16:02:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:49.226 [2024-10-01 16:02:28.542631] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:49.226 16:02:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:49.226 16:02:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:44:49.226 16:02:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:44:49.226 16:02:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:44:49.226 16:02:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # config=() 00:44:49.226 16:02:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:49.226 16:02:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # local subsystem config 00:44:49.226 16:02:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:44:49.226 16:02:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:49.226 16:02:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:44:49.226 { 00:44:49.226 "params": { 00:44:49.226 "name": "Nvme$subsystem", 00:44:49.226 "trtype": "$TEST_TRANSPORT", 00:44:49.226 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:49.226 "adrfam": "ipv4", 00:44:49.226 "trsvcid": "$NVMF_PORT", 00:44:49.226 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:49.226 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:49.226 "hdgst": ${hdgst:-false}, 00:44:49.226 "ddgst": ${ddgst:-false} 00:44:49.226 }, 00:44:49.226 "method": "bdev_nvme_attach_controller" 00:44:49.226 } 00:44:49.226 EOF 00:44:49.226 )") 00:44:49.226 16:02:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:44:49.226 16:02:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 
00:44:49.226 16:02:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:49.226 16:02:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:44:49.226 16:02:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:44:49.226 16:02:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:44:49.226 16:02:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:49.226 16:02:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:44:49.226 16:02:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:44:49.226 16:02:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:44:49.226 16:02:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@578 -- # cat 00:44:49.226 16:02:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:49.226 16:02:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:44:49.226 16:02:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:44:49.226 16:02:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:44:49.226 16:02:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:44:49.226 16:02:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # jq . 
00:44:49.226 16:02:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@581 -- # IFS=, 00:44:49.226 16:02:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:44:49.226 "params": { 00:44:49.226 "name": "Nvme0", 00:44:49.226 "trtype": "tcp", 00:44:49.226 "traddr": "10.0.0.2", 00:44:49.226 "adrfam": "ipv4", 00:44:49.226 "trsvcid": "4420", 00:44:49.226 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:49.226 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:49.226 "hdgst": false, 00:44:49.226 "ddgst": false 00:44:49.226 }, 00:44:49.226 "method": "bdev_nvme_attach_controller" 00:44:49.226 }' 00:44:49.226 16:02:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:44:49.226 16:02:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:44:49.226 16:02:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:44:49.226 16:02:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:49.226 16:02:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:44:49.226 16:02:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:44:49.226 16:02:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:44:49.226 16:02:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:44:49.226 16:02:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:44:49.226 16:02:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:49.793 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:44:49.793 fio-3.35 
00:44:49.793 Starting 1 thread 00:45:02.210 00:45:02.210 filename0: (groupid=0, jobs=1): err= 0: pid=3523466: Tue Oct 1 16:02:39 2024 00:45:02.210 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10020msec) 00:45:02.210 slat (nsec): min=5421, max=31945, avg=6442.18, stdev=1732.28 00:45:02.210 clat (usec): min=40851, max=43796, avg=41047.07, stdev=284.67 00:45:02.210 lat (usec): min=40860, max=43828, avg=41053.51, stdev=285.40 00:45:02.210 clat percentiles (usec): 00:45:02.210 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:45:02.210 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:45:02.210 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:45:02.210 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:45:02.210 | 99.99th=[43779] 00:45:02.210 bw ( KiB/s): min= 384, max= 416, per=99.58%, avg=388.80, stdev=11.72, samples=20 00:45:02.210 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:45:02.210 lat (msec) : 50=100.00% 00:45:02.210 cpu : usr=93.16%, sys=6.62%, ctx=13, majf=0, minf=247 00:45:02.210 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:02.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:02.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:02.210 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:02.210 latency : target=0, window=0, percentile=100.00%, depth=4 00:45:02.210 00:45:02.210 Run status group 0 (all jobs): 00:45:02.210 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10020-10020msec 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@46 -- # destroy_subsystem 0 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:02.210 00:45:02.210 real 0m11.081s 00:45:02.210 user 0m19.021s 00:45:02.210 sys 0m1.084s 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:45:02.210 ************************************ 00:45:02.210 END TEST fio_dif_1_default 00:45:02.210 ************************************ 00:45:02.210 16:02:39 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:45:02.210 16:02:39 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:45:02.210 16:02:39 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:45:02.210 16:02:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:02.210 ************************************ 00:45:02.210 START TEST fio_dif_1_multi_subsystems 00:45:02.210 ************************************ 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:02.210 bdev_null0 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:02.210 [2024-10-01 16:02:39.707203] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:02.210 bdev_null1 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:02.210 16:02:39 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # config=() 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # local subsystem config 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:45:02.210 { 00:45:02.210 "params": { 00:45:02.210 "name": "Nvme$subsystem", 00:45:02.210 "trtype": "$TEST_TRANSPORT", 00:45:02.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:02.210 "adrfam": "ipv4", 00:45:02.210 "trsvcid": "$NVMF_PORT", 00:45:02.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:02.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:02.210 "hdgst": ${hdgst:-false}, 00:45:02.210 "ddgst": ${ddgst:-false} 00:45:02.210 }, 00:45:02.210 "method": "bdev_nvme_attach_controller" 00:45:02.210 } 00:45:02.210 EOF 00:45:02.210 )") 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:45:02.210 16:02:39 
nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # cat 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:45:02.210 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:45:02.211 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:45:02.211 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:45:02.211 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:45:02.211 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:45:02.211 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:45:02.211 { 00:45:02.211 "params": { 00:45:02.211 "name": "Nvme$subsystem", 00:45:02.211 "trtype": "$TEST_TRANSPORT", 00:45:02.211 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:02.211 "adrfam": "ipv4", 00:45:02.211 "trsvcid": "$NVMF_PORT", 00:45:02.211 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:02.211 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:02.211 "hdgst": ${hdgst:-false}, 00:45:02.211 "ddgst": ${ddgst:-false} 00:45:02.211 }, 00:45:02.211 "method": "bdev_nvme_attach_controller" 00:45:02.211 } 00:45:02.211 EOF 00:45:02.211 )") 00:45:02.211 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:45:02.211 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:45:02.211 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # cat 00:45:02.211 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # jq . 
00:45:02.211 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@581 -- # IFS=, 00:45:02.211 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:45:02.211 "params": { 00:45:02.211 "name": "Nvme0", 00:45:02.211 "trtype": "tcp", 00:45:02.211 "traddr": "10.0.0.2", 00:45:02.211 "adrfam": "ipv4", 00:45:02.211 "trsvcid": "4420", 00:45:02.211 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:02.211 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:02.211 "hdgst": false, 00:45:02.211 "ddgst": false 00:45:02.211 }, 00:45:02.211 "method": "bdev_nvme_attach_controller" 00:45:02.211 },{ 00:45:02.211 "params": { 00:45:02.211 "name": "Nvme1", 00:45:02.211 "trtype": "tcp", 00:45:02.211 "traddr": "10.0.0.2", 00:45:02.211 "adrfam": "ipv4", 00:45:02.211 "trsvcid": "4420", 00:45:02.211 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:02.211 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:02.211 "hdgst": false, 00:45:02.211 "ddgst": false 00:45:02.211 }, 00:45:02.211 "method": "bdev_nvme_attach_controller" 00:45:02.211 }' 00:45:02.211 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:45:02.211 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:45:02.211 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:45:02.211 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:02.211 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:45:02.211 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:45:02.211 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:45:02.211 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:45:02.211 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:02.211 16:02:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:02.211 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:45:02.211 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:45:02.211 fio-3.35 00:45:02.211 Starting 2 threads 00:45:12.212 00:45:12.212 filename0: (groupid=0, jobs=1): err= 0: pid=3525930: Tue Oct 1 16:02:50 2024 00:45:12.212 read: IOPS=95, BW=381KiB/s (391kB/s)(3824KiB/10024msec) 00:45:12.212 slat (nsec): min=5441, max=46095, avg=6442.82, stdev=2153.11 00:45:12.212 clat (usec): min=40861, max=42119, avg=41919.50, stdev=248.48 00:45:12.212 lat (usec): min=40869, max=42125, avg=41925.94, stdev=248.21 00:45:12.212 clat percentiles (usec): 00:45:12.212 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[42206], 00:45:12.212 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:45:12.212 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:45:12.212 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:45:12.212 | 99.99th=[42206] 00:45:12.212 bw ( KiB/s): min= 352, max= 384, per=49.39%, avg=380.80, stdev= 9.85, samples=20 00:45:12.212 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:45:12.212 lat (msec) : 50=100.00% 00:45:12.212 cpu : usr=95.51%, sys=4.24%, ctx=71, majf=0, minf=123 00:45:12.212 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:12.212 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:12.212 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:12.212 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:12.212 latency : target=0, window=0, percentile=100.00%, depth=4 00:45:12.212 filename1: (groupid=0, jobs=1): err= 0: pid=3525931: Tue Oct 1 16:02:50 2024 00:45:12.212 read: IOPS=97, BW=389KiB/s (398kB/s)(3888KiB/10001msec) 00:45:12.212 slat (nsec): min=5395, max=32194, avg=6252.48, stdev=1527.25 00:45:12.212 clat (usec): min=40887, max=42084, avg=41135.94, stdev=361.51 00:45:12.212 lat (usec): min=40893, max=42116, avg=41142.19, stdev=361.54 00:45:12.212 clat percentiles (usec): 00:45:12.212 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:45:12.212 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:45:12.212 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:45:12.212 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:45:12.212 | 99.99th=[42206] 00:45:12.212 bw ( KiB/s): min= 384, max= 416, per=50.30%, avg=387.37, stdev=10.09, samples=19 00:45:12.212 iops : min= 96, max= 104, avg=96.84, stdev= 2.52, samples=19 00:45:12.212 lat (msec) : 50=100.00% 00:45:12.212 cpu : usr=95.61%, sys=4.19%, ctx=10, majf=0, minf=147 00:45:12.212 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:12.212 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:12.212 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:12.212 issued rwts: total=972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:12.212 latency : target=0, window=0, percentile=100.00%, depth=4 00:45:12.212 00:45:12.212 Run status group 0 (all jobs): 00:45:12.212 READ: bw=769KiB/s (788kB/s), 381KiB/s-389KiB/s (391kB/s-398kB/s), io=7712KiB (7897kB), run=10001-10024msec 00:45:12.212 16:02:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:45:12.212 16:02:51 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@43 -- # local sub 00:45:12.212 16:02:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:45:12.212 16:02:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:12.212 16:02:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:45:12.212 16:02:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:12.212 16:02:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:12.212 16:02:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:12.212 16:02:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:12.212 16:02:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:12.212 16:02:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:12.212 16:02:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:12.212 16:02:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:12.212 16:02:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:45:12.212 16:02:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:45:12.212 16:02:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:45:12.212 16:02:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:12.212 16:02:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:12.212 16:02:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:12.212 16:02:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:45:12.212 16:02:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:45:12.212 16:02:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:12.212 16:02:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:12.212 16:02:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:12.212 00:45:12.212 real 0m11.453s 00:45:12.212 user 0m34.562s 00:45:12.212 sys 0m1.224s 00:45:12.212 16:02:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:12.212 16:02:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:12.212 ************************************ 00:45:12.213 END TEST fio_dif_1_multi_subsystems 00:45:12.213 ************************************ 00:45:12.213 16:02:51 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:45:12.213 16:02:51 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:45:12.213 16:02:51 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:45:12.213 16:02:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:12.213 ************************************ 00:45:12.213 START TEST fio_dif_rand_params 00:45:12.213 ************************************ 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:45:12.213 16:02:51 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:12.213 bdev_null0 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:12.213 [2024-10-01 16:02:51.242234] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:45:12.213 { 00:45:12.213 "params": { 00:45:12.213 "name": "Nvme$subsystem", 00:45:12.213 "trtype": "$TEST_TRANSPORT", 00:45:12.213 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:12.213 "adrfam": "ipv4", 00:45:12.213 "trsvcid": "$NVMF_PORT", 00:45:12.213 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:12.213 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:45:12.213 "hdgst": ${hdgst:-false}, 00:45:12.213 "ddgst": ${ddgst:-false} 00:45:12.213 }, 00:45:12.213 "method": "bdev_nvme_attach_controller" 00:45:12.213 } 00:45:12.213 EOF 00:45:12.213 )") 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 
00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:45:12.213 "params": { 00:45:12.213 "name": "Nvme0", 00:45:12.213 "trtype": "tcp", 00:45:12.213 "traddr": "10.0.0.2", 00:45:12.213 "adrfam": "ipv4", 00:45:12.213 "trsvcid": "4420", 00:45:12.213 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:12.213 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:12.213 "hdgst": false, 00:45:12.213 "ddgst": false 00:45:12.213 }, 00:45:12.213 "method": "bdev_nvme_attach_controller" 00:45:12.213 }' 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:12.213 16:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:12.474 filename0: (g=0): rw=randread, bs=(R) 
128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:45:12.474 ... 00:45:12.474 fio-3.35 00:45:12.474 Starting 3 threads 00:45:19.059 00:45:19.059 filename0: (groupid=0, jobs=1): err= 0: pid=3528125: Tue Oct 1 16:02:57 2024 00:45:19.059 read: IOPS=316, BW=39.5MiB/s (41.4MB/s)(199MiB/5047msec) 00:45:19.059 slat (nsec): min=7975, max=31806, avg=9173.59, stdev=1079.09 00:45:19.059 clat (usec): min=4236, max=89622, avg=9454.21, stdev=9077.09 00:45:19.059 lat (usec): min=4245, max=89631, avg=9463.38, stdev=9077.18 00:45:19.059 clat percentiles (usec): 00:45:19.059 | 1.00th=[ 4752], 5.00th=[ 5473], 10.00th=[ 5932], 20.00th=[ 6390], 00:45:19.059 | 30.00th=[ 6783], 40.00th=[ 7242], 50.00th=[ 7635], 60.00th=[ 8029], 00:45:19.059 | 70.00th=[ 8455], 80.00th=[ 9110], 90.00th=[10421], 95.00th=[11600], 00:45:19.059 | 99.00th=[49021], 99.50th=[50594], 99.90th=[89654], 99.95th=[89654], 00:45:19.059 | 99.99th=[89654] 00:45:19.059 bw ( KiB/s): min=32256, max=49408, per=37.79%, avg=40729.60, stdev=6486.96, samples=10 00:45:19.059 iops : min= 252, max= 386, avg=318.40, stdev=50.48, samples=10 00:45:19.059 lat (msec) : 10=86.96%, 20=8.97%, 50=3.45%, 100=0.63% 00:45:19.059 cpu : usr=94.13%, sys=5.59%, ctx=15, majf=0, minf=44 00:45:19.059 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:19.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:19.059 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:19.059 issued rwts: total=1595,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:19.059 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:19.059 filename0: (groupid=0, jobs=1): err= 0: pid=3528126: Tue Oct 1 16:02:57 2024 00:45:19.059 read: IOPS=144, BW=18.0MiB/s (18.9MB/s)(90.2MiB/5003msec) 00:45:19.059 slat (nsec): min=5489, max=31111, avg=6193.02, stdev=1020.11 00:45:19.059 clat (msec): min=3, max=131, avg=20.78, stdev=23.50 00:45:19.059 lat (msec): min=3, max=131, 
avg=20.78, stdev=23.50 00:45:19.059 clat percentiles (msec): 00:45:19.059 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 8], 00:45:19.059 | 30.00th=[ 8], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 10], 00:45:19.059 | 70.00th=[ 11], 80.00th=[ 49], 90.00th=[ 51], 95.00th=[ 54], 00:45:19.060 | 99.00th=[ 92], 99.50th=[ 93], 99.90th=[ 132], 99.95th=[ 132], 00:45:19.060 | 99.99th=[ 132] 00:45:19.060 bw ( KiB/s): min=12544, max=27648, per=17.63%, avg=19000.89, stdev=5472.32, samples=9 00:45:19.060 iops : min= 98, max= 216, avg=148.44, stdev=42.75, samples=9 00:45:19.060 lat (msec) : 4=0.14%, 10=65.37%, 20=9.00%, 50=13.99%, 100=11.08% 00:45:19.060 lat (msec) : 250=0.42% 00:45:19.060 cpu : usr=96.22%, sys=3.56%, ctx=34, majf=0, minf=68 00:45:19.060 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:19.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:19.060 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:19.060 issued rwts: total=722,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:19.060 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:19.060 filename0: (groupid=0, jobs=1): err= 0: pid=3528127: Tue Oct 1 16:02:57 2024 00:45:19.060 read: IOPS=383, BW=47.9MiB/s (50.2MB/s)(242MiB/5043msec) 00:45:19.060 slat (nsec): min=5435, max=31614, avg=8106.67, stdev=1455.64 00:45:19.060 clat (usec): min=3513, max=49077, avg=7794.97, stdev=7462.02 00:45:19.060 lat (usec): min=3522, max=49085, avg=7803.07, stdev=7462.13 00:45:19.060 clat percentiles (usec): 00:45:19.060 | 1.00th=[ 3785], 5.00th=[ 4359], 10.00th=[ 4883], 20.00th=[ 5407], 00:45:19.060 | 30.00th=[ 5735], 40.00th=[ 5997], 50.00th=[ 6259], 60.00th=[ 6587], 00:45:19.060 | 70.00th=[ 7046], 80.00th=[ 7701], 90.00th=[ 8455], 95.00th=[ 9241], 00:45:19.060 | 99.00th=[46924], 99.50th=[47973], 99.90th=[49021], 99.95th=[49021], 00:45:19.060 | 99.99th=[49021] 00:45:19.060 bw ( KiB/s): min=35072, max=58368, per=45.86%, avg=49433.60, 
stdev=8481.51, samples=10 00:45:19.060 iops : min= 274, max= 456, avg=386.20, stdev=66.26, samples=10 00:45:19.060 lat (msec) : 4=2.38%, 10=93.64%, 20=0.47%, 50=3.52% 00:45:19.060 cpu : usr=93.99%, sys=5.77%, ctx=10, majf=0, minf=157 00:45:19.060 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:19.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:19.060 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:19.060 issued rwts: total=1933,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:19.060 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:19.060 00:45:19.060 Run status group 0 (all jobs): 00:45:19.060 READ: bw=105MiB/s (110MB/s), 18.0MiB/s-47.9MiB/s (18.9MB/s-50.2MB/s), io=531MiB (557MB), run=5003-5047msec 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:19.060 bdev_null0 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:19.060 16:02:57 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:19.060 [2024-10-01 16:02:57.480703] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:19.060 bdev_null1 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:45:19.060 16:02:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:19.061 bdev_null2 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:19.061 16:02:57 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:45:19.061 { 00:45:19.061 "params": { 00:45:19.061 "name": "Nvme$subsystem", 00:45:19.061 "trtype": "$TEST_TRANSPORT", 00:45:19.061 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:19.061 "adrfam": "ipv4", 00:45:19.061 "trsvcid": "$NVMF_PORT", 00:45:19.061 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:19.061 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:19.061 "hdgst": ${hdgst:-false}, 00:45:19.061 "ddgst": ${ddgst:-false} 00:45:19.061 }, 00:45:19.061 "method": "bdev_nvme_attach_controller" 00:45:19.061 } 00:45:19.061 EOF 00:45:19.061 )") 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:45:19.061 16:02:57 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:45:19.061 { 00:45:19.061 "params": { 00:45:19.061 "name": "Nvme$subsystem", 00:45:19.061 "trtype": "$TEST_TRANSPORT", 00:45:19.061 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:19.061 "adrfam": "ipv4", 00:45:19.061 "trsvcid": "$NVMF_PORT", 00:45:19.061 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:19.061 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:19.061 "hdgst": ${hdgst:-false}, 00:45:19.061 "ddgst": ${ddgst:-false} 00:45:19.061 }, 00:45:19.061 "method": "bdev_nvme_attach_controller" 00:45:19.061 } 00:45:19.061 EOF 00:45:19.061 )") 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:45:19.061 16:02:57 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:45:19.061 { 00:45:19.061 "params": { 00:45:19.061 "name": "Nvme$subsystem", 00:45:19.061 "trtype": "$TEST_TRANSPORT", 00:45:19.061 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:19.061 "adrfam": "ipv4", 00:45:19.061 "trsvcid": "$NVMF_PORT", 00:45:19.061 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:19.061 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:19.061 "hdgst": ${hdgst:-false}, 00:45:19.061 "ddgst": ${ddgst:-false} 00:45:19.061 }, 00:45:19.061 "method": "bdev_nvme_attach_controller" 00:45:19.061 } 00:45:19.061 EOF 00:45:19.061 )") 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 
00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:45:19.061 "params": { 00:45:19.061 "name": "Nvme0", 00:45:19.061 "trtype": "tcp", 00:45:19.061 "traddr": "10.0.0.2", 00:45:19.061 "adrfam": "ipv4", 00:45:19.061 "trsvcid": "4420", 00:45:19.061 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:19.061 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:19.061 "hdgst": false, 00:45:19.061 "ddgst": false 00:45:19.061 }, 00:45:19.061 "method": "bdev_nvme_attach_controller" 00:45:19.061 },{ 00:45:19.061 "params": { 00:45:19.061 "name": "Nvme1", 00:45:19.061 "trtype": "tcp", 00:45:19.061 "traddr": "10.0.0.2", 00:45:19.061 "adrfam": "ipv4", 00:45:19.061 "trsvcid": "4420", 00:45:19.061 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:19.061 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:19.061 "hdgst": false, 00:45:19.061 "ddgst": false 00:45:19.061 }, 00:45:19.061 "method": "bdev_nvme_attach_controller" 00:45:19.061 },{ 00:45:19.061 "params": { 00:45:19.061 "name": "Nvme2", 00:45:19.061 "trtype": "tcp", 00:45:19.061 "traddr": "10.0.0.2", 00:45:19.061 "adrfam": "ipv4", 00:45:19.061 "trsvcid": "4420", 00:45:19.061 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:45:19.061 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:45:19.061 "hdgst": false, 00:45:19.061 "ddgst": false 00:45:19.061 }, 00:45:19.061 "method": "bdev_nvme_attach_controller" 00:45:19.061 }' 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:19.061 16:02:57 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:19.061 16:02:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:19.061 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:45:19.061 ... 00:45:19.061 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:45:19.061 ... 00:45:19.061 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:45:19.061 ... 
00:45:19.061 fio-3.35 00:45:19.061 Starting 24 threads 00:45:31.291 00:45:31.291 filename0: (groupid=0, jobs=1): err= 0: pid=3529585: Tue Oct 1 16:03:09 2024 00:45:31.291 read: IOPS=634, BW=2537KiB/s (2598kB/s)(24.8MiB/10008msec) 00:45:31.291 slat (usec): min=5, max=124, avg=28.58, stdev=22.55 00:45:31.291 clat (msec): min=6, max=222, avg=24.99, stdev=15.16 00:45:31.291 lat (msec): min=6, max=222, avg=25.02, stdev=15.16 00:45:31.291 clat percentiles (msec): 00:45:31.291 | 1.00th=[ 20], 5.00th=[ 23], 10.00th=[ 24], 20.00th=[ 24], 00:45:31.291 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:45:31.291 | 70.00th=[ 24], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 25], 00:45:31.291 | 99.00th=[ 41], 99.50th=[ 186], 99.90th=[ 222], 99.95th=[ 222], 00:45:31.291 | 99.99th=[ 222] 00:45:31.291 bw ( KiB/s): min= 640, max= 2784, per=4.15%, avg=2531.37, stdev=492.16, samples=19 00:45:31.291 iops : min= 160, max= 696, avg=632.84, stdev=123.04, samples=19 00:45:31.291 lat (msec) : 10=0.50%, 20=0.58%, 50=98.16%, 250=0.76% 00:45:31.291 cpu : usr=98.60%, sys=0.87%, ctx=94, majf=0, minf=24 00:45:31.291 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:45:31.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:31.291 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:31.291 issued rwts: total=6348,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:31.291 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:31.291 filename0: (groupid=0, jobs=1): err= 0: pid=3529586: Tue Oct 1 16:03:09 2024 00:45:31.291 read: IOPS=647, BW=2591KiB/s (2653kB/s)(25.4MiB/10026msec) 00:45:31.291 slat (usec): min=5, max=148, avg=23.73, stdev=23.25 00:45:31.291 clat (msec): min=5, max=317, avg=24.51, stdev=17.88 00:45:31.291 lat (msec): min=5, max=317, avg=24.54, stdev=17.88 00:45:31.291 clat percentiles (msec): 00:45:31.291 | 1.00th=[ 11], 5.00th=[ 15], 10.00th=[ 18], 20.00th=[ 22], 00:45:31.291 | 30.00th=[ 23], 
40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:45:31.291 | 70.00th=[ 24], 80.00th=[ 25], 90.00th=[ 27], 95.00th=[ 32], 00:45:31.291 | 99.00th=[ 42], 99.50th=[ 186], 99.90th=[ 317], 99.95th=[ 317], 00:45:31.291 | 99.99th=[ 317] 00:45:31.291 bw ( KiB/s): min= 256, max= 2880, per=4.26%, avg=2593.60, stdev=569.74, samples=20 00:45:31.291 iops : min= 64, max= 720, avg=648.40, stdev=142.44, samples=20 00:45:31.291 lat (msec) : 10=0.68%, 20=14.06%, 50=84.52%, 250=0.52%, 500=0.22% 00:45:31.291 cpu : usr=98.85%, sys=0.81%, ctx=86, majf=0, minf=16 00:45:31.291 IO depths : 1=2.2%, 2=4.4%, 4=12.2%, 8=69.8%, 16=11.5%, 32=0.0%, >=64=0.0% 00:45:31.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:31.291 complete : 0=0.0%, 4=90.8%, 8=4.6%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:31.291 issued rwts: total=6494,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:31.291 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:31.291 filename0: (groupid=0, jobs=1): err= 0: pid=3529587: Tue Oct 1 16:03:09 2024 00:45:31.291 read: IOPS=629, BW=2519KiB/s (2579kB/s)(24.6MiB/10012msec) 00:45:31.291 slat (usec): min=3, max=155, avg=40.57, stdev=21.77 00:45:31.291 clat (msec): min=12, max=406, avg=25.04, stdev=19.13 00:45:31.291 lat (msec): min=13, max=406, avg=25.08, stdev=19.13 00:45:31.291 clat percentiles (msec): 00:45:31.291 | 1.00th=[ 23], 5.00th=[ 23], 10.00th=[ 23], 20.00th=[ 24], 00:45:31.291 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:45:31.291 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 25], 95.00th=[ 25], 00:45:31.291 | 99.00th=[ 26], 99.50th=[ 228], 99.90th=[ 330], 99.95th=[ 330], 00:45:31.291 | 99.99th=[ 405] 00:45:31.291 bw ( KiB/s): min= 128, max= 2688, per=4.13%, avg=2515.20, stdev=574.35, samples=20 00:45:31.291 iops : min= 32, max= 672, avg=628.80, stdev=143.59, samples=20 00:45:31.291 lat (msec) : 20=0.06%, 50=99.21%, 100=0.22%, 250=0.25%, 500=0.25% 00:45:31.291 cpu : usr=98.98%, sys=0.72%, ctx=19, majf=0, minf=18 
00:45:31.291 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:31.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:31.291 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:31.291 issued rwts: total=6304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:31.291 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:31.291 filename0: (groupid=0, jobs=1): err= 0: pid=3529588: Tue Oct 1 16:03:09 2024 00:45:31.291 read: IOPS=634, BW=2539KiB/s (2600kB/s)(24.8MiB/10009msec) 00:45:31.291 slat (usec): min=5, max=118, avg=32.66, stdev=17.69 00:45:31.291 clat (msec): min=9, max=515, avg=24.93, stdev=24.89 00:45:31.291 lat (msec): min=9, max=515, avg=24.96, stdev=24.89 00:45:31.292 clat percentiles (msec): 00:45:31.292 | 1.00th=[ 16], 5.00th=[ 23], 10.00th=[ 24], 20.00th=[ 24], 00:45:31.292 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:45:31.292 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 25], 95.00th=[ 25], 00:45:31.292 | 99.00th=[ 27], 99.50th=[ 82], 99.90th=[ 514], 99.95th=[ 514], 00:45:31.292 | 99.99th=[ 514] 00:45:31.292 bw ( KiB/s): min= 2448, max= 2688, per=4.37%, avg=2660.44, stdev=67.17, samples=18 00:45:31.292 iops : min= 612, max= 672, avg=665.11, stdev=16.79, samples=18 00:45:31.292 lat (msec) : 10=0.25%, 20=1.73%, 50=97.51%, 100=0.25%, 750=0.25% 00:45:31.292 cpu : usr=99.05%, sys=0.64%, ctx=68, majf=0, minf=20 00:45:31.292 IO depths : 1=6.0%, 2=12.1%, 4=24.6%, 8=50.8%, 16=6.5%, 32=0.0%, >=64=0.0% 00:45:31.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:31.292 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:31.292 issued rwts: total=6354,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:31.292 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:31.292 filename0: (groupid=0, jobs=1): err= 0: pid=3529589: Tue Oct 1 16:03:09 2024 00:45:31.292 read: IOPS=632, BW=2529KiB/s 
(2590kB/s)(24.7MiB/10008msec) 00:45:31.292 slat (usec): min=5, max=132, avg=37.18, stdev=21.09 00:45:31.292 clat (msec): min=6, max=592, avg=24.96, stdev=28.61 00:45:31.292 lat (msec): min=6, max=592, avg=25.00, stdev=28.61 00:45:31.292 clat percentiles (msec): 00:45:31.292 | 1.00th=[ 15], 5.00th=[ 23], 10.00th=[ 23], 20.00th=[ 24], 00:45:31.292 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:45:31.292 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 25], 95.00th=[ 25], 00:45:31.292 | 99.00th=[ 30], 99.50th=[ 37], 99.90th=[ 592], 99.95th=[ 592], 00:45:31.292 | 99.99th=[ 592] 00:45:31.292 bw ( KiB/s): min= 2176, max= 2736, per=4.35%, avg=2648.89, stdev=128.50, samples=18 00:45:31.292 iops : min= 544, max= 684, avg=662.22, stdev=32.12, samples=18 00:45:31.292 lat (msec) : 10=0.35%, 20=1.82%, 50=97.58%, 750=0.25% 00:45:31.292 cpu : usr=98.58%, sys=0.96%, ctx=64, majf=0, minf=16 00:45:31.292 IO depths : 1=5.6%, 2=11.6%, 4=24.0%, 8=51.9%, 16=6.9%, 32=0.0%, >=64=0.0% 00:45:31.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:31.292 complete : 0=0.0%, 4=93.9%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:31.292 issued rwts: total=6328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:31.292 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:31.292 filename0: (groupid=0, jobs=1): err= 0: pid=3529590: Tue Oct 1 16:03:09 2024 00:45:31.292 read: IOPS=631, BW=2525KiB/s (2585kB/s)(24.7MiB/10013msec) 00:45:31.292 slat (usec): min=5, max=134, avg=15.88, stdev= 8.78 00:45:31.292 clat (msec): min=15, max=370, avg=25.22, stdev=18.22 00:45:31.292 lat (msec): min=15, max=371, avg=25.23, stdev=18.22 00:45:31.292 clat percentiles (msec): 00:45:31.292 | 1.00th=[ 23], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:45:31.292 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:45:31.292 | 70.00th=[ 24], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 25], 00:45:31.292 | 99.00th=[ 27], 99.50th=[ 245], 99.90th=[ 284], 99.95th=[ 284], 
00:45:31.292 | 99.99th=[ 372] 00:45:31.292 bw ( KiB/s): min= 128, max= 2688, per=4.12%, avg=2512.84, stdev=589.99, samples=19 00:45:31.292 iops : min= 32, max= 672, avg=628.21, stdev=147.50, samples=19 00:45:31.292 lat (msec) : 20=0.16%, 50=99.11%, 100=0.22%, 250=0.03%, 500=0.47% 00:45:31.292 cpu : usr=98.96%, sys=0.75%, ctx=17, majf=0, minf=14 00:45:31.292 IO depths : 1=5.9%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:45:31.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:31.292 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:31.292 issued rwts: total=6320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:31.292 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:31.292 filename0: (groupid=0, jobs=1): err= 0: pid=3529591: Tue Oct 1 16:03:09 2024 00:45:31.292 read: IOPS=634, BW=2536KiB/s (2597kB/s)(24.8MiB/10017msec) 00:45:31.292 slat (usec): min=5, max=126, avg=10.47, stdev=11.04 00:45:31.292 clat (msec): min=4, max=261, avg=25.15, stdev=15.15 00:45:31.292 lat (msec): min=4, max=261, avg=25.16, stdev=15.15 00:45:31.292 clat percentiles (msec): 00:45:31.292 | 1.00th=[ 23], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:45:31.292 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:45:31.292 | 70.00th=[ 24], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 25], 00:45:31.292 | 99.00th=[ 83], 99.50th=[ 163], 99.90th=[ 262], 99.95th=[ 262], 00:45:31.292 | 99.99th=[ 262] 00:45:31.292 bw ( KiB/s): min= 256, max= 2816, per=4.16%, avg=2534.40, stdev=550.31, samples=20 00:45:31.292 iops : min= 64, max= 704, avg=633.60, stdev=137.58, samples=20 00:45:31.292 lat (msec) : 10=0.50%, 20=0.03%, 50=98.46%, 100=0.25%, 250=0.50% 00:45:31.292 lat (msec) : 500=0.25% 00:45:31.292 cpu : usr=98.93%, sys=0.68%, ctx=49, majf=0, minf=34 00:45:31.292 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:31.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:45:31.292 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:31.292 issued rwts: total=6352,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:31.292 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:31.292 filename0: (groupid=0, jobs=1): err= 0: pid=3529592: Tue Oct 1 16:03:09 2024 00:45:31.292 read: IOPS=634, BW=2537KiB/s (2598kB/s)(24.8MiB/10010msec) 00:45:31.292 slat (usec): min=5, max=130, avg=37.86, stdev=21.25 00:45:31.292 clat (msec): min=9, max=490, avg=24.87, stdev=21.40 00:45:31.292 lat (msec): min=9, max=490, avg=24.91, stdev=21.40 00:45:31.292 clat percentiles (msec): 00:45:31.292 | 1.00th=[ 18], 5.00th=[ 23], 10.00th=[ 23], 20.00th=[ 24], 00:45:31.292 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:45:31.292 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 25], 95.00th=[ 25], 00:45:31.292 | 99.00th=[ 26], 99.50th=[ 109], 99.90th=[ 409], 99.95th=[ 409], 00:45:31.292 | 99.99th=[ 489] 00:45:31.292 bw ( KiB/s): min= 128, max= 2792, per=4.13%, avg=2518.32, stdev=592.18, samples=19 00:45:31.292 iops : min= 32, max= 698, avg=629.58, stdev=148.05, samples=19 00:45:31.292 lat (msec) : 10=0.25%, 20=0.91%, 50=98.33%, 250=0.25%, 500=0.25% 00:45:31.292 cpu : usr=98.74%, sys=0.83%, ctx=60, majf=0, minf=19 00:45:31.292 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:45:31.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:31.292 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:31.292 issued rwts: total=6349,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:31.292 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:31.292 filename1: (groupid=0, jobs=1): err= 0: pid=3529594: Tue Oct 1 16:03:09 2024 00:45:31.292 read: IOPS=633, BW=2533KiB/s (2593kB/s)(24.8MiB/10007msec) 00:45:31.292 slat (usec): min=5, max=131, avg=40.01, stdev=21.00 00:45:31.292 clat (msec): min=9, max=514, avg=24.88, stdev=24.85 00:45:31.292 lat 
(msec): min=9, max=514, avg=24.92, stdev=24.84 00:45:31.292 clat percentiles (msec): 00:45:31.292 | 1.00th=[ 23], 5.00th=[ 23], 10.00th=[ 23], 20.00th=[ 24], 00:45:31.292 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:45:31.292 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 25], 95.00th=[ 25], 00:45:31.292 | 99.00th=[ 26], 99.50th=[ 82], 99.90th=[ 514], 99.95th=[ 514], 00:45:31.292 | 99.99th=[ 514] 00:45:31.292 bw ( KiB/s): min= 2304, max= 2688, per=4.35%, avg=2652.44, stdev=96.24, samples=18 00:45:31.292 iops : min= 576, max= 672, avg=663.11, stdev=24.06, samples=18 00:45:31.292 lat (msec) : 10=0.25%, 20=0.51%, 50=98.74%, 100=0.25%, 750=0.25% 00:45:31.292 cpu : usr=98.96%, sys=0.74%, ctx=41, majf=0, minf=25 00:45:31.292 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:45:31.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:31.292 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:31.292 issued rwts: total=6336,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:31.292 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:31.292 filename1: (groupid=0, jobs=1): err= 0: pid=3529595: Tue Oct 1 16:03:09 2024 00:45:31.292 read: IOPS=633, BW=2532KiB/s (2593kB/s)(24.8MiB/10009msec) 00:45:31.292 slat (usec): min=5, max=131, avg=39.27, stdev=20.76 00:45:31.292 clat (msec): min=8, max=592, avg=24.93, stdev=25.40 00:45:31.292 lat (msec): min=8, max=592, avg=24.97, stdev=25.40 00:45:31.292 clat percentiles (msec): 00:45:31.292 | 1.00th=[ 18], 5.00th=[ 23], 10.00th=[ 24], 20.00th=[ 24], 00:45:31.292 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:45:31.292 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 25], 95.00th=[ 25], 00:45:31.292 | 99.00th=[ 30], 99.50th=[ 32], 99.90th=[ 518], 99.95th=[ 518], 00:45:31.292 | 99.99th=[ 592] 00:45:31.292 bw ( KiB/s): min= 2304, max= 2688, per=4.35%, avg=2652.44, stdev=96.24, samples=18 00:45:31.292 iops : min= 576, max= 672, 
avg=663.11, stdev=24.06, samples=18 00:45:31.292 lat (msec) : 10=0.28%, 20=1.20%, 50=98.04%, 100=0.22%, 750=0.25% 00:45:31.292 cpu : usr=98.75%, sys=0.77%, ctx=57, majf=0, minf=21 00:45:31.292 IO depths : 1=4.6%, 2=10.8%, 4=25.0%, 8=51.7%, 16=7.9%, 32=0.0%, >=64=0.0% 00:45:31.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:31.292 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:31.292 issued rwts: total=6336,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:31.292 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:31.292 filename1: (groupid=0, jobs=1): err= 0: pid=3529596: Tue Oct 1 16:03:09 2024 00:45:31.292 read: IOPS=623, BW=2492KiB/s (2552kB/s)(24.4MiB/10008msec) 00:45:31.292 slat (usec): min=5, max=141, avg=27.70, stdev=22.42 00:45:31.292 clat (msec): min=9, max=592, avg=25.46, stdev=25.75 00:45:31.292 lat (msec): min=9, max=592, avg=25.49, stdev=25.75 00:45:31.292 clat percentiles (msec): 00:45:31.292 | 1.00th=[ 15], 5.00th=[ 22], 10.00th=[ 23], 20.00th=[ 24], 00:45:31.292 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:45:31.292 | 70.00th=[ 24], 80.00th=[ 25], 90.00th=[ 27], 95.00th=[ 31], 00:45:31.292 | 99.00th=[ 38], 99.50th=[ 41], 99.90th=[ 518], 99.95th=[ 518], 00:45:31.292 | 99.99th=[ 592] 00:45:31.292 bw ( KiB/s): min= 2192, max= 2736, per=4.29%, avg=2613.33, stdev=119.98, samples=18 00:45:31.292 iops : min= 548, max= 684, avg=653.33, stdev=30.00, samples=18 00:45:31.292 lat (msec) : 10=0.26%, 20=3.70%, 50=95.56%, 100=0.22%, 750=0.26% 00:45:31.292 cpu : usr=99.09%, sys=0.62%, ctx=12, majf=0, minf=20 00:45:31.292 IO depths : 1=2.5%, 2=5.7%, 4=14.4%, 8=65.8%, 16=11.6%, 32=0.0%, >=64=0.0% 00:45:31.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:31.292 complete : 0=0.0%, 4=91.6%, 8=4.4%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:31.293 issued rwts: total=6236,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:31.293 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:45:31.293 filename1: (groupid=0, jobs=1): err= 0: pid=3529597: Tue Oct 1 16:03:09 2024 00:45:31.293 read: IOPS=632, BW=2530KiB/s (2591kB/s)(24.8MiB/10017msec) 00:45:31.293 slat (usec): min=5, max=119, avg=20.99, stdev=17.55 00:45:31.293 clat (msec): min=9, max=261, avg=25.13, stdev=15.36 00:45:31.293 lat (msec): min=9, max=261, avg=25.15, stdev=15.36 00:45:31.293 clat percentiles (msec): 00:45:31.293 | 1.00th=[ 23], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:45:31.293 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:45:31.293 | 70.00th=[ 24], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 25], 00:45:31.293 | 99.00th=[ 27], 99.50th=[ 163], 99.90th=[ 262], 99.95th=[ 262], 00:45:31.293 | 99.99th=[ 262] 00:45:31.293 bw ( KiB/s): min= 256, max= 2688, per=4.15%, avg=2528.00, stdev=547.60, samples=20 00:45:31.293 iops : min= 64, max= 672, avg=632.00, stdev=136.90, samples=20 00:45:31.293 lat (msec) : 10=0.25%, 20=0.06%, 50=98.71%, 100=0.22%, 250=0.51% 00:45:31.293 lat (msec) : 500=0.25% 00:45:31.293 cpu : usr=97.73%, sys=1.51%, ctx=503, majf=0, minf=24 00:45:31.293 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:31.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:31.293 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:31.293 issued rwts: total=6336,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:31.293 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:31.293 filename1: (groupid=0, jobs=1): err= 0: pid=3529598: Tue Oct 1 16:03:09 2024 00:45:31.293 read: IOPS=633, BW=2533KiB/s (2594kB/s)(24.8MiB/10006msec) 00:45:31.293 slat (nsec): min=5565, max=69147, avg=12624.46, stdev=9299.13 00:45:31.293 clat (msec): min=5, max=285, avg=25.17, stdev=16.22 00:45:31.293 lat (msec): min=5, max=285, avg=25.18, stdev=16.22 00:45:31.293 clat percentiles (msec): 00:45:31.293 | 1.00th=[ 21], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 
00:45:31.293 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:45:31.293 | 70.00th=[ 24], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 25], 00:45:31.293 | 99.00th=[ 27], 99.50th=[ 218], 99.90th=[ 222], 99.95th=[ 222], 00:45:31.293 | 99.99th=[ 284] 00:45:31.293 bw ( KiB/s): min= 240, max= 2688, per=4.14%, avg=2519.58, stdev=570.55, samples=19 00:45:31.293 iops : min= 60, max= 672, avg=629.89, stdev=142.64, samples=19 00:45:31.293 lat (msec) : 10=0.51%, 20=0.06%, 50=98.67%, 250=0.73%, 500=0.03% 00:45:31.293 cpu : usr=98.64%, sys=0.91%, ctx=113, majf=0, minf=24 00:45:31.293 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:45:31.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:31.293 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:31.293 issued rwts: total=6336,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:31.293 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:31.293 filename1: (groupid=0, jobs=1): err= 0: pid=3529599: Tue Oct 1 16:03:09 2024 00:45:31.293 read: IOPS=631, BW=2524KiB/s (2585kB/s)(24.7MiB/10014msec) 00:45:31.293 slat (usec): min=5, max=132, avg=27.02, stdev=20.14 00:45:31.293 clat (msec): min=15, max=381, avg=25.10, stdev=18.27 00:45:31.293 lat (msec): min=15, max=381, avg=25.13, stdev=18.26 00:45:31.293 clat percentiles (msec): 00:45:31.293 | 1.00th=[ 23], 5.00th=[ 23], 10.00th=[ 24], 20.00th=[ 24], 00:45:31.293 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:45:31.293 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 25], 95.00th=[ 25], 00:45:31.293 | 99.00th=[ 26], 99.50th=[ 236], 99.90th=[ 284], 99.95th=[ 284], 00:45:31.293 | 99.99th=[ 384] 00:45:31.293 bw ( KiB/s): min= 128, max= 2688, per=4.12%, avg=2512.84, stdev=589.99, samples=19 00:45:31.293 iops : min= 32, max= 672, avg=628.21, stdev=147.50, samples=19 00:45:31.293 lat (msec) : 20=0.06%, 50=99.21%, 100=0.22%, 250=0.03%, 500=0.47% 00:45:31.293 cpu : usr=98.83%, sys=0.77%, 
ctx=60, majf=0, minf=18 00:45:31.293 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:31.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:31.293 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:31.293 issued rwts: total=6320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:31.293 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:31.293 filename1: (groupid=0, jobs=1): err= 0: pid=3529600: Tue Oct 1 16:03:09 2024 00:45:31.293 read: IOPS=630, BW=2521KiB/s (2581kB/s)(24.6MiB/10006msec) 00:45:31.293 slat (usec): min=5, max=122, avg=25.24, stdev=19.94 00:45:31.293 clat (msec): min=7, max=338, avg=25.18, stdev=20.30 00:45:31.293 lat (msec): min=7, max=338, avg=25.21, stdev=20.30 00:45:31.293 clat percentiles (msec): 00:45:31.293 | 1.00th=[ 16], 5.00th=[ 20], 10.00th=[ 23], 20.00th=[ 24], 00:45:31.293 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:45:31.293 | 70.00th=[ 24], 80.00th=[ 25], 90.00th=[ 26], 95.00th=[ 30], 00:45:31.293 | 99.00th=[ 33], 99.50th=[ 271], 99.90th=[ 338], 99.95th=[ 338], 00:45:31.293 | 99.99th=[ 338] 00:45:31.293 bw ( KiB/s): min= 128, max= 2768, per=4.12%, avg=2507.79, stdev=589.47, samples=19 00:45:31.293 iops : min= 32, max= 692, avg=626.95, stdev=147.37, samples=19 00:45:31.293 lat (msec) : 10=0.16%, 20=4.90%, 50=94.43%, 500=0.51% 00:45:31.293 cpu : usr=98.72%, sys=0.98%, ctx=12, majf=0, minf=28 00:45:31.293 IO depths : 1=3.5%, 2=6.9%, 4=14.9%, 8=64.0%, 16=10.7%, 32=0.0%, >=64=0.0% 00:45:31.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:31.293 complete : 0=0.0%, 4=91.6%, 8=4.3%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:31.293 issued rwts: total=6306,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:31.293 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:31.293 filename1: (groupid=0, jobs=1): err= 0: pid=3529601: Tue Oct 1 16:03:09 2024 00:45:31.293 read: IOPS=633, BW=2535KiB/s 
(2596kB/s)(24.8MiB/10016msec) 00:45:31.293 slat (usec): min=5, max=132, avg=25.18, stdev=22.45 00:45:31.293 clat (msec): min=12, max=334, avg=25.03, stdev=18.18 00:45:31.293 lat (msec): min=12, max=334, avg=25.05, stdev=18.18 00:45:31.293 clat percentiles (msec): 00:45:31.293 | 1.00th=[ 18], 5.00th=[ 23], 10.00th=[ 23], 20.00th=[ 24], 00:45:31.293 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:45:31.293 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 25], 95.00th=[ 25], 00:45:31.293 | 99.00th=[ 29], 99.50th=[ 262], 99.90th=[ 284], 99.95th=[ 284], 00:45:31.293 | 99.99th=[ 334] 00:45:31.293 bw ( KiB/s): min= 128, max= 2816, per=4.15%, avg=2530.90, stdev=579.35, samples=20 00:45:31.293 iops : min= 32, max= 704, avg=632.70, stdev=144.83, samples=20 00:45:31.293 lat (msec) : 20=2.08%, 50=97.20%, 100=0.22%, 500=0.50% 00:45:31.293 cpu : usr=98.80%, sys=0.88%, ctx=57, majf=0, minf=26 00:45:31.293 IO depths : 1=5.6%, 2=11.4%, 4=23.6%, 8=52.5%, 16=6.9%, 32=0.0%, >=64=0.0% 00:45:31.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:31.293 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:31.293 issued rwts: total=6348,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:31.293 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:31.293 filename2: (groupid=0, jobs=1): err= 0: pid=3529602: Tue Oct 1 16:03:09 2024 00:45:31.293 read: IOPS=633, BW=2532KiB/s (2593kB/s)(24.8MiB/10009msec) 00:45:31.293 slat (usec): min=5, max=135, avg=38.44, stdev=20.91 00:45:31.293 clat (msec): min=9, max=516, avg=24.91, stdev=24.92 00:45:31.293 lat (msec): min=9, max=516, avg=24.94, stdev=24.92 00:45:31.293 clat percentiles (msec): 00:45:31.293 | 1.00th=[ 21], 5.00th=[ 23], 10.00th=[ 23], 20.00th=[ 24], 00:45:31.293 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:45:31.293 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 25], 95.00th=[ 25], 00:45:31.293 | 99.00th=[ 26], 99.50th=[ 81], 99.90th=[ 518], 99.95th=[ 518], 
00:45:31.293 | 99.99th=[ 518] 00:45:31.293 bw ( KiB/s): min= 2304, max= 2688, per=4.35%, avg=2652.44, stdev=96.24, samples=18 00:45:31.293 iops : min= 576, max= 672, avg=663.11, stdev=24.06, samples=18 00:45:31.293 lat (msec) : 10=0.25%, 20=0.76%, 50=98.48%, 100=0.25%, 750=0.25% 00:45:31.293 cpu : usr=99.07%, sys=0.63%, ctx=13, majf=0, minf=30 00:45:31.293 IO depths : 1=5.9%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:45:31.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:31.293 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:31.293 issued rwts: total=6336,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:31.293 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:31.293 filename2: (groupid=0, jobs=1): err= 0: pid=3529603: Tue Oct 1 16:03:09 2024 00:45:31.293 read: IOPS=632, BW=2532KiB/s (2592kB/s)(24.7MiB/10008msec) 00:45:31.293 slat (usec): min=5, max=135, avg=35.31, stdev=21.60 00:45:31.293 clat (msec): min=5, max=592, avg=24.97, stdev=28.62 00:45:31.293 lat (msec): min=5, max=592, avg=25.00, stdev=28.62 00:45:31.293 clat percentiles (msec): 00:45:31.293 | 1.00th=[ 14], 5.00th=[ 23], 10.00th=[ 23], 20.00th=[ 24], 00:45:31.293 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:45:31.293 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 25], 95.00th=[ 25], 00:45:31.293 | 99.00th=[ 33], 99.50th=[ 36], 99.90th=[ 592], 99.95th=[ 592], 00:45:31.293 | 99.99th=[ 592] 00:45:31.293 bw ( KiB/s): min= 2192, max= 2848, per=4.35%, avg=2651.56, stdev=128.92, samples=18 00:45:31.293 iops : min= 548, max= 712, avg=662.89, stdev=32.23, samples=18 00:45:31.293 lat (msec) : 10=0.33%, 20=2.24%, 50=97.17%, 750=0.25% 00:45:31.293 cpu : usr=98.95%, sys=0.73%, ctx=54, majf=0, minf=20 00:45:31.293 IO depths : 1=4.1%, 2=9.5%, 4=22.7%, 8=55.0%, 16=8.7%, 32=0.0%, >=64=0.0% 00:45:31.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:31.293 complete : 0=0.0%, 4=93.6%, 
8=0.9%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:31.293 issued rwts: total=6334,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:31.293 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:31.293 filename2: (groupid=0, jobs=1): err= 0: pid=3529604: Tue Oct 1 16:03:09 2024 00:45:31.293 read: IOPS=629, BW=2518KiB/s (2579kB/s)(24.6MiB/10010msec) 00:45:31.293 slat (usec): min=5, max=146, avg=20.12, stdev=21.70 00:45:31.293 clat (msec): min=8, max=517, avg=25.34, stdev=25.15 00:45:31.293 lat (msec): min=8, max=517, avg=25.36, stdev=25.15 00:45:31.293 clat percentiles (msec): 00:45:31.293 | 1.00th=[ 15], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:45:31.293 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:45:31.293 | 70.00th=[ 24], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 26], 00:45:31.293 | 99.00th=[ 35], 99.50th=[ 89], 99.90th=[ 518], 99.95th=[ 518], 00:45:31.293 | 99.99th=[ 518] 00:45:31.293 bw ( KiB/s): min= 2192, max= 2688, per=4.33%, avg=2637.33, stdev=116.16, samples=18 00:45:31.293 iops : min= 548, max= 672, avg=659.33, stdev=29.04, samples=18 00:45:31.294 lat (msec) : 10=0.32%, 20=1.30%, 50=97.87%, 100=0.16%, 250=0.10% 00:45:31.294 lat (msec) : 750=0.25% 00:45:31.294 cpu : usr=98.72%, sys=0.81%, ctx=67, majf=0, minf=34 00:45:31.294 IO depths : 1=0.1%, 2=0.2%, 4=0.9%, 8=80.7%, 16=18.1%, 32=0.0%, >=64=0.0% 00:45:31.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:31.294 complete : 0=0.0%, 4=89.5%, 8=10.0%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:31.294 issued rwts: total=6302,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:31.294 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:31.294 filename2: (groupid=0, jobs=1): err= 0: pid=3529606: Tue Oct 1 16:03:09 2024 00:45:31.294 read: IOPS=631, BW=2528KiB/s (2589kB/s)(24.7MiB/10010msec) 00:45:31.294 slat (usec): min=5, max=129, avg=31.61, stdev=20.60 00:45:31.294 clat (msec): min=9, max=408, avg=25.04, stdev=21.01 00:45:31.294 lat (msec): min=9, 
max=408, avg=25.08, stdev=21.01 00:45:31.294 clat percentiles (msec): 00:45:31.294 | 1.00th=[ 23], 5.00th=[ 23], 10.00th=[ 24], 20.00th=[ 24], 00:45:31.294 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:45:31.294 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 25], 95.00th=[ 25], 00:45:31.294 | 99.00th=[ 27], 99.50th=[ 186], 99.90th=[ 409], 99.95th=[ 409], 00:45:31.294 | 99.99th=[ 409] 00:45:31.294 bw ( KiB/s): min= 128, max= 2688, per=4.12%, avg=2508.63, stdev=588.96, samples=19 00:45:31.294 iops : min= 32, max= 672, avg=627.16, stdev=147.24, samples=19 00:45:31.294 lat (msec) : 10=0.25%, 20=0.54%, 50=98.70%, 250=0.25%, 500=0.25% 00:45:31.294 cpu : usr=99.09%, sys=0.57%, ctx=84, majf=0, minf=24 00:45:31.294 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:45:31.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:31.294 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:31.294 issued rwts: total=6326,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:31.294 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:31.294 filename2: (groupid=0, jobs=1): err= 0: pid=3529607: Tue Oct 1 16:03:09 2024 00:45:31.294 read: IOPS=636, BW=2545KiB/s (2606kB/s)(24.9MiB/10019msec) 00:45:31.294 slat (usec): min=5, max=125, avg=22.24, stdev=22.04 00:45:31.294 clat (msec): min=5, max=285, avg=24.96, stdev=18.35 00:45:31.294 lat (msec): min=5, max=285, avg=24.98, stdev=18.35 00:45:31.294 clat percentiles (msec): 00:45:31.294 | 1.00th=[ 13], 5.00th=[ 17], 10.00th=[ 21], 20.00th=[ 23], 00:45:31.294 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:45:31.294 | 70.00th=[ 24], 80.00th=[ 25], 90.00th=[ 26], 95.00th=[ 31], 00:45:31.294 | 99.00th=[ 43], 99.50th=[ 262], 99.90th=[ 288], 99.95th=[ 288], 00:45:31.294 | 99.99th=[ 288] 00:45:31.294 bw ( KiB/s): min= 128, max= 2944, per=4.18%, avg=2546.00, stdev=589.22, samples=20 00:45:31.294 iops : min= 32, max= 736, avg=636.50, 
stdev=147.30, samples=20 00:45:31.294 lat (msec) : 10=0.60%, 20=9.41%, 50=89.24%, 100=0.25%, 500=0.50% 00:45:31.294 cpu : usr=98.95%, sys=0.74%, ctx=53, majf=0, minf=25 00:45:31.294 IO depths : 1=2.3%, 2=4.7%, 4=11.8%, 8=69.3%, 16=11.9%, 32=0.0%, >=64=0.0% 00:45:31.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:31.294 complete : 0=0.0%, 4=90.9%, 8=5.1%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:31.294 issued rwts: total=6375,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:31.294 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:31.294 filename2: (groupid=0, jobs=1): err= 0: pid=3529608: Tue Oct 1 16:03:09 2024 00:45:31.294 read: IOPS=694, BW=2779KiB/s (2846kB/s)(27.2MiB/10016msec) 00:45:31.294 slat (usec): min=5, max=128, avg=15.67, stdev=19.27 00:45:31.294 clat (msec): min=4, max=261, avg=22.91, stdev=15.40 00:45:31.294 lat (msec): min=4, max=261, avg=22.93, stdev=15.40 00:45:31.294 clat percentiles (msec): 00:45:31.294 | 1.00th=[ 6], 5.00th=[ 13], 10.00th=[ 15], 20.00th=[ 20], 00:45:31.294 | 30.00th=[ 23], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:45:31.294 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 25], 95.00th=[ 25], 00:45:31.294 | 99.00th=[ 40], 99.50th=[ 131], 99.90th=[ 262], 99.95th=[ 262], 00:45:31.294 | 99.99th=[ 262] 00:45:31.294 bw ( KiB/s): min= 256, max= 3568, per=4.56%, avg=2777.20, stdev=687.36, samples=20 00:45:31.294 iops : min= 64, max= 892, avg=694.30, stdev=171.84, samples=20 00:45:31.294 lat (msec) : 10=3.38%, 20=16.78%, 50=78.95%, 100=0.20%, 250=0.49% 00:45:31.294 lat (msec) : 500=0.20% 00:45:31.294 cpu : usr=98.89%, sys=0.80%, ctx=27, majf=0, minf=26 00:45:31.294 IO depths : 1=3.6%, 2=7.5%, 4=17.5%, 8=62.2%, 16=9.2%, 32=0.0%, >=64=0.0% 00:45:31.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:31.294 complete : 0=0.0%, 4=92.0%, 8=2.7%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:31.294 issued rwts: total=6959,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:31.294 
latency : target=0, window=0, percentile=100.00%, depth=16 00:45:31.294 filename2: (groupid=0, jobs=1): err= 0: pid=3529609: Tue Oct 1 16:03:09 2024 00:45:31.294 read: IOPS=631, BW=2526KiB/s (2586kB/s)(24.7MiB/10009msec) 00:45:31.294 slat (usec): min=5, max=167, avg=31.12, stdev=30.80 00:45:31.294 clat (msec): min=15, max=379, avg=25.05, stdev=18.03 00:45:31.294 lat (msec): min=15, max=379, avg=25.08, stdev=18.03 00:45:31.294 clat percentiles (msec): 00:45:31.294 | 1.00th=[ 23], 5.00th=[ 23], 10.00th=[ 23], 20.00th=[ 24], 00:45:31.294 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:45:31.294 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 25], 95.00th=[ 25], 00:45:31.294 | 99.00th=[ 26], 99.50th=[ 230], 99.90th=[ 275], 99.95th=[ 275], 00:45:31.294 | 99.99th=[ 380] 00:45:31.294 bw ( KiB/s): min= 128, max= 2688, per=4.14%, avg=2521.60, stdev=571.08, samples=20 00:45:31.294 iops : min= 32, max= 672, avg=630.40, stdev=142.77, samples=20 00:45:31.294 lat (msec) : 20=0.32%, 50=98.96%, 100=0.22%, 250=0.03%, 500=0.47% 00:45:31.294 cpu : usr=98.90%, sys=0.71%, ctx=30, majf=0, minf=28 00:45:31.294 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:31.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:31.294 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:31.294 issued rwts: total=6320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:31.294 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:31.294 filename2: (groupid=0, jobs=1): err= 0: pid=3529610: Tue Oct 1 16:03:09 2024 00:45:31.294 read: IOPS=632, BW=2529KiB/s (2589kB/s)(24.8MiB/10023msec) 00:45:31.294 slat (nsec): min=5564, max=72890, avg=16870.35, stdev=11976.67 00:45:31.294 clat (msec): min=7, max=222, avg=25.16, stdev=16.20 00:45:31.294 lat (msec): min=7, max=222, avg=25.17, stdev=16.20 00:45:31.294 clat percentiles (msec): 00:45:31.294 | 1.00th=[ 23], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:45:31.294 | 
30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:45:31.294 | 70.00th=[ 24], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 25], 00:45:31.294 | 99.00th=[ 27], 99.50th=[ 222], 99.90th=[ 222], 99.95th=[ 222], 00:45:31.294 | 99.99th=[ 222] 00:45:31.294 bw ( KiB/s): min= 255, max= 2816, per=4.15%, avg=2529.20, stdev=555.90, samples=20 00:45:31.294 iops : min= 63, max= 704, avg=632.20, stdev=139.12, samples=20 00:45:31.294 lat (msec) : 10=0.25%, 20=0.03%, 50=98.96%, 250=0.76% 00:45:31.294 cpu : usr=99.06%, sys=0.63%, ctx=56, majf=0, minf=19 00:45:31.294 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:31.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:31.294 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:31.294 issued rwts: total=6336,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:31.294 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:31.294 00:45:31.294 Run status group 0 (all jobs): 00:45:31.294 READ: bw=59.5MiB/s (62.4MB/s), 2492KiB/s-2779KiB/s (2552kB/s-2846kB/s), io=596MiB (625MB), run=10006-10026msec 00:45:31.294 16:03:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:45:31.294 16:03:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:45:31.294 16:03:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:31.294 16:03:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:31.294 16:03:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:45:31.294 16:03:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:31.294 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:31.294 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:31.294 16:03:09 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:31.294 16:03:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:31.294 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:31.294 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:31.294 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:31.294 16:03:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:31.294 16:03:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:45:31.294 16:03:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:45:31.294 16:03:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:31.294 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:31.294 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:31.294 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:31.294 16:03:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:45:31.294 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:31.294 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:31.294 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:31.294 16:03:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:31.294 16:03:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:45:31.294 16:03:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:45:31.294 16:03:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 
00:45:31.294 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:31.294 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:31.294 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:31.294 16:03:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:45:31.294 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:31.294 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:31.294 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:31.294 16:03:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:45:31.294 16:03:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:31.295 16:03:09 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:31.295 bdev_null0 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:31.295 [2024-10-01 16:03:09.347952] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=1 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:31.295 bdev_null1 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:45:31.295 16:03:09 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:45:31.295 { 00:45:31.295 "params": { 00:45:31.295 "name": "Nvme$subsystem", 00:45:31.295 "trtype": "$TEST_TRANSPORT", 00:45:31.295 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:31.295 "adrfam": "ipv4", 00:45:31.295 "trsvcid": "$NVMF_PORT", 00:45:31.295 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:31.295 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:31.295 "hdgst": ${hdgst:-false}, 00:45:31.295 "ddgst": ${ddgst:-false} 00:45:31.295 }, 00:45:31.295 "method": "bdev_nvme_attach_controller" 00:45:31.295 } 00:45:31.295 EOF 00:45:31.295 )") 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1339 -- # local sanitizers 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:45:31.295 { 00:45:31.295 "params": { 00:45:31.295 "name": "Nvme$subsystem", 00:45:31.295 "trtype": "$TEST_TRANSPORT", 00:45:31.295 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:31.295 "adrfam": "ipv4", 00:45:31.295 "trsvcid": "$NVMF_PORT", 00:45:31.295 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:31.295 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:31.295 "hdgst": ${hdgst:-false}, 00:45:31.295 "ddgst": 
${ddgst:-false} 00:45:31.295 }, 00:45:31.295 "method": "bdev_nvme_attach_controller" 00:45:31.295 } 00:45:31.295 EOF 00:45:31.295 )") 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:45:31.295 "params": { 00:45:31.295 "name": "Nvme0", 00:45:31.295 "trtype": "tcp", 00:45:31.295 "traddr": "10.0.0.2", 00:45:31.295 "adrfam": "ipv4", 00:45:31.295 "trsvcid": "4420", 00:45:31.295 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:31.295 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:31.295 "hdgst": false, 00:45:31.295 "ddgst": false 00:45:31.295 }, 00:45:31.295 "method": "bdev_nvme_attach_controller" 00:45:31.295 },{ 00:45:31.295 "params": { 00:45:31.295 "name": "Nvme1", 00:45:31.295 "trtype": "tcp", 00:45:31.295 "traddr": "10.0.0.2", 00:45:31.295 "adrfam": "ipv4", 00:45:31.295 "trsvcid": "4420", 00:45:31.295 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:31.295 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:31.295 "hdgst": false, 00:45:31.295 "ddgst": false 00:45:31.295 }, 00:45:31.295 "method": "bdev_nvme_attach_controller" 00:45:31.295 }' 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:31.295 16:03:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:31.295 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:45:31.295 ... 00:45:31.295 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:45:31.295 ... 
00:45:31.296 fio-3.35 00:45:31.296 Starting 4 threads 00:45:36.587 00:45:36.587 filename0: (groupid=0, jobs=1): err= 0: pid=3532365: Tue Oct 1 16:03:15 2024 00:45:36.587 read: IOPS=2908, BW=22.7MiB/s (23.8MB/s)(114MiB/5001msec) 00:45:36.587 slat (nsec): min=5400, max=35227, avg=6330.59, stdev=1618.23 00:45:36.587 clat (usec): min=1660, max=45284, avg=2733.89, stdev=1024.43 00:45:36.587 lat (usec): min=1666, max=45316, avg=2740.22, stdev=1024.57 00:45:36.587 clat percentiles (usec): 00:45:36.587 | 1.00th=[ 2147], 5.00th=[ 2442], 10.00th=[ 2507], 20.00th=[ 2606], 00:45:36.587 | 30.00th=[ 2671], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2704], 00:45:36.587 | 70.00th=[ 2704], 80.00th=[ 2802], 90.00th=[ 2933], 95.00th=[ 2966], 00:45:36.587 | 99.00th=[ 3851], 99.50th=[ 4080], 99.90th=[ 4490], 99.95th=[45351], 00:45:36.587 | 99.99th=[45351] 00:45:36.587 bw ( KiB/s): min=21344, max=23728, per=24.56%, avg=23256.89, stdev=735.40, samples=9 00:45:36.588 iops : min= 2668, max= 2966, avg=2907.11, stdev=91.92, samples=9 00:45:36.588 lat (msec) : 2=0.32%, 4=99.07%, 10=0.55%, 50=0.06% 00:45:36.588 cpu : usr=96.94%, sys=2.80%, ctx=6, majf=0, minf=36 00:45:36.588 IO depths : 1=0.1%, 2=0.1%, 4=71.7%, 8=28.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:36.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:36.588 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:36.588 issued rwts: total=14543,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:36.588 latency : target=0, window=0, percentile=100.00%, depth=8 00:45:36.588 filename0: (groupid=0, jobs=1): err= 0: pid=3532366: Tue Oct 1 16:03:15 2024 00:45:36.588 read: IOPS=3028, BW=23.7MiB/s (24.8MB/s)(118MiB/5001msec) 00:45:36.588 slat (nsec): min=5407, max=61514, avg=6026.81, stdev=1561.07 00:45:36.588 clat (usec): min=617, max=4431, avg=2625.64, stdev=383.96 00:45:36.588 lat (usec): min=623, max=4437, avg=2631.67, stdev=383.86 00:45:36.588 clat percentiles (usec): 00:45:36.588 | 1.00th=[ 
1860], 5.00th=[ 2057], 10.00th=[ 2212], 20.00th=[ 2343], 00:45:36.588 | 30.00th=[ 2442], 40.00th=[ 2606], 50.00th=[ 2671], 60.00th=[ 2671], 00:45:36.588 | 70.00th=[ 2704], 80.00th=[ 2704], 90.00th=[ 3163], 95.00th=[ 3458], 00:45:36.588 | 99.00th=[ 3884], 99.50th=[ 4015], 99.90th=[ 4293], 99.95th=[ 4424], 00:45:36.588 | 99.99th=[ 4424] 00:45:36.588 bw ( KiB/s): min=23680, max=24688, per=25.58%, avg=24224.11, stdev=428.56, samples=9 00:45:36.588 iops : min= 2960, max= 3086, avg=3028.00, stdev=53.55, samples=9 00:45:36.588 lat (usec) : 750=0.02%, 1000=0.02% 00:45:36.588 lat (msec) : 2=2.67%, 4=96.51%, 10=0.77% 00:45:36.588 cpu : usr=97.06%, sys=2.70%, ctx=7, majf=0, minf=88 00:45:36.588 IO depths : 1=0.1%, 2=0.3%, 4=71.2%, 8=28.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:36.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:36.588 complete : 0=0.0%, 4=93.4%, 8=6.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:36.588 issued rwts: total=15145,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:36.588 latency : target=0, window=0, percentile=100.00%, depth=8 00:45:36.588 filename1: (groupid=0, jobs=1): err= 0: pid=3532367: Tue Oct 1 16:03:15 2024 00:45:36.588 read: IOPS=2934, BW=22.9MiB/s (24.0MB/s)(115MiB/5003msec) 00:45:36.588 slat (nsec): min=5467, max=58637, avg=8497.14, stdev=2929.33 00:45:36.588 clat (usec): min=1063, max=4498, avg=2702.86, stdev=244.60 00:45:36.588 lat (usec): min=1078, max=4503, avg=2711.35, stdev=244.36 00:45:36.588 clat percentiles (usec): 00:45:36.588 | 1.00th=[ 2008], 5.00th=[ 2442], 10.00th=[ 2507], 20.00th=[ 2606], 00:45:36.588 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2704], 00:45:36.588 | 70.00th=[ 2704], 80.00th=[ 2835], 90.00th=[ 2933], 95.00th=[ 3064], 00:45:36.588 | 99.00th=[ 3556], 99.50th=[ 3818], 99.90th=[ 4228], 99.95th=[ 4293], 00:45:36.588 | 99.99th=[ 4490] 00:45:36.588 bw ( KiB/s): min=22992, max=24032, per=24.81%, avg=23493.33, stdev=311.79, samples=9 00:45:36.588 iops : min= 2874, max= 3004, 
avg=2936.67, stdev=38.97, samples=9 00:45:36.588 lat (msec) : 2=0.97%, 4=98.78%, 10=0.25% 00:45:36.588 cpu : usr=96.78%, sys=2.94%, ctx=6, majf=0, minf=31 00:45:36.588 IO depths : 1=0.1%, 2=0.1%, 4=72.2%, 8=27.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:36.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:36.588 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:36.588 issued rwts: total=14680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:36.588 latency : target=0, window=0, percentile=100.00%, depth=8 00:45:36.588 filename1: (groupid=0, jobs=1): err= 0: pid=3532368: Tue Oct 1 16:03:15 2024 00:45:36.588 read: IOPS=2969, BW=23.2MiB/s (24.3MB/s)(116MiB/5001msec) 00:45:36.588 slat (nsec): min=7877, max=60880, avg=8948.54, stdev=2527.27 00:45:36.588 clat (usec): min=1181, max=4395, avg=2671.60, stdev=212.79 00:45:36.588 lat (usec): min=1189, max=4403, avg=2680.54, stdev=212.74 00:45:36.588 clat percentiles (usec): 00:45:36.588 | 1.00th=[ 2008], 5.00th=[ 2343], 10.00th=[ 2474], 20.00th=[ 2540], 00:45:36.588 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2704], 00:45:36.588 | 70.00th=[ 2704], 80.00th=[ 2769], 90.00th=[ 2900], 95.00th=[ 2933], 00:45:36.588 | 99.00th=[ 3392], 99.50th=[ 3556], 99.90th=[ 4047], 99.95th=[ 4113], 00:45:36.588 | 99.99th=[ 4359] 00:45:36.588 bw ( KiB/s): min=23552, max=24032, per=25.07%, avg=23735.11, stdev=139.05, samples=9 00:45:36.588 iops : min= 2944, max= 3004, avg=2966.89, stdev=17.38, samples=9 00:45:36.588 lat (msec) : 2=0.99%, 4=98.86%, 10=0.15% 00:45:36.588 cpu : usr=96.74%, sys=3.00%, ctx=8, majf=0, minf=31 00:45:36.588 IO depths : 1=0.1%, 2=0.2%, 4=69.5%, 8=30.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:36.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:36.588 complete : 0=0.0%, 4=94.6%, 8=5.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:36.588 issued rwts: total=14851,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:36.588 latency : target=0, 
window=0, percentile=100.00%, depth=8 00:45:36.588 00:45:36.588 Run status group 0 (all jobs): 00:45:36.588 READ: bw=92.5MiB/s (97.0MB/s), 22.7MiB/s-23.7MiB/s (23.8MB/s-24.8MB/s), io=463MiB (485MB), run=5001-5003msec 00:45:36.588 16:03:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:45:36.588 16:03:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:45:36.588 16:03:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:36.588 16:03:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:36.588 16:03:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:45:36.588 16:03:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:36.588 16:03:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:36.588 16:03:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:36.588 16:03:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:36.588 16:03:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:36.588 16:03:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:36.588 16:03:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:36.588 16:03:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:36.588 16:03:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:36.588 16:03:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:45:36.588 16:03:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:45:36.588 16:03:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:36.588 16:03:15 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:45:36.588 16:03:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:36.588 16:03:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:36.588 16:03:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:45:36.588 16:03:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:36.588 16:03:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:36.588 16:03:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:36.588 00:45:36.588 real 0m24.463s 00:45:36.588 user 5m14.527s 00:45:36.588 sys 0m4.369s 00:45:36.588 16:03:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:36.588 16:03:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:36.588 ************************************ 00:45:36.588 END TEST fio_dif_rand_params 00:45:36.588 ************************************ 00:45:36.588 16:03:15 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:45:36.588 16:03:15 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:45:36.588 16:03:15 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:45:36.588 16:03:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:36.588 ************************************ 00:45:36.588 START TEST fio_dif_digest 00:45:36.588 ************************************ 00:45:36.588 16:03:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:45:36.588 16:03:15 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:45:36.588 16:03:15 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:45:36.588 16:03:15 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:45:36.588 16:03:15 
nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:45:36.588 16:03:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:45:36.588 16:03:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:45:36.588 16:03:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:45:36.588 16:03:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:45:36.588 16:03:15 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:45:36.588 16:03:15 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:45:36.588 16:03:15 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:45:36.588 16:03:15 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:45:36.588 16:03:15 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:45:36.588 16:03:15 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:45:36.588 16:03:15 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:45:36.588 16:03:15 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:45:36.588 16:03:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:36.588 16:03:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:36.588 bdev_null0 00:45:36.588 16:03:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:36.588 16:03:15 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:36.588 16:03:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:36.588 16:03:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:36.588 16:03:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:36.588 16:03:15 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:36.588 16:03:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:36.588 16:03:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:36.588 16:03:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:36.588 16:03:15 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:36.588 16:03:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:36.588 16:03:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:36.588 [2024-10-01 16:03:15.787812] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:36.589 16:03:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:36.589 16:03:15 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:45:36.589 16:03:15 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:45:36.589 16:03:15 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:45:36.589 16:03:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # config=() 00:45:36.589 16:03:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # local subsystem config 00:45:36.589 16:03:15 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:36.589 16:03:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:45:36.589 16:03:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:36.589 16:03:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:45:36.589 { 00:45:36.589 "params": { 00:45:36.589 
"name": "Nvme$subsystem", 00:45:36.589 "trtype": "$TEST_TRANSPORT", 00:45:36.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:36.589 "adrfam": "ipv4", 00:45:36.589 "trsvcid": "$NVMF_PORT", 00:45:36.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:36.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:36.589 "hdgst": ${hdgst:-false}, 00:45:36.589 "ddgst": ${ddgst:-false} 00:45:36.589 }, 00:45:36.589 "method": "bdev_nvme_attach_controller" 00:45:36.589 } 00:45:36.589 EOF 00:45:36.589 )") 00:45:36.589 16:03:15 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:45:36.589 16:03:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:45:36.589 16:03:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:36.589 16:03:15 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:45:36.589 16:03:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:45:36.589 16:03:15 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:45:36.589 16:03:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:36.589 16:03:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:45:36.589 16:03:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:45:36.589 16:03:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:45:36.589 16:03:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@578 -- # cat 00:45:36.589 16:03:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:36.589 16:03:15 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:45:36.589 16:03:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:45:36.589 
16:03:15 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:45:36.589 16:03:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:45:36.589 16:03:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # jq . 00:45:36.589 16:03:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@581 -- # IFS=, 00:45:36.589 16:03:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:45:36.589 "params": { 00:45:36.589 "name": "Nvme0", 00:45:36.589 "trtype": "tcp", 00:45:36.589 "traddr": "10.0.0.2", 00:45:36.589 "adrfam": "ipv4", 00:45:36.589 "trsvcid": "4420", 00:45:36.589 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:36.589 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:36.589 "hdgst": true, 00:45:36.589 "ddgst": true 00:45:36.589 }, 00:45:36.589 "method": "bdev_nvme_attach_controller" 00:45:36.589 }' 00:45:36.589 16:03:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:45:36.589 16:03:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:45:36.589 16:03:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:45:36.589 16:03:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:36.589 16:03:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:45:36.589 16:03:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:45:36.589 16:03:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:45:36.589 16:03:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:45:36.589 16:03:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:36.589 16:03:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:36.849 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:45:36.850 ... 00:45:36.850 fio-3.35 00:45:36.850 Starting 3 threads 00:45:49.088 00:45:49.088 filename0: (groupid=0, jobs=1): err= 0: pid=3533611: Tue Oct 1 16:03:26 2024 00:45:49.088 read: IOPS=320, BW=40.1MiB/s (42.0MB/s)(403MiB/10047msec) 00:45:49.088 slat (nsec): min=5800, max=40019, avg=6607.35, stdev=1514.59 00:45:49.088 clat (usec): min=6520, max=50862, avg=9338.87, stdev=1751.67 00:45:49.088 lat (usec): min=6527, max=50868, avg=9345.48, stdev=1751.65 00:45:49.088 clat percentiles (usec): 00:45:49.088 | 1.00th=[ 7504], 5.00th=[ 8094], 10.00th=[ 8356], 20.00th=[ 8717], 00:45:49.088 | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9241], 60.00th=[ 9503], 00:45:49.088 | 70.00th=[ 9634], 80.00th=[ 9896], 90.00th=[10159], 95.00th=[10552], 00:45:49.088 | 99.00th=[11207], 99.50th=[11600], 99.90th=[48497], 99.95th=[50594], 00:45:49.088 | 99.99th=[51119] 00:45:49.088 bw ( KiB/s): min=37632, max=41984, per=35.47%, avg=41190.40, stdev=961.10, samples=20 00:45:49.088 iops : min= 294, max= 328, avg=321.80, stdev= 7.51, samples=20 00:45:49.088 lat (msec) : 10=84.88%, 20=14.97%, 50=0.09%, 100=0.06% 00:45:49.088 cpu : usr=94.38%, sys=5.42%, ctx=12, majf=0, minf=159 00:45:49.088 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:49.088 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.088 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.088 issued rwts: total=3220,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:49.088 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:49.088 filename0: (groupid=0, jobs=1): err= 0: pid=3533612: Tue Oct 1 16:03:26 2024 00:45:49.088 read: IOPS=296, BW=37.1MiB/s (38.9MB/s)(372MiB/10046msec) 00:45:49.088 slat (nsec): min=5759, max=31935, avg=6537.76, 
stdev=1172.78 00:45:49.088 clat (usec): min=5693, max=51676, avg=10094.57, stdev=1330.94 00:45:49.088 lat (usec): min=5700, max=51682, avg=10101.11, stdev=1330.94 00:45:49.088 clat percentiles (usec): 00:45:49.088 | 1.00th=[ 7701], 5.00th=[ 8717], 10.00th=[ 9110], 20.00th=[ 9372], 00:45:49.088 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10290], 00:45:49.088 | 70.00th=[10421], 80.00th=[10683], 90.00th=[11076], 95.00th=[11469], 00:45:49.088 | 99.00th=[12125], 99.50th=[12387], 99.90th=[13566], 99.95th=[47973], 00:45:49.088 | 99.99th=[51643] 00:45:49.088 bw ( KiB/s): min=36608, max=40192, per=32.87%, avg=38167.00, stdev=776.70, samples=19 00:45:49.088 iops : min= 286, max= 314, avg=298.16, stdev= 6.09, samples=19 00:45:49.088 lat (msec) : 10=46.32%, 20=53.61%, 50=0.03%, 100=0.03% 00:45:49.088 cpu : usr=94.47%, sys=5.31%, ctx=12, majf=0, minf=123 00:45:49.088 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:49.088 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.088 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.088 issued rwts: total=2979,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:49.088 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:49.088 filename0: (groupid=0, jobs=1): err= 0: pid=3533613: Tue Oct 1 16:03:26 2024 00:45:49.088 read: IOPS=291, BW=36.4MiB/s (38.2MB/s)(365MiB/10004msec) 00:45:49.088 slat (nsec): min=5709, max=39716, avg=6551.39, stdev=1349.01 00:45:49.088 clat (usec): min=7244, max=54233, avg=10283.48, stdev=2106.35 00:45:49.088 lat (usec): min=7251, max=54240, avg=10290.03, stdev=2106.36 00:45:49.088 clat percentiles (usec): 00:45:49.088 | 1.00th=[ 8291], 5.00th=[ 8848], 10.00th=[ 9241], 20.00th=[ 9503], 00:45:49.088 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10159], 60.00th=[10421], 00:45:49.088 | 70.00th=[10683], 80.00th=[10814], 90.00th=[11207], 95.00th=[11600], 00:45:49.088 | 99.00th=[12256], 99.50th=[12518], 
99.90th=[53216], 99.95th=[54264], 00:45:49.088 | 99.99th=[54264] 00:45:49.088 bw ( KiB/s): min=34304, max=39936, per=32.12%, avg=37299.00, stdev=1183.42, samples=19 00:45:49.088 iops : min= 268, max= 312, avg=291.37, stdev= 9.26, samples=19 00:45:49.088 lat (msec) : 10=40.95%, 20=58.85%, 100=0.21% 00:45:49.088 cpu : usr=94.88%, sys=4.90%, ctx=20, majf=0, minf=129 00:45:49.088 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:49.088 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.088 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.088 issued rwts: total=2916,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:49.088 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:49.088 00:45:49.088 Run status group 0 (all jobs): 00:45:49.088 READ: bw=113MiB/s (119MB/s), 36.4MiB/s-40.1MiB/s (38.2MB/s-42.0MB/s), io=1139MiB (1195MB), run=10004-10047msec 00:45:49.088 16:03:26 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:45:49.088 16:03:26 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:45:49.088 16:03:26 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:45:49.088 16:03:26 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:49.088 16:03:26 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:45:49.089 16:03:26 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:49.089 16:03:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:49.089 16:03:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:49.089 16:03:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:49.089 16:03:26 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:49.089 16:03:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:45:49.089 16:03:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:49.089 16:03:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:49.089 00:45:49.089 real 0m11.042s 00:45:49.089 user 0m39.782s 00:45:49.089 sys 0m1.897s 00:45:49.089 16:03:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:49.089 16:03:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:49.089 ************************************ 00:45:49.089 END TEST fio_dif_digest 00:45:49.089 ************************************ 00:45:49.089 16:03:26 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:45:49.089 16:03:26 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:45:49.089 16:03:26 nvmf_dif -- nvmf/common.sh@512 -- # nvmfcleanup 00:45:49.089 16:03:26 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:45:49.089 16:03:26 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:45:49.089 16:03:26 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:45:49.089 16:03:26 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:45:49.089 16:03:26 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:45:49.089 rmmod nvme_tcp 00:45:49.089 rmmod nvme_fabrics 00:45:49.089 rmmod nvme_keyring 00:45:49.089 16:03:26 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:45:49.089 16:03:26 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:45:49.089 16:03:26 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:45:49.089 16:03:26 nvmf_dif -- nvmf/common.sh@513 -- # '[' -n 3522962 ']' 00:45:49.089 16:03:26 nvmf_dif -- nvmf/common.sh@514 -- # killprocess 3522962 00:45:49.089 16:03:26 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 3522962 ']' 00:45:49.089 16:03:26 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 3522962 00:45:49.089 16:03:26 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:45:49.089 16:03:26 nvmf_dif -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:45:49.089 16:03:26 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3522962 00:45:49.089 16:03:26 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:45:49.089 16:03:26 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:45:49.089 16:03:26 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3522962' 00:45:49.089 killing process with pid 3522962 00:45:49.089 16:03:26 nvmf_dif -- common/autotest_common.sh@969 -- # kill 3522962 00:45:49.089 16:03:26 nvmf_dif -- common/autotest_common.sh@974 -- # wait 3522962 00:45:49.089 16:03:27 nvmf_dif -- nvmf/common.sh@516 -- # '[' iso == iso ']' 00:45:49.089 16:03:27 nvmf_dif -- nvmf/common.sh@517 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:45:51.002 Waiting for block devices as requested 00:45:51.263 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:45:51.263 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:45:51.263 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:45:51.523 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:45:51.523 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:45:51.523 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:45:51.523 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:45:51.783 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:45:51.783 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:45:52.044 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:45:52.044 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:45:52.044 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:45:52.305 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:45:52.305 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:45:52.305 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:45:52.566 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:45:52.566 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:45:52.828 16:03:32 nvmf_dif -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:45:52.828 16:03:32 nvmf_dif -- 
nvmf/common.sh@520 -- # nvmf_tcp_fini 00:45:52.828 16:03:32 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:45:52.828 16:03:32 nvmf_dif -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:45:52.828 16:03:32 nvmf_dif -- nvmf/common.sh@787 -- # iptables-save 00:45:52.828 16:03:32 nvmf_dif -- nvmf/common.sh@787 -- # iptables-restore 00:45:52.828 16:03:32 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:45:52.828 16:03:32 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:45:52.828 16:03:32 nvmf_dif -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:52.828 16:03:32 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:45:52.828 16:03:32 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:55.370 16:03:34 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:45:55.370 00:45:55.370 real 1m18.048s 00:45:55.370 user 7m50.178s 00:45:55.370 sys 0m21.913s 00:45:55.370 16:03:34 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:55.370 16:03:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:55.370 ************************************ 00:45:55.370 END TEST nvmf_dif 00:45:55.370 ************************************ 00:45:55.370 16:03:34 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:45:55.370 16:03:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:45:55.370 16:03:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:45:55.370 16:03:34 -- common/autotest_common.sh@10 -- # set +x 00:45:55.370 ************************************ 00:45:55.370 START TEST nvmf_abort_qd_sizes 00:45:55.370 ************************************ 00:45:55.370 16:03:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:45:55.370 * Looking for test 
storage... 00:45:55.370 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:45:55.370 16:03:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:45:55.370 16:03:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lcov --version 00:45:55.370 16:03:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:45:55.370 16:03:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:45:55.370 16:03:34 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:55.370 16:03:34 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:55.370 16:03:34 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:55.370 16:03:34 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:45:55.370 16:03:34 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:45:55.370 16:03:34 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:45:55.370 16:03:34 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:45:55.370 16:03:34 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:45:55.370 16:03:34 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:45:55.370 16:03:34 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:45:55.370 16:03:34 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:55.370 16:03:34 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:45:55.370 16:03:34 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:45:55.370 16:03:34 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:55.370 16:03:34 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:45:55.370 16:03:34 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:45:55.370 16:03:34 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:45:55.370 16:03:34 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:55.370 16:03:34 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:45:55.370 16:03:34 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:45:55.370 16:03:34 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:45:55.370 16:03:34 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:45:55.370 16:03:34 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:55.370 16:03:34 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:45:55.370 16:03:34 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:45:55.370 16:03:34 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:55.370 16:03:34 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:55.370 16:03:34 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:45:55.370 16:03:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:55.370 16:03:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:45:55.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:55.370 --rc genhtml_branch_coverage=1 00:45:55.370 --rc genhtml_function_coverage=1 00:45:55.370 --rc genhtml_legend=1 00:45:55.370 --rc geninfo_all_blocks=1 00:45:55.370 --rc geninfo_unexecuted_blocks=1 00:45:55.370 00:45:55.370 ' 00:45:55.370 16:03:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:45:55.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:55.370 --rc genhtml_branch_coverage=1 00:45:55.370 --rc genhtml_function_coverage=1 00:45:55.370 --rc genhtml_legend=1 00:45:55.370 --rc 
geninfo_all_blocks=1 00:45:55.370 --rc geninfo_unexecuted_blocks=1 00:45:55.370 00:45:55.370 ' 00:45:55.371 16:03:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:45:55.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:55.371 --rc genhtml_branch_coverage=1 00:45:55.371 --rc genhtml_function_coverage=1 00:45:55.371 --rc genhtml_legend=1 00:45:55.371 --rc geninfo_all_blocks=1 00:45:55.371 --rc geninfo_unexecuted_blocks=1 00:45:55.371 00:45:55.371 ' 00:45:55.371 16:03:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:45:55.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:55.371 --rc genhtml_branch_coverage=1 00:45:55.371 --rc genhtml_function_coverage=1 00:45:55.371 --rc genhtml_legend=1 00:45:55.371 --rc geninfo_all_blocks=1 00:45:55.371 --rc geninfo_unexecuted_blocks=1 00:45:55.371 00:45:55.371 ' 00:45:55.371 16:03:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:55.371 16:03:34 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:45:55.371 16:03:34 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:55.371 16:03:34 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:55.371 16:03:34 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:55.371 16:03:34 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:55.371 16:03:34 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:55.371 16:03:34 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:55.371 16:03:34 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:55.371 16:03:34 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:55.371 16:03:34 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:55.371 16:03:34 nvmf_abort_qd_sizes 
-- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:55.371 16:03:34 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:45:55.371 16:03:34 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:45:55.371 16:03:34 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:55.371 16:03:34 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:55.371 16:03:34 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:55.371 16:03:34 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:55.371 16:03:34 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:55.371 16:03:34 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:45:55.371 16:03:34 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:55.371 16:03:34 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:55.371 16:03:34 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:55.371 16:03:34 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:55.371 16:03:34 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:55.371 16:03:34 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:55.371 16:03:34 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:45:55.371 16:03:34 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:55.371 16:03:34 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:45:55.371 16:03:34 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:55.371 16:03:34 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:55.371 16:03:34 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:55.371 16:03:34 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:55.371 16:03:34 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:55.371 16:03:34 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:45:55.371 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:45:55.371 16:03:34 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:55.371 16:03:34 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:55.371 16:03:34 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:55.371 16:03:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:45:55.371 16:03:34 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:45:55.371 16:03:34 nvmf_abort_qd_sizes -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:55.371 16:03:34 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # prepare_net_devs 00:45:55.371 16:03:34 nvmf_abort_qd_sizes -- nvmf/common.sh@434 -- # local -g is_hw=no 00:45:55.371 16:03:34 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # remove_spdk_ns 00:45:55.371 16:03:34 nvmf_abort_qd_sizes -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:55.371 16:03:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:45:55.371 16:03:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:55.371 16:03:34 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:45:55.371 16:03:34 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:45:55.371 16:03:34 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:45:55.371 16:03:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:46:03.562 16:03:41 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:46:03.562 Found 0000:31:00.0 (0x8086 - 0x159b) 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:46:03.562 Found 0000:31:00.1 (0x8086 - 0x159b) 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ up == up ]] 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:46:03.562 Found net devices under 0000:31:00.0: cvl_0_0 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ up == up ]] 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:46:03.562 
16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:46:03.562 Found net devices under 0000:31:00.1: cvl_0_1 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # is_hw=yes 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:46:03.562 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:46:03.563 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:46:03.563 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:46:03.563 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:46:03.563 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:46:03.563 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:46:03.563 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:46:03.563 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:46:03.563 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:46:03.563 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:46:03.563 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:46:03.563 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:46:03.563 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:46:03.563 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:46:03.563 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:46:03.563 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:46:03.563 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:46:03.563 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:46:03.563 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:46:03.563 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:46:03.563 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:46:03.563 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:46:03.563 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:46:03.563 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:46:03.563 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:46:03.563 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:46:03.563 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.664 ms 00:46:03.563 00:46:03.563 --- 10.0.0.2 ping statistics --- 00:46:03.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:03.563 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms 00:46:03.563 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:46:03.563 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:46:03.563 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:46:03.563 00:46:03.563 --- 10.0.0.1 ping statistics --- 00:46:03.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:03.563 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:46:03.563 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:46:03.563 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # return 0 00:46:03.563 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # '[' iso == iso ']' 00:46:03.563 16:03:41 nvmf_abort_qd_sizes -- nvmf/common.sh@475 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:46:06.112 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:46:06.112 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:46:06.112 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:46:06.112 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:46:06.112 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:46:06.112 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:46:06.112 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:46:06.112 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:46:06.112 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:46:06.112 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:46:06.112 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:46:06.112 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:46:06.112 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:46:06.112 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:46:06.112 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:46:06.372 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:46:06.372 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:46:06.633 16:03:45 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:46:06.633 16:03:45 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:46:06.633 16:03:45 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:46:06.633 16:03:45 
nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:46:06.633 16:03:45 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:46:06.633 16:03:45 nvmf_abort_qd_sizes -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:46:06.633 16:03:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:46:06.633 16:03:45 nvmf_abort_qd_sizes -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:46:06.633 16:03:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:46:06.633 16:03:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:06.633 16:03:46 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # nvmfpid=3543160 00:46:06.633 16:03:46 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # waitforlisten 3543160 00:46:06.633 16:03:46 nvmf_abort_qd_sizes -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:46:06.633 16:03:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 3543160 ']' 00:46:06.633 16:03:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:06.633 16:03:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:46:06.633 16:03:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:06.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:06.633 16:03:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:46:06.633 16:03:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:06.633 [2024-10-01 16:03:46.059281] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
00:46:06.633 [2024-10-01 16:03:46.059334] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:06.896 [2024-10-01 16:03:46.095706] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:46:06.896 [2024-10-01 16:03:46.141979] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:46:06.896 [2024-10-01 16:03:46.175700] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:06.896 [2024-10-01 16:03:46.175736] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:46:06.896 [2024-10-01 16:03:46.175744] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:06.896 [2024-10-01 16:03:46.175751] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:46:06.896 [2024-10-01 16:03:46.175757] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:46:06.896 [2024-10-01 16:03:46.175914] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:46:06.896 [2024-10-01 16:03:46.176047] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:46:06.896 [2024-10-01 16:03:46.176261] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:46:06.896 [2024-10-01 16:03:46.176263] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:46:07.470 16:03:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:46:07.470 16:03:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:46:07.470 16:03:46 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:46:07.470 16:03:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:46:07.470 16:03:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:07.470 16:03:46 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:07.470 16:03:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:46:07.470 16:03:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:46:07.470 16:03:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:46:07.470 16:03:46 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:46:07.470 16:03:46 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:46:07.470 16:03:46 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:46:07.470 16:03:46 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:46:07.470 16:03:46 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:46:07.470 16:03:46 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 
00:46:07.470 16:03:46 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:46:07.470 16:03:46 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:46:07.470 16:03:46 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:46:07.470 16:03:46 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:46:07.470 16:03:46 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:46:07.470 16:03:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:46:07.470 16:03:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:46:07.731 16:03:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:46:07.731 16:03:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:46:07.731 16:03:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:46:07.731 16:03:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:07.731 ************************************ 00:46:07.731 START TEST spdk_target_abort 00:46:07.731 ************************************ 00:46:07.731 16:03:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:46:07.731 16:03:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:46:07.731 16:03:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:46:07.731 16:03:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:07.731 16:03:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:07.993 spdk_targetn1 00:46:07.993 16:03:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:07.993 16:03:47 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:46:07.993 16:03:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:07.993 16:03:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:07.993 [2024-10-01 16:03:47.281397] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:07.993 16:03:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:07.993 16:03:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:46:07.993 16:03:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:07.993 16:03:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:07.993 16:03:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:07.993 16:03:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:46:07.993 16:03:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:07.993 16:03:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:07.993 16:03:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:07.993 16:03:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:46:07.993 16:03:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:07.993 16:03:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:07.993 [2024-10-01 16:03:47.321782] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:07.993 16:03:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:07.993 16:03:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:46:07.993 16:03:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:46:07.993 16:03:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:46:07.993 16:03:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:46:07.993 16:03:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:46:07.993 16:03:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:46:07.993 16:03:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:46:07.993 16:03:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:46:07.993 16:03:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:46:07.993 16:03:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:07.993 16:03:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:46:07.993 16:03:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:07.993 16:03:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:46:07.993 16:03:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:07.993 16:03:47 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:46:07.993 16:03:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:07.993 16:03:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:46:07.993 16:03:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:07.993 16:03:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:07.993 16:03:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:07.993 16:03:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:08.254 [2024-10-01 16:03:47.564425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:440 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:46:08.254 [2024-10-01 16:03:47.564470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0038 p:1 m:0 dnr:0 00:46:08.254 [2024-10-01 16:03:47.564760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:456 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:46:08.254 [2024-10-01 16:03:47.564777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:003a p:1 m:0 dnr:0 00:46:08.254 [2024-10-01 16:03:47.565945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:512 len:8 PRP1 0x2000078c2000 PRP2 0x0 00:46:08.254 [2024-10-01 
16:03:47.565964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0042 p:1 m:0 dnr:0 00:46:08.254 [2024-10-01 16:03:47.572535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:664 len:8 PRP1 0x2000078c8000 PRP2 0x0 00:46:08.254 [2024-10-01 16:03:47.572561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0056 p:1 m:0 dnr:0 00:46:08.254 [2024-10-01 16:03:47.596414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:1352 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:46:08.254 [2024-10-01 16:03:47.596445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00aa p:1 m:0 dnr:0 00:46:08.254 [2024-10-01 16:03:47.620535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:2024 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:46:08.254 [2024-10-01 16:03:47.620568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00fe p:1 m:0 dnr:0 00:46:08.254 [2024-10-01 16:03:47.636533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:2488 len:8 PRP1 0x2000078c2000 PRP2 0x0 00:46:08.254 [2024-10-01 16:03:47.636563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:46:11.557 Initializing NVMe Controllers 00:46:11.557 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:46:11.557 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:11.557 Initialization complete. Launching workers. 
00:46:11.557 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11704, failed: 7 00:46:11.557 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2712, failed to submit 8999 00:46:11.557 success 739, unsuccessful 1973, failed 0 00:46:11.558 16:03:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:11.558 16:03:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:11.558 [2024-10-01 16:03:50.711511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:181 nsid:1 lba:432 len:8 PRP1 0x200007c5c000 PRP2 0x0 00:46:11.558 [2024-10-01 16:03:50.711549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:181 cdw0:0 sqhd:0038 p:1 m:0 dnr:0 00:46:11.558 [2024-10-01 16:03:50.803052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:169 nsid:1 lba:2384 len:8 PRP1 0x200007c60000 PRP2 0x0 00:46:11.558 [2024-10-01 16:03:50.803078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:169 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:46:11.558 [2024-10-01 16:03:50.827013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:170 nsid:1 lba:2944 len:8 PRP1 0x200007c40000 PRP2 0x0 00:46:11.558 [2024-10-01 16:03:50.827036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:170 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:46:11.558 [2024-10-01 16:03:50.875058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:171 nsid:1 lba:3968 len:8 PRP1 0x200007c40000 PRP2 0x0 00:46:11.558 [2024-10-01 16:03:50.875081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY 
REQUEST (00/07) qid:4 cid:171 cdw0:0 sqhd:00fc p:0 m:0 dnr:0 00:46:13.471 [2024-10-01 16:03:52.464961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:186 nsid:1 lba:40344 len:8 PRP1 0x200007c58000 PRP2 0x0 00:46:13.471 [2024-10-01 16:03:52.464991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:186 cdw0:0 sqhd:00b7 p:0 m:0 dnr:0 00:46:14.413 Initializing NVMe Controllers 00:46:14.413 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:46:14.413 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:14.413 Initialization complete. Launching workers. 00:46:14.413 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8527, failed: 5 00:46:14.413 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1233, failed to submit 7299 00:46:14.413 success 358, unsuccessful 875, failed 0 00:46:14.413 16:03:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:14.413 16:03:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:14.673 [2024-10-01 16:03:54.021235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:168 nsid:1 lba:3616 len:8 PRP1 0x2000078ec000 PRP2 0x0 00:46:14.673 [2024-10-01 16:03:54.021260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:168 cdw0:0 sqhd:0090 p:0 m:0 dnr:0 00:46:17.974 Initializing NVMe Controllers 00:46:17.974 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:46:17.974 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:17.974 
Initialization complete. Launching workers. 00:46:17.974 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 43886, failed: 1 00:46:17.974 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2717, failed to submit 41170 00:46:17.974 success 607, unsuccessful 2110, failed 0 00:46:17.974 16:03:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:46:17.974 16:03:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:17.974 16:03:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:17.974 16:03:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:17.974 16:03:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:46:17.974 16:03:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:17.974 16:03:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:19.888 16:03:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:19.888 16:03:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3543160 00:46:19.888 16:03:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 3543160 ']' 00:46:19.888 16:03:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 3543160 00:46:19.888 16:03:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:46:19.888 16:03:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:46:19.888 16:03:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3543160 00:46:19.888 
16:03:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:46:19.888 16:03:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:46:19.888 16:03:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3543160' 00:46:19.888 killing process with pid 3543160 00:46:19.888 16:03:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 3543160 00:46:19.888 16:03:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 3543160 00:46:19.888 00:46:19.888 real 0m12.086s 00:46:19.888 user 0m49.440s 00:46:19.888 sys 0m1.886s 00:46:19.888 16:03:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:46:19.888 16:03:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:19.888 ************************************ 00:46:19.888 END TEST spdk_target_abort 00:46:19.888 ************************************ 00:46:19.888 16:03:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:46:19.888 16:03:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:46:19.888 16:03:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:46:19.888 16:03:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:19.888 ************************************ 00:46:19.888 START TEST kernel_target_abort 00:46:19.888 ************************************ 00:46:19.888 16:03:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:46:19.888 16:03:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:46:19.888 16:03:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@765 -- # local ip 00:46:19.888 16:03:59 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@766 -- # ip_candidates=() 00:46:19.888 16:03:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@766 -- # local -A ip_candidates 00:46:19.888 16:03:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:46:19.888 16:03:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:46:19.888 16:03:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:46:19.888 16:03:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:46:19.888 16:03:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:46:19.888 16:03:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:46:19.888 16:03:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:46:19.888 16:03:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:46:19.888 16:03:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:46:19.888 16:03:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:46:19.888 16:03:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:19.888 16:03:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:46:19.888 16:03:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:46:19.888 16:03:59 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@663 -- # local block nvme 00:46:19.888 16:03:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # [[ ! -e /sys/module/nvmet ]] 00:46:19.888 16:03:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@666 -- # modprobe nvmet 00:46:19.888 16:03:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:46:19.888 16:03:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:46:23.190 Waiting for block devices as requested 00:46:23.190 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:46:23.451 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:46:23.451 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:46:23.451 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:46:23.711 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:46:23.711 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:46:23.711 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:46:23.973 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:46:23.973 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:46:24.233 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:46:24.233 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:46:24.233 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:46:24.494 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:46:24.494 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:46:24.494 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:46:24.756 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:46:24.756 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:46:25.017 16:04:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:46:25.017 16:04:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:46:25.017 16:04:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:46:25.017 16:04:04 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:46:25.017 16:04:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:46:25.017 16:04:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:46:25.017 16:04:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:46:25.017 16:04:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:46:25.017 16:04:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:46:25.017 No valid GPT data, bailing 00:46:25.017 16:04:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:46:25.017 16:04:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:46:25.017 16:04:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:46:25.017 16:04:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:46:25.017 16:04:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # [[ -b /dev/nvme0n1 ]] 00:46:25.017 16:04:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:25.017 16:04:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:46:25.017 16:04:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:46:25.017 16:04:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:46:25.017 16:04:04 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@691 -- # echo 1 00:46:25.017 16:04:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@692 -- # echo /dev/nvme0n1 00:46:25.017 16:04:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1 00:46:25.017 16:04:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:46:25.017 16:04:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo tcp 00:46:25.017 16:04:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 4420 00:46:25.018 16:04:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo ipv4 00:46:25.018 16:04:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:46:25.018 16:04:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:46:25.279 00:46:25.279 Discovery Log Number of Records 2, Generation counter 2 00:46:25.279 =====Discovery Log Entry 0====== 00:46:25.279 trtype: tcp 00:46:25.279 adrfam: ipv4 00:46:25.279 subtype: current discovery subsystem 00:46:25.279 treq: not specified, sq flow control disable supported 00:46:25.279 portid: 1 00:46:25.279 trsvcid: 4420 00:46:25.279 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:46:25.279 traddr: 10.0.0.1 00:46:25.279 eflags: none 00:46:25.279 sectype: none 00:46:25.279 =====Discovery Log Entry 1====== 00:46:25.279 trtype: tcp 00:46:25.279 adrfam: ipv4 00:46:25.279 subtype: nvme subsystem 00:46:25.279 treq: not specified, sq flow control disable supported 00:46:25.279 portid: 1 00:46:25.279 trsvcid: 4420 00:46:25.279 subnqn: nqn.2016-06.io.spdk:testnqn 00:46:25.279 traddr: 10.0.0.1 00:46:25.279 eflags: none 00:46:25.279 sectype: none 00:46:25.279 16:04:04 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:46:25.279 16:04:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:46:25.279 16:04:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:46:25.279 16:04:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:46:25.279 16:04:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:46:25.279 16:04:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:46:25.279 16:04:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:46:25.279 16:04:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:46:25.279 16:04:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:46:25.279 16:04:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:25.279 16:04:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:46:25.279 16:04:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:25.279 16:04:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:46:25.279 16:04:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:25.279 16:04:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:46:25.279 16:04:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:46:25.279 16:04:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:46:25.279 16:04:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:25.279 16:04:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:25.279 16:04:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:25.279 16:04:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:28.579 Initializing NVMe Controllers 00:46:28.579 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:46:28.579 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:28.579 Initialization complete. Launching workers. 
00:46:28.579 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67620, failed: 0 00:46:28.579 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 67620, failed to submit 0 00:46:28.579 success 0, unsuccessful 67620, failed 0 00:46:28.579 16:04:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:28.579 16:04:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:31.875 Initializing NVMe Controllers 00:46:31.875 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:46:31.875 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:31.875 Initialization complete. Launching workers. 00:46:31.875 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 120019, failed: 0 00:46:31.875 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30194, failed to submit 89825 00:46:31.875 success 0, unsuccessful 30194, failed 0 00:46:31.875 16:04:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:31.875 16:04:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:34.420 Initializing NVMe Controllers 00:46:34.420 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:46:34.420 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:34.420 Initialization complete. Launching workers. 
00:46:34.420 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 146386, failed: 0 00:46:34.420 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36654, failed to submit 109732 00:46:34.420 success 0, unsuccessful 36654, failed 0 00:46:34.420 16:04:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:46:34.420 16:04:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:46:34.420 16:04:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # echo 0 00:46:34.420 16:04:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:34.420 16:04:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:46:34.681 16:04:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:46:34.681 16:04:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:34.681 16:04:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:46:34.681 16:04:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:46:34.681 16:04:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@722 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:46:37.981 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:46:37.981 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:46:37.981 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:46:37.981 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:46:37.981 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:46:37.981 
0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:46:37.981 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:46:37.981 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:46:38.242 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:46:38.242 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:46:38.242 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:46:38.242 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:46:38.242 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:46:38.242 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:46:38.242 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:46:38.242 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:46:40.158 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:46:40.158 00:46:40.158 real 0m20.472s 00:46:40.158 user 0m10.128s 00:46:40.158 sys 0m6.018s 00:46:40.158 16:04:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:46:40.158 16:04:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:40.158 ************************************ 00:46:40.158 END TEST kernel_target_abort 00:46:40.158 ************************************ 00:46:40.420 16:04:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:46:40.420 16:04:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:46:40.420 16:04:19 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # nvmfcleanup 00:46:40.420 16:04:19 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:46:40.420 16:04:19 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:46:40.420 16:04:19 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:46:40.420 16:04:19 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:46:40.420 16:04:19 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:46:40.420 rmmod nvme_tcp 00:46:40.420 rmmod nvme_fabrics 00:46:40.420 rmmod nvme_keyring 00:46:40.420 16:04:19 nvmf_abort_qd_sizes -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:46:40.420 16:04:19 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:46:40.420 16:04:19 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:46:40.420 16:04:19 nvmf_abort_qd_sizes -- nvmf/common.sh@513 -- # '[' -n 3543160 ']' 00:46:40.420 16:04:19 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # killprocess 3543160 00:46:40.420 16:04:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 3543160 ']' 00:46:40.420 16:04:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 3543160 00:46:40.420 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3543160) - No such process 00:46:40.420 16:04:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 3543160 is not found' 00:46:40.420 Process with pid 3543160 is not found 00:46:40.420 16:04:19 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # '[' iso == iso ']' 00:46:40.420 16:04:19 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:46:43.724 Waiting for block devices as requested 00:46:43.724 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:46:43.984 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:46:43.984 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:46:43.984 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:46:44.245 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:46:44.245 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:46:44.245 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:46:44.506 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:46:44.506 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:46:44.768 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:46:44.768 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:46:44.768 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:46:45.030 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:46:45.030 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 
00:46:45.030 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:46:45.030 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:46:45.292 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:46:45.553 16:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:46:45.553 16:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:46:45.553 16:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:46:45.553 16:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # iptables-save 00:46:45.553 16:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:46:45.553 16:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # iptables-restore 00:46:45.553 16:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:46:45.553 16:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:46:45.553 16:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:45.553 16:04:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:46:45.553 16:04:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:48.101 16:04:26 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:46:48.101 00:46:48.101 real 0m52.607s 00:46:48.101 user 1m5.049s 00:46:48.101 sys 0m19.069s 00:46:48.101 16:04:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:46:48.101 16:04:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:48.101 ************************************ 00:46:48.101 END TEST nvmf_abort_qd_sizes 00:46:48.101 ************************************ 00:46:48.101 16:04:26 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:46:48.101 16:04:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:46:48.101 16:04:26 -- common/autotest_common.sh@1107 
-- # xtrace_disable 00:46:48.101 16:04:26 -- common/autotest_common.sh@10 -- # set +x 00:46:48.101 ************************************ 00:46:48.101 START TEST keyring_file 00:46:48.101 ************************************ 00:46:48.101 16:04:27 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:46:48.101 * Looking for test storage... 00:46:48.101 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:46:48.101 16:04:27 keyring_file -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:46:48.101 16:04:27 keyring_file -- common/autotest_common.sh@1681 -- # lcov --version 00:46:48.101 16:04:27 keyring_file -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:46:48.101 16:04:27 keyring_file -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:46:48.101 16:04:27 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:46:48.101 16:04:27 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:48.101 16:04:27 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:48.101 16:04:27 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:46:48.101 16:04:27 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:46:48.101 16:04:27 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:46:48.101 16:04:27 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:46:48.101 16:04:27 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:46:48.101 16:04:27 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:46:48.101 16:04:27 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:46:48.101 16:04:27 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:46:48.101 16:04:27 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:46:48.101 16:04:27 keyring_file -- scripts/common.sh@345 -- # : 1 00:46:48.101 16:04:27 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:46:48.101 16:04:27 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:46:48.101 16:04:27 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:46:48.101 16:04:27 keyring_file -- scripts/common.sh@353 -- # local d=1 00:46:48.101 16:04:27 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:48.101 16:04:27 keyring_file -- scripts/common.sh@355 -- # echo 1 00:46:48.101 16:04:27 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:46:48.101 16:04:27 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:46:48.101 16:04:27 keyring_file -- scripts/common.sh@353 -- # local d=2 00:46:48.101 16:04:27 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:48.102 16:04:27 keyring_file -- scripts/common.sh@355 -- # echo 2 00:46:48.102 16:04:27 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:46:48.102 16:04:27 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:48.102 16:04:27 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:48.102 16:04:27 keyring_file -- scripts/common.sh@368 -- # return 0 00:46:48.102 16:04:27 keyring_file -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:48.102 16:04:27 keyring_file -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:46:48.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:48.102 --rc genhtml_branch_coverage=1 00:46:48.102 --rc genhtml_function_coverage=1 00:46:48.102 --rc genhtml_legend=1 00:46:48.102 --rc geninfo_all_blocks=1 00:46:48.102 --rc geninfo_unexecuted_blocks=1 00:46:48.102 00:46:48.102 ' 00:46:48.102 16:04:27 keyring_file -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:46:48.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:48.102 --rc genhtml_branch_coverage=1 00:46:48.102 --rc genhtml_function_coverage=1 00:46:48.102 --rc genhtml_legend=1 00:46:48.102 --rc geninfo_all_blocks=1 00:46:48.102 --rc 
geninfo_unexecuted_blocks=1 00:46:48.102 00:46:48.102 ' 00:46:48.102 16:04:27 keyring_file -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:46:48.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:48.102 --rc genhtml_branch_coverage=1 00:46:48.102 --rc genhtml_function_coverage=1 00:46:48.102 --rc genhtml_legend=1 00:46:48.102 --rc geninfo_all_blocks=1 00:46:48.102 --rc geninfo_unexecuted_blocks=1 00:46:48.102 00:46:48.102 ' 00:46:48.102 16:04:27 keyring_file -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:46:48.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:48.102 --rc genhtml_branch_coverage=1 00:46:48.102 --rc genhtml_function_coverage=1 00:46:48.102 --rc genhtml_legend=1 00:46:48.102 --rc geninfo_all_blocks=1 00:46:48.102 --rc geninfo_unexecuted_blocks=1 00:46:48.102 00:46:48.102 ' 00:46:48.102 16:04:27 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:46:48.102 16:04:27 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:46:48.102 16:04:27 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:46:48.102 16:04:27 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:48.102 16:04:27 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:48.102 16:04:27 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:48.102 16:04:27 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:48.102 16:04:27 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:46:48.102 16:04:27 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:48.102 16:04:27 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:48.102 16:04:27 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:48.102 16:04:27 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:48.102 16:04:27 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:48.102 16:04:27 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:46:48.102 16:04:27 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:46:48.102 16:04:27 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:48.102 16:04:27 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:48.102 16:04:27 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:46:48.102 16:04:27 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:48.102 16:04:27 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:46:48.102 16:04:27 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:46:48.102 16:04:27 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:48.102 16:04:27 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:48.102 16:04:27 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:48.102 16:04:27 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:48.102 16:04:27 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:48.102 16:04:27 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:48.102 16:04:27 keyring_file -- paths/export.sh@5 -- # export PATH 00:46:48.102 16:04:27 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:48.102 16:04:27 keyring_file -- nvmf/common.sh@51 -- # : 0 00:46:48.102 16:04:27 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:46:48.102 16:04:27 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:46:48.102 16:04:27 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:48.102 16:04:27 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:48.102 16:04:27 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:48.102 16:04:27 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:46:48.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:46:48.102 16:04:27 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:46:48.102 16:04:27 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:46:48.102 16:04:27 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:46:48.102 16:04:27 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:46:48.102 16:04:27 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:46:48.102 16:04:27 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:46:48.102 16:04:27 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:46:48.102 16:04:27 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:46:48.102 16:04:27 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:46:48.102 16:04:27 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:46:48.102 16:04:27 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:46:48.102 16:04:27 keyring_file -- keyring/common.sh@17 -- # name=key0 00:46:48.102 16:04:27 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:46:48.102 16:04:27 keyring_file -- keyring/common.sh@17 -- # digest=0 00:46:48.102 16:04:27 keyring_file -- keyring/common.sh@18 -- # mktemp 00:46:48.102 16:04:27 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.9sEUoibBsX 00:46:48.102 16:04:27 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:46:48.102 16:04:27 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:46:48.102 16:04:27 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:46:48.102 16:04:27 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:46:48.102 16:04:27 keyring_file -- nvmf/common.sh@728 
-- # key=00112233445566778899aabbccddeeff 00:46:48.102 16:04:27 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:46:48.102 16:04:27 keyring_file -- nvmf/common.sh@729 -- # python - 00:46:48.102 16:04:27 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.9sEUoibBsX 00:46:48.102 16:04:27 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.9sEUoibBsX 00:46:48.102 16:04:27 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.9sEUoibBsX 00:46:48.102 16:04:27 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:46:48.102 16:04:27 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:46:48.102 16:04:27 keyring_file -- keyring/common.sh@17 -- # name=key1 00:46:48.102 16:04:27 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:46:48.102 16:04:27 keyring_file -- keyring/common.sh@17 -- # digest=0 00:46:48.102 16:04:27 keyring_file -- keyring/common.sh@18 -- # mktemp 00:46:48.102 16:04:27 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.FgjAdCCdlG 00:46:48.102 16:04:27 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:46:48.102 16:04:27 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:46:48.102 16:04:27 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:46:48.102 16:04:27 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:46:48.102 16:04:27 keyring_file -- nvmf/common.sh@728 -- # key=112233445566778899aabbccddeeff00 00:46:48.102 16:04:27 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:46:48.102 16:04:27 keyring_file -- nvmf/common.sh@729 -- # python - 00:46:48.102 16:04:27 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.FgjAdCCdlG 00:46:48.102 16:04:27 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.FgjAdCCdlG 00:46:48.102 16:04:27 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.FgjAdCCdlG 
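The `format_interchange_psk` / `format_key` calls traced above wrap a raw hex PSK into the NVMe/TCP TLS interchange format (`NVMeTLSkey-1:<digest>:<base64>:`). A hedged stand-alone sketch of that step, mirroring the `python -` heredoc style the traced `nvmf/common.sh` uses — the little-endian CRC32 suffix and the two-digit hex digest field are assumptions about the interchange format, not taken from this log:

```shell
# Sketch: turn a hex PSK into an NVMe TLS interchange key string,
# as format_interchange_psk appears to do in the trace above.
key=00112233445566778899aabbccddeeff
digest=0
psk=$(python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib

raw = bytes.fromhex(sys.argv[1])
# assumption: a little-endian CRC32 of the key is appended before base64
crc = zlib.crc32(raw).to_bytes(4, "little")
print("NVMeTLSkey-1:{:02x}:{}:".format(int(sys.argv[2]),
                                       base64.b64encode(raw + crc).decode()))
EOF
)
echo "$psk"
```

The result is what the test then writes to the `mktemp` path and registers with `keyring_file_add_key`.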
00:46:48.102 16:04:27 keyring_file -- keyring/file.sh@30 -- # tgtpid=3553691 00:46:48.102 16:04:27 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3553691 00:46:48.102 16:04:27 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:46:48.102 16:04:27 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 3553691 ']' 00:46:48.102 16:04:27 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:48.102 16:04:27 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:46:48.102 16:04:27 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:48.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:48.102 16:04:27 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:46:48.102 16:04:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:48.102 [2024-10-01 16:04:27.449931] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:46:48.103 [2024-10-01 16:04:27.450002] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3553691 ] 00:46:48.103 [2024-10-01 16:04:27.481917] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:46:48.103 [2024-10-01 16:04:27.505042] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:48.103 [2024-10-01 16:04:27.534682] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:46:48.365 16:04:27 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:46:48.365 16:04:27 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:46:48.365 16:04:27 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:46:48.365 16:04:27 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:48.365 16:04:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:48.365 [2024-10-01 16:04:27.713545] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:48.365 null0 00:46:48.365 [2024-10-01 16:04:27.745597] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:46:48.365 [2024-10-01 16:04:27.745964] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:46:48.365 16:04:27 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:48.365 16:04:27 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:46:48.365 16:04:27 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:46:48.365 16:04:27 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:46:48.365 16:04:27 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:46:48.365 16:04:27 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:46:48.365 16:04:27 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:46:48.365 16:04:27 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:46:48.365 16:04:27 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t 
tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:46:48.365 16:04:27 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:48.365 16:04:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:48.365 [2024-10-01 16:04:27.777663] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:46:48.365 request: 00:46:48.365 { 00:46:48.365 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:46:48.365 "secure_channel": false, 00:46:48.365 "listen_address": { 00:46:48.365 "trtype": "tcp", 00:46:48.365 "traddr": "127.0.0.1", 00:46:48.365 "trsvcid": "4420" 00:46:48.365 }, 00:46:48.365 "method": "nvmf_subsystem_add_listener", 00:46:48.365 "req_id": 1 00:46:48.365 } 00:46:48.365 Got JSON-RPC error response 00:46:48.365 response: 00:46:48.365 { 00:46:48.365 "code": -32602, 00:46:48.365 "message": "Invalid parameters" 00:46:48.365 } 00:46:48.365 16:04:27 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:46:48.365 16:04:27 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:46:48.365 16:04:27 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:46:48.365 16:04:27 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:46:48.365 16:04:27 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:46:48.365 16:04:27 keyring_file -- keyring/file.sh@47 -- # bperfpid=3553742 00:46:48.365 16:04:27 keyring_file -- keyring/file.sh@49 -- # waitforlisten 3553742 /var/tmp/bperf.sock 00:46:48.365 16:04:27 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:46:48.365 16:04:27 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 3553742 ']' 00:46:48.365 16:04:27 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:46:48.365 16:04:27 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:46:48.365 16:04:27 
keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:46:48.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:46:48.365 16:04:27 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:46:48.365 16:04:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:48.626 [2024-10-01 16:04:27.843528] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:46:48.626 [2024-10-01 16:04:27.843576] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3553742 ] 00:46:48.626 [2024-10-01 16:04:27.873143] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
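The `NOT rpc_cmd …` pattern traced above (with `es=1` on failure) runs a command that is *expected* to fail — here, adding a listener that already exists — and treats that failure as a pass. A hedged mini version of the helper; the real `autotest_common.sh` version also validates the argument via `valid_exec_arg`, which is omitted here:

```shell
# Sketch of the autotest "NOT" expected-failure wrapper:
# invert the wrapped command's exit status.
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded
    fi
    return 0        # command failed, as expected
}

NOT false && result=pass || result=fail
echo "$result"
```

With this inversion, the `Listener already exists` JSON-RPC error in the trace becomes a successful negative test.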
00:46:48.626 [2024-10-01 16:04:27.922924] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:48.626 [2024-10-01 16:04:27.954192] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:46:49.197 16:04:28 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:46:49.197 16:04:28 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:46:49.197 16:04:28 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.9sEUoibBsX 00:46:49.197 16:04:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.9sEUoibBsX 00:46:49.458 16:04:28 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.FgjAdCCdlG 00:46:49.458 16:04:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.FgjAdCCdlG 00:46:49.719 16:04:28 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:46:49.719 16:04:28 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:46:49.719 16:04:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:49.719 16:04:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:49.719 16:04:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:49.719 16:04:29 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.9sEUoibBsX == \/\t\m\p\/\t\m\p\.\9\s\E\U\o\i\b\B\s\X ]] 00:46:49.719 16:04:29 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:46:49.719 16:04:29 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:46:49.719 16:04:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:49.719 16:04:29 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:49.719 16:04:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:49.979 16:04:29 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.FgjAdCCdlG == \/\t\m\p\/\t\m\p\.\F\g\j\A\d\C\C\d\l\G ]] 00:46:49.979 16:04:29 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:46:49.979 16:04:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:49.979 16:04:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:49.979 16:04:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:49.979 16:04:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:49.979 16:04:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:50.241 16:04:29 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:46:50.241 16:04:29 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:46:50.241 16:04:29 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:46:50.241 16:04:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:50.241 16:04:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:50.241 16:04:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:50.241 16:04:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:50.241 16:04:29 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:46:50.241 16:04:29 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:50.241 16:04:29 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:50.501 [2024-10-01 16:04:29.810016] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:46:50.501 nvme0n1 00:46:50.501 16:04:29 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:46:50.501 16:04:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:50.501 16:04:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:50.501 16:04:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:50.501 16:04:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:50.501 16:04:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:50.761 16:04:30 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:46:50.761 16:04:30 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:46:50.761 16:04:30 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:46:50.761 16:04:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:50.761 16:04:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:50.761 16:04:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:50.761 16:04:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:51.021 16:04:30 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:46:51.021 16:04:30 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:46:51.021 Running I/O for 1 seconds... 
00:46:51.962 19580.00 IOPS, 76.48 MiB/s 00:46:51.962 Latency(us) 00:46:51.962 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:51.962 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:46:51.962 nvme0n1 : 1.00 19629.42 76.68 0.00 0.00 6509.26 4014.08 17257.81 00:46:51.962 =================================================================================================================== 00:46:51.962 Total : 19629.42 76.68 0.00 0.00 6509.26 4014.08 17257.81 00:46:51.962 { 00:46:51.962 "results": [ 00:46:51.962 { 00:46:51.962 "job": "nvme0n1", 00:46:51.962 "core_mask": "0x2", 00:46:51.962 "workload": "randrw", 00:46:51.962 "percentage": 50, 00:46:51.962 "status": "finished", 00:46:51.962 "queue_depth": 128, 00:46:51.962 "io_size": 4096, 00:46:51.962 "runtime": 1.004003, 00:46:51.962 "iops": 19629.423418057515, 00:46:51.962 "mibps": 76.67743522678717, 00:46:51.962 "io_failed": 0, 00:46:51.962 "io_timeout": 0, 00:46:51.962 "avg_latency_us": 6509.258534605236, 00:46:51.962 "min_latency_us": 4014.08, 00:46:51.962 "max_latency_us": 17257.81333333333 00:46:51.962 } 00:46:51.962 ], 00:46:51.962 "core_count": 1 00:46:51.962 } 00:46:51.962 16:04:31 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:46:51.962 16:04:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:46:52.222 16:04:31 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:46:52.222 16:04:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:52.222 16:04:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:52.222 16:04:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:52.222 16:04:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:52.222 
16:04:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:52.483 16:04:31 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:46:52.483 16:04:31 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:46:52.483 16:04:31 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:46:52.483 16:04:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:52.483 16:04:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:52.483 16:04:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:52.483 16:04:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:52.743 16:04:31 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:46:52.743 16:04:31 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:46:52.743 16:04:31 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:46:52.743 16:04:31 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:46:52.743 16:04:31 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:46:52.743 16:04:31 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:46:52.743 16:04:31 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:46:52.743 16:04:31 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:46:52.743 16:04:31 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:46:52.744 
16:04:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:46:52.744 [2024-10-01 16:04:32.129395] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:46:52.744 [2024-10-01 16:04:32.130188] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a10d0 (107): Transport endpoint is not connected 00:46:52.744 [2024-10-01 16:04:32.131184] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a10d0 (9): Bad file descriptor 00:46:52.744 [2024-10-01 16:04:32.132185] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:46:52.744 [2024-10-01 16:04:32.132194] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:46:52.744 [2024-10-01 16:04:32.132199] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:46:52.744 [2024-10-01 16:04:32.132205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:46:52.744 request: 00:46:52.744 { 00:46:52.744 "name": "nvme0", 00:46:52.744 "trtype": "tcp", 00:46:52.744 "traddr": "127.0.0.1", 00:46:52.744 "adrfam": "ipv4", 00:46:52.744 "trsvcid": "4420", 00:46:52.744 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:52.744 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:52.744 "prchk_reftag": false, 00:46:52.744 "prchk_guard": false, 00:46:52.744 "hdgst": false, 00:46:52.744 "ddgst": false, 00:46:52.744 "psk": "key1", 00:46:52.744 "allow_unrecognized_csi": false, 00:46:52.744 "method": "bdev_nvme_attach_controller", 00:46:52.744 "req_id": 1 00:46:52.744 } 00:46:52.744 Got JSON-RPC error response 00:46:52.744 response: 00:46:52.744 { 00:46:52.744 "code": -5, 00:46:52.744 "message": "Input/output error" 00:46:52.744 } 00:46:52.744 16:04:32 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:46:52.744 16:04:32 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:46:52.744 16:04:32 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:46:52.744 16:04:32 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:46:52.744 16:04:32 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:46:52.744 16:04:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:52.744 16:04:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:52.744 16:04:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:52.744 16:04:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:52.744 16:04:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:53.004 16:04:32 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:46:53.004 16:04:32 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:46:53.004 16:04:32 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:46:53.004 16:04:32 keyring_file -- keyring/common.sh@12 -- # jq -r 
.refcnt 00:46:53.004 16:04:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:53.004 16:04:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:53.004 16:04:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:53.265 16:04:32 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:46:53.265 16:04:32 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:46:53.265 16:04:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:46:53.265 16:04:32 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:46:53.265 16:04:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:46:53.526 16:04:32 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:46:53.526 16:04:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:53.526 16:04:32 keyring_file -- keyring/file.sh@78 -- # jq length 00:46:53.823 16:04:33 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:46:53.823 16:04:33 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.9sEUoibBsX 00:46:53.823 16:04:33 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.9sEUoibBsX 00:46:53.823 16:04:33 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:46:53.823 16:04:33 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.9sEUoibBsX 00:46:53.823 16:04:33 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:46:53.823 16:04:33 keyring_file -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:46:53.823 16:04:33 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:46:53.823 16:04:33 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:46:53.823 16:04:33 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.9sEUoibBsX 00:46:53.823 16:04:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.9sEUoibBsX 00:46:53.823 [2024-10-01 16:04:33.197061] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.9sEUoibBsX': 0100660 00:46:53.823 [2024-10-01 16:04:33.197082] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:46:53.823 request: 00:46:53.823 { 00:46:53.823 "name": "key0", 00:46:53.823 "path": "/tmp/tmp.9sEUoibBsX", 00:46:53.823 "method": "keyring_file_add_key", 00:46:53.823 "req_id": 1 00:46:53.823 } 00:46:53.823 Got JSON-RPC error response 00:46:53.823 response: 00:46:53.823 { 00:46:53.823 "code": -1, 00:46:53.823 "message": "Operation not permitted" 00:46:53.823 } 00:46:53.823 16:04:33 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:46:53.823 16:04:33 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:46:53.823 16:04:33 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:46:53.823 16:04:33 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:46:53.823 16:04:33 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.9sEUoibBsX 00:46:53.823 16:04:33 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.9sEUoibBsX 00:46:53.823 16:04:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.9sEUoibBsX 00:46:54.165 16:04:33 
keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.9sEUoibBsX 00:46:54.165 16:04:33 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:46:54.165 16:04:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:54.165 16:04:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:54.165 16:04:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:54.165 16:04:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:54.165 16:04:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:54.165 16:04:33 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:46:54.165 16:04:33 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:54.165 16:04:33 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:46:54.165 16:04:33 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:54.165 16:04:33 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:46:54.165 16:04:33 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:46:54.165 16:04:33 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:46:54.165 16:04:33 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:46:54.165 16:04:33 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:54.165 16:04:33 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:54.437 [2024-10-01 16:04:33.734425] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.9sEUoibBsX': No such file or directory 00:46:54.437 [2024-10-01 16:04:33.734440] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:46:54.437 [2024-10-01 16:04:33.734454] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:46:54.437 [2024-10-01 16:04:33.734460] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:46:54.437 [2024-10-01 16:04:33.734466] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:46:54.437 [2024-10-01 16:04:33.734471] bdev_nvme.c:6447:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:46:54.437 request: 00:46:54.437 { 00:46:54.437 "name": "nvme0", 00:46:54.437 "trtype": "tcp", 00:46:54.437 "traddr": "127.0.0.1", 00:46:54.437 "adrfam": "ipv4", 00:46:54.437 "trsvcid": "4420", 00:46:54.437 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:54.437 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:54.437 "prchk_reftag": false, 00:46:54.437 "prchk_guard": false, 00:46:54.437 "hdgst": false, 00:46:54.437 "ddgst": false, 00:46:54.437 "psk": "key0", 00:46:54.437 "allow_unrecognized_csi": false, 00:46:54.437 "method": "bdev_nvme_attach_controller", 00:46:54.437 "req_id": 1 00:46:54.437 } 00:46:54.437 Got JSON-RPC error response 00:46:54.437 response: 00:46:54.437 { 00:46:54.437 "code": -19, 00:46:54.437 "message": "No such device" 00:46:54.437 } 00:46:54.437 16:04:33 keyring_file -- common/autotest_common.sh@653 
-- # es=1 00:46:54.437 16:04:33 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:46:54.437 16:04:33 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:46:54.437 16:04:33 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:46:54.437 16:04:33 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:46:54.437 16:04:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:46:54.764 16:04:33 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:46:54.764 16:04:33 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:46:54.764 16:04:33 keyring_file -- keyring/common.sh@17 -- # name=key0 00:46:54.764 16:04:33 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:46:54.764 16:04:33 keyring_file -- keyring/common.sh@17 -- # digest=0 00:46:54.764 16:04:33 keyring_file -- keyring/common.sh@18 -- # mktemp 00:46:54.764 16:04:33 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.G2HOdjk3xC 00:46:54.764 16:04:33 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:46:54.764 16:04:33 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:46:54.764 16:04:33 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:46:54.764 16:04:33 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:46:54.764 16:04:33 keyring_file -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:46:54.764 16:04:33 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:46:54.764 16:04:33 keyring_file -- nvmf/common.sh@729 -- # python - 00:46:54.764 16:04:33 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.G2HOdjk3xC 00:46:54.764 16:04:33 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.G2HOdjk3xC 
00:46:54.764 16:04:33 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.G2HOdjk3xC 00:46:54.764 16:04:33 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.G2HOdjk3xC 00:46:54.764 16:04:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.G2HOdjk3xC 00:46:54.764 16:04:34 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:54.764 16:04:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:55.053 nvme0n1 00:46:55.053 16:04:34 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:46:55.053 16:04:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:55.053 16:04:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:55.053 16:04:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:55.053 16:04:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:55.053 16:04:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:55.314 16:04:34 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:46:55.314 16:04:34 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:46:55.314 16:04:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:46:55.314 16:04:34 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:46:55.314 16:04:34 keyring_file -- 
keyring/file.sh@102 -- # jq -r .removed 00:46:55.314 16:04:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:55.314 16:04:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:55.314 16:04:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:55.575 16:04:34 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:46:55.575 16:04:34 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:46:55.575 16:04:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:55.575 16:04:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:55.575 16:04:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:55.575 16:04:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:55.576 16:04:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:55.836 16:04:35 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:46:55.836 16:04:35 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:46:55.836 16:04:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:46:55.836 16:04:35 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:46:55.836 16:04:35 keyring_file -- keyring/file.sh@105 -- # jq length 00:46:55.836 16:04:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:56.097 16:04:35 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:46:56.097 16:04:35 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.G2HOdjk3xC 00:46:56.097 16:04:35 
keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.G2HOdjk3xC 00:46:56.358 16:04:35 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.FgjAdCCdlG 00:46:56.358 16:04:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.FgjAdCCdlG 00:46:56.358 16:04:35 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:56.358 16:04:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:56.618 nvme0n1 00:46:56.618 16:04:36 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:46:56.618 16:04:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:46:56.879 16:04:36 keyring_file -- keyring/file.sh@113 -- # config='{ 00:46:56.879 "subsystems": [ 00:46:56.879 { 00:46:56.879 "subsystem": "keyring", 00:46:56.879 "config": [ 00:46:56.879 { 00:46:56.879 "method": "keyring_file_add_key", 00:46:56.879 "params": { 00:46:56.879 "name": "key0", 00:46:56.879 "path": "/tmp/tmp.G2HOdjk3xC" 00:46:56.879 } 00:46:56.879 }, 00:46:56.879 { 00:46:56.879 "method": "keyring_file_add_key", 00:46:56.879 "params": { 00:46:56.879 "name": "key1", 00:46:56.879 "path": "/tmp/tmp.FgjAdCCdlG" 00:46:56.879 } 00:46:56.879 } 00:46:56.879 ] 00:46:56.879 }, 00:46:56.879 { 00:46:56.879 "subsystem": "iobuf", 00:46:56.879 "config": [ 00:46:56.879 { 00:46:56.879 "method": "iobuf_set_options", 
00:46:56.879 "params": { 00:46:56.879 "small_pool_count": 8192, 00:46:56.879 "large_pool_count": 1024, 00:46:56.879 "small_bufsize": 8192, 00:46:56.880 "large_bufsize": 135168 00:46:56.880 } 00:46:56.880 } 00:46:56.880 ] 00:46:56.880 }, 00:46:56.880 { 00:46:56.880 "subsystem": "sock", 00:46:56.880 "config": [ 00:46:56.880 { 00:46:56.880 "method": "sock_set_default_impl", 00:46:56.880 "params": { 00:46:56.880 "impl_name": "posix" 00:46:56.880 } 00:46:56.880 }, 00:46:56.880 { 00:46:56.880 "method": "sock_impl_set_options", 00:46:56.880 "params": { 00:46:56.880 "impl_name": "ssl", 00:46:56.880 "recv_buf_size": 4096, 00:46:56.880 "send_buf_size": 4096, 00:46:56.880 "enable_recv_pipe": true, 00:46:56.880 "enable_quickack": false, 00:46:56.880 "enable_placement_id": 0, 00:46:56.880 "enable_zerocopy_send_server": true, 00:46:56.880 "enable_zerocopy_send_client": false, 00:46:56.880 "zerocopy_threshold": 0, 00:46:56.880 "tls_version": 0, 00:46:56.880 "enable_ktls": false 00:46:56.880 } 00:46:56.880 }, 00:46:56.880 { 00:46:56.880 "method": "sock_impl_set_options", 00:46:56.880 "params": { 00:46:56.880 "impl_name": "posix", 00:46:56.880 "recv_buf_size": 2097152, 00:46:56.880 "send_buf_size": 2097152, 00:46:56.880 "enable_recv_pipe": true, 00:46:56.880 "enable_quickack": false, 00:46:56.880 "enable_placement_id": 0, 00:46:56.880 "enable_zerocopy_send_server": true, 00:46:56.880 "enable_zerocopy_send_client": false, 00:46:56.880 "zerocopy_threshold": 0, 00:46:56.880 "tls_version": 0, 00:46:56.880 "enable_ktls": false 00:46:56.880 } 00:46:56.880 } 00:46:56.880 ] 00:46:56.880 }, 00:46:56.880 { 00:46:56.880 "subsystem": "vmd", 00:46:56.880 "config": [] 00:46:56.880 }, 00:46:56.880 { 00:46:56.880 "subsystem": "accel", 00:46:56.880 "config": [ 00:46:56.880 { 00:46:56.880 "method": "accel_set_options", 00:46:56.880 "params": { 00:46:56.880 "small_cache_size": 128, 00:46:56.880 "large_cache_size": 16, 00:46:56.880 "task_count": 2048, 00:46:56.880 "sequence_count": 2048, 00:46:56.880 
"buf_count": 2048 00:46:56.880 } 00:46:56.880 } 00:46:56.880 ] 00:46:56.880 }, 00:46:56.880 { 00:46:56.880 "subsystem": "bdev", 00:46:56.880 "config": [ 00:46:56.880 { 00:46:56.880 "method": "bdev_set_options", 00:46:56.880 "params": { 00:46:56.880 "bdev_io_pool_size": 65535, 00:46:56.880 "bdev_io_cache_size": 256, 00:46:56.880 "bdev_auto_examine": true, 00:46:56.880 "iobuf_small_cache_size": 128, 00:46:56.880 "iobuf_large_cache_size": 16 00:46:56.880 } 00:46:56.880 }, 00:46:56.880 { 00:46:56.880 "method": "bdev_raid_set_options", 00:46:56.880 "params": { 00:46:56.880 "process_window_size_kb": 1024, 00:46:56.880 "process_max_bandwidth_mb_sec": 0 00:46:56.880 } 00:46:56.880 }, 00:46:56.880 { 00:46:56.880 "method": "bdev_iscsi_set_options", 00:46:56.880 "params": { 00:46:56.880 "timeout_sec": 30 00:46:56.880 } 00:46:56.880 }, 00:46:56.880 { 00:46:56.880 "method": "bdev_nvme_set_options", 00:46:56.880 "params": { 00:46:56.880 "action_on_timeout": "none", 00:46:56.880 "timeout_us": 0, 00:46:56.880 "timeout_admin_us": 0, 00:46:56.880 "keep_alive_timeout_ms": 10000, 00:46:56.880 "arbitration_burst": 0, 00:46:56.880 "low_priority_weight": 0, 00:46:56.880 "medium_priority_weight": 0, 00:46:56.880 "high_priority_weight": 0, 00:46:56.880 "nvme_adminq_poll_period_us": 10000, 00:46:56.880 "nvme_ioq_poll_period_us": 0, 00:46:56.880 "io_queue_requests": 512, 00:46:56.880 "delay_cmd_submit": true, 00:46:56.880 "transport_retry_count": 4, 00:46:56.880 "bdev_retry_count": 3, 00:46:56.880 "transport_ack_timeout": 0, 00:46:56.880 "ctrlr_loss_timeout_sec": 0, 00:46:56.880 "reconnect_delay_sec": 0, 00:46:56.880 "fast_io_fail_timeout_sec": 0, 00:46:56.880 "disable_auto_failback": false, 00:46:56.880 "generate_uuids": false, 00:46:56.880 "transport_tos": 0, 00:46:56.880 "nvme_error_stat": false, 00:46:56.880 "rdma_srq_size": 0, 00:46:56.880 "io_path_stat": false, 00:46:56.880 "allow_accel_sequence": false, 00:46:56.880 "rdma_max_cq_size": 0, 00:46:56.880 "rdma_cm_event_timeout_ms": 0, 
00:46:56.880 "dhchap_digests": [ 00:46:56.880 "sha256", 00:46:56.880 "sha384", 00:46:56.880 "sha512" 00:46:56.880 ], 00:46:56.880 "dhchap_dhgroups": [ 00:46:56.880 "null", 00:46:56.880 "ffdhe2048", 00:46:56.880 "ffdhe3072", 00:46:56.880 "ffdhe4096", 00:46:56.880 "ffdhe6144", 00:46:56.880 "ffdhe8192" 00:46:56.880 ] 00:46:56.880 } 00:46:56.880 }, 00:46:56.880 { 00:46:56.880 "method": "bdev_nvme_attach_controller", 00:46:56.880 "params": { 00:46:56.880 "name": "nvme0", 00:46:56.880 "trtype": "TCP", 00:46:56.880 "adrfam": "IPv4", 00:46:56.880 "traddr": "127.0.0.1", 00:46:56.880 "trsvcid": "4420", 00:46:56.880 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:56.880 "prchk_reftag": false, 00:46:56.880 "prchk_guard": false, 00:46:56.880 "ctrlr_loss_timeout_sec": 0, 00:46:56.880 "reconnect_delay_sec": 0, 00:46:56.880 "fast_io_fail_timeout_sec": 0, 00:46:56.880 "psk": "key0", 00:46:56.880 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:56.880 "hdgst": false, 00:46:56.880 "ddgst": false 00:46:56.880 } 00:46:56.880 }, 00:46:56.880 { 00:46:56.880 "method": "bdev_nvme_set_hotplug", 00:46:56.880 "params": { 00:46:56.880 "period_us": 100000, 00:46:56.880 "enable": false 00:46:56.880 } 00:46:56.880 }, 00:46:56.880 { 00:46:56.880 "method": "bdev_wait_for_examine" 00:46:56.880 } 00:46:56.880 ] 00:46:56.880 }, 00:46:56.880 { 00:46:56.880 "subsystem": "nbd", 00:46:56.880 "config": [] 00:46:56.880 } 00:46:56.880 ] 00:46:56.880 }' 00:46:56.880 16:04:36 keyring_file -- keyring/file.sh@115 -- # killprocess 3553742 00:46:56.880 16:04:36 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 3553742 ']' 00:46:56.880 16:04:36 keyring_file -- common/autotest_common.sh@954 -- # kill -0 3553742 00:46:56.880 16:04:36 keyring_file -- common/autotest_common.sh@955 -- # uname 00:46:56.880 16:04:36 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:46:56.880 16:04:36 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3553742 00:46:57.143 16:04:36 
keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:46:57.143 16:04:36 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:46:57.143 16:04:36 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3553742' 00:46:57.143 killing process with pid 3553742 00:46:57.143 16:04:36 keyring_file -- common/autotest_common.sh@969 -- # kill 3553742 00:46:57.143 Received shutdown signal, test time was about 1.000000 seconds 00:46:57.143 00:46:57.143 Latency(us) 00:46:57.143 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:57.143 =================================================================================================================== 00:46:57.143 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:46:57.143 16:04:36 keyring_file -- common/autotest_common.sh@974 -- # wait 3553742 00:46:57.143 16:04:36 keyring_file -- keyring/file.sh@118 -- # bperfpid=3555482 00:46:57.143 16:04:36 keyring_file -- keyring/file.sh@120 -- # waitforlisten 3555482 /var/tmp/bperf.sock 00:46:57.143 16:04:36 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 3555482 ']' 00:46:57.143 16:04:36 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:46:57.143 16:04:36 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:46:57.143 16:04:36 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:46:57.143 16:04:36 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:46:57.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:46:57.143 16:04:36 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:46:57.143 16:04:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:57.143 16:04:36 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:46:57.143 "subsystems": [ 00:46:57.143 { 00:46:57.143 "subsystem": "keyring", 00:46:57.143 "config": [ 00:46:57.143 { 00:46:57.143 "method": "keyring_file_add_key", 00:46:57.143 "params": { 00:46:57.143 "name": "key0", 00:46:57.143 "path": "/tmp/tmp.G2HOdjk3xC" 00:46:57.143 } 00:46:57.143 }, 00:46:57.143 { 00:46:57.143 "method": "keyring_file_add_key", 00:46:57.143 "params": { 00:46:57.143 "name": "key1", 00:46:57.143 "path": "/tmp/tmp.FgjAdCCdlG" 00:46:57.143 } 00:46:57.143 } 00:46:57.143 ] 00:46:57.143 }, 00:46:57.143 { 00:46:57.143 "subsystem": "iobuf", 00:46:57.143 "config": [ 00:46:57.143 { 00:46:57.143 "method": "iobuf_set_options", 00:46:57.143 "params": { 00:46:57.143 "small_pool_count": 8192, 00:46:57.143 "large_pool_count": 1024, 00:46:57.143 "small_bufsize": 8192, 00:46:57.143 "large_bufsize": 135168 00:46:57.143 } 00:46:57.143 } 00:46:57.143 ] 00:46:57.143 }, 00:46:57.143 { 00:46:57.143 "subsystem": "sock", 00:46:57.143 "config": [ 00:46:57.143 { 00:46:57.143 "method": "sock_set_default_impl", 00:46:57.143 "params": { 00:46:57.143 "impl_name": "posix" 00:46:57.143 } 00:46:57.143 }, 00:46:57.143 { 00:46:57.143 "method": "sock_impl_set_options", 00:46:57.143 "params": { 00:46:57.143 "impl_name": "ssl", 00:46:57.143 "recv_buf_size": 4096, 00:46:57.143 "send_buf_size": 4096, 00:46:57.143 "enable_recv_pipe": true, 00:46:57.143 "enable_quickack": false, 00:46:57.143 "enable_placement_id": 0, 00:46:57.143 "enable_zerocopy_send_server": true, 00:46:57.143 "enable_zerocopy_send_client": false, 00:46:57.143 "zerocopy_threshold": 0, 00:46:57.143 "tls_version": 0, 00:46:57.143 "enable_ktls": false 00:46:57.143 } 00:46:57.143 }, 00:46:57.143 { 00:46:57.143 "method": "sock_impl_set_options", 00:46:57.143 "params": { 00:46:57.143 
"impl_name": "posix", 00:46:57.143 "recv_buf_size": 2097152, 00:46:57.143 "send_buf_size": 2097152, 00:46:57.143 "enable_recv_pipe": true, 00:46:57.143 "enable_quickack": false, 00:46:57.143 "enable_placement_id": 0, 00:46:57.143 "enable_zerocopy_send_server": true, 00:46:57.143 "enable_zerocopy_send_client": false, 00:46:57.143 "zerocopy_threshold": 0, 00:46:57.143 "tls_version": 0, 00:46:57.143 "enable_ktls": false 00:46:57.143 } 00:46:57.143 } 00:46:57.143 ] 00:46:57.143 }, 00:46:57.143 { 00:46:57.143 "subsystem": "vmd", 00:46:57.143 "config": [] 00:46:57.143 }, 00:46:57.143 { 00:46:57.143 "subsystem": "accel", 00:46:57.143 "config": [ 00:46:57.143 { 00:46:57.143 "method": "accel_set_options", 00:46:57.143 "params": { 00:46:57.143 "small_cache_size": 128, 00:46:57.143 "large_cache_size": 16, 00:46:57.143 "task_count": 2048, 00:46:57.143 "sequence_count": 2048, 00:46:57.143 "buf_count": 2048 00:46:57.143 } 00:46:57.143 } 00:46:57.143 ] 00:46:57.143 }, 00:46:57.143 { 00:46:57.143 "subsystem": "bdev", 00:46:57.143 "config": [ 00:46:57.143 { 00:46:57.143 "method": "bdev_set_options", 00:46:57.143 "params": { 00:46:57.143 "bdev_io_pool_size": 65535, 00:46:57.143 "bdev_io_cache_size": 256, 00:46:57.143 "bdev_auto_examine": true, 00:46:57.143 "iobuf_small_cache_size": 128, 00:46:57.143 "iobuf_large_cache_size": 16 00:46:57.143 } 00:46:57.143 }, 00:46:57.143 { 00:46:57.143 "method": "bdev_raid_set_options", 00:46:57.143 "params": { 00:46:57.143 "process_window_size_kb": 1024, 00:46:57.143 "process_max_bandwidth_mb_sec": 0 00:46:57.143 } 00:46:57.143 }, 00:46:57.143 { 00:46:57.143 "method": "bdev_iscsi_set_options", 00:46:57.143 "params": { 00:46:57.143 "timeout_sec": 30 00:46:57.143 } 00:46:57.143 }, 00:46:57.143 { 00:46:57.143 "method": "bdev_nvme_set_options", 00:46:57.143 "params": { 00:46:57.143 "action_on_timeout": "none", 00:46:57.143 "timeout_us": 0, 00:46:57.143 "timeout_admin_us": 0, 00:46:57.143 "keep_alive_timeout_ms": 10000, 00:46:57.143 "arbitration_burst": 
0, 00:46:57.143 "low_priority_weight": 0, 00:46:57.143 "medium_priority_weight": 0, 00:46:57.143 "high_priority_weight": 0, 00:46:57.143 "nvme_adminq_poll_period_us": 10000, 00:46:57.143 "nvme_ioq_poll_period_us": 0, 00:46:57.143 "io_queue_requests": 512, 00:46:57.143 "delay_cmd_submit": true, 00:46:57.143 "transport_retry_count": 4, 00:46:57.143 "bdev_retry_count": 3, 00:46:57.143 "transport_ack_timeout": 0, 00:46:57.143 "ctrlr_loss_timeout_sec": 0, 00:46:57.143 "reconnect_delay_sec": 0, 00:46:57.143 "fast_io_fail_timeout_sec": 0, 00:46:57.143 "disable_auto_failback": false, 00:46:57.143 "generate_uuids": false, 00:46:57.143 "transport_tos": 0, 00:46:57.143 "nvme_error_stat": false, 00:46:57.143 "rdma_srq_size": 0, 00:46:57.143 "io_path_stat": false, 00:46:57.143 "allow_accel_sequence": false, 00:46:57.143 "rdma_max_cq_size": 0, 00:46:57.143 "rdma_cm_event_timeout_ms": 0, 00:46:57.143 "dhchap_digests": [ 00:46:57.143 "sha256", 00:46:57.143 "sha384", 00:46:57.143 "sha512" 00:46:57.143 ], 00:46:57.143 "dhchap_dhgroups": [ 00:46:57.143 "null", 00:46:57.143 "ffdhe2048", 00:46:57.144 "ffdhe3072", 00:46:57.144 "ffdhe4096", 00:46:57.144 "ffdhe6144", 00:46:57.144 "ffdhe8192" 00:46:57.144 ] 00:46:57.144 } 00:46:57.144 }, 00:46:57.144 { 00:46:57.144 "method": "bdev_nvme_attach_controller", 00:46:57.144 "params": { 00:46:57.144 "name": "nvme0", 00:46:57.144 "trtype": "TCP", 00:46:57.144 "adrfam": "IPv4", 00:46:57.144 "traddr": "127.0.0.1", 00:46:57.144 "trsvcid": "4420", 00:46:57.144 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:57.144 "prchk_reftag": false, 00:46:57.144 "prchk_guard": false, 00:46:57.144 "ctrlr_loss_timeout_sec": 0, 00:46:57.144 "reconnect_delay_sec": 0, 00:46:57.144 "fast_io_fail_timeout_sec": 0, 00:46:57.144 "psk": "key0", 00:46:57.144 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:57.144 "hdgst": false, 00:46:57.144 "ddgst": false 00:46:57.144 } 00:46:57.144 }, 00:46:57.144 { 00:46:57.144 "method": "bdev_nvme_set_hotplug", 00:46:57.144 "params": { 
00:46:57.144 "period_us": 100000, 00:46:57.144 "enable": false 00:46:57.144 } 00:46:57.144 }, 00:46:57.144 { 00:46:57.144 "method": "bdev_wait_for_examine" 00:46:57.144 } 00:46:57.144 ] 00:46:57.144 }, 00:46:57.144 { 00:46:57.144 "subsystem": "nbd", 00:46:57.144 "config": [] 00:46:57.144 } 00:46:57.144 ] 00:46:57.144 }' 00:46:57.144 [2024-10-01 16:04:36.512421] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:46:57.144 [2024-10-01 16:04:36.512480] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3555482 ] 00:46:57.144 [2024-10-01 16:04:36.542162] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:46:57.144 [2024-10-01 16:04:36.590418] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:57.406 [2024-10-01 16:04:36.618829] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:46:57.406 [2024-10-01 16:04:36.756351] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:46:57.977 16:04:37 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:46:57.977 16:04:37 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:46:57.977 16:04:37 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:46:57.977 16:04:37 keyring_file -- keyring/file.sh@121 -- # jq length 00:46:57.977 16:04:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:58.238 16:04:37 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:46:58.238 16:04:37 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:46:58.238 16:04:37 keyring_file -- keyring/common.sh@12 
-- # jq -r .refcnt 00:46:58.238 16:04:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:58.238 16:04:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:58.238 16:04:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:58.238 16:04:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:58.238 16:04:37 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:46:58.238 16:04:37 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:46:58.238 16:04:37 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:46:58.238 16:04:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:58.238 16:04:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:58.238 16:04:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:58.238 16:04:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:58.499 16:04:37 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:46:58.499 16:04:37 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:46:58.499 16:04:37 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:46:58.499 16:04:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:46:58.761 16:04:38 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:46:58.761 16:04:38 keyring_file -- keyring/file.sh@1 -- # cleanup 00:46:58.761 16:04:38 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.G2HOdjk3xC /tmp/tmp.FgjAdCCdlG 00:46:58.761 16:04:38 keyring_file -- keyring/file.sh@20 -- # killprocess 3555482 00:46:58.761 16:04:38 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 3555482 ']' 
00:46:58.761 16:04:38 keyring_file -- common/autotest_common.sh@954 -- # kill -0 3555482 00:46:58.761 16:04:38 keyring_file -- common/autotest_common.sh@955 -- # uname 00:46:58.761 16:04:38 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:46:58.761 16:04:38 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3555482 00:46:58.761 16:04:38 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:46:58.761 16:04:38 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:46:58.761 16:04:38 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3555482' 00:46:58.761 killing process with pid 3555482 00:46:58.761 16:04:38 keyring_file -- common/autotest_common.sh@969 -- # kill 3555482 00:46:58.761 Received shutdown signal, test time was about 1.000000 seconds 00:46:58.761 00:46:58.761 Latency(us) 00:46:58.761 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:58.761 =================================================================================================================== 00:46:58.761 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:46:58.761 16:04:38 keyring_file -- common/autotest_common.sh@974 -- # wait 3555482 00:46:58.761 16:04:38 keyring_file -- keyring/file.sh@21 -- # killprocess 3553691 00:46:58.761 16:04:38 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 3553691 ']' 00:46:58.761 16:04:38 keyring_file -- common/autotest_common.sh@954 -- # kill -0 3553691 00:46:58.761 16:04:38 keyring_file -- common/autotest_common.sh@955 -- # uname 00:46:58.761 16:04:38 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:46:59.022 16:04:38 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3553691 00:46:59.022 16:04:38 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:46:59.022 16:04:38 keyring_file -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:46:59.022 16:04:38 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3553691' 00:46:59.022 killing process with pid 3553691 00:46:59.022 16:04:38 keyring_file -- common/autotest_common.sh@969 -- # kill 3553691 00:46:59.022 16:04:38 keyring_file -- common/autotest_common.sh@974 -- # wait 3553691 00:46:59.022 00:46:59.022 real 0m11.440s 00:46:59.022 user 0m28.330s 00:46:59.022 sys 0m2.593s 00:46:59.022 16:04:38 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:46:59.022 16:04:38 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:59.022 ************************************ 00:46:59.022 END TEST keyring_file 00:46:59.022 ************************************ 00:46:59.283 16:04:38 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:46:59.283 16:04:38 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:46:59.283 16:04:38 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:46:59.283 16:04:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:46:59.283 16:04:38 -- common/autotest_common.sh@10 -- # set +x 00:46:59.283 ************************************ 00:46:59.283 START TEST keyring_linux 00:46:59.283 ************************************ 00:46:59.283 16:04:38 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:46:59.283 Joined session keyring: 566997605 00:46:59.283 * Looking for test storage... 
00:46:59.283 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:46:59.283 16:04:38 keyring_linux -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:46:59.283 16:04:38 keyring_linux -- common/autotest_common.sh@1681 -- # lcov --version 00:46:59.283 16:04:38 keyring_linux -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:46:59.283 16:04:38 keyring_linux -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:46:59.283 16:04:38 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:46:59.283 16:04:38 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:59.545 16:04:38 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:59.545 16:04:38 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:46:59.545 16:04:38 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:46:59.545 16:04:38 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:46:59.545 16:04:38 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:46:59.545 16:04:38 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:46:59.545 16:04:38 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:46:59.545 16:04:38 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:46:59.545 16:04:38 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:46:59.545 16:04:38 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:46:59.545 16:04:38 keyring_linux -- scripts/common.sh@345 -- # : 1 00:46:59.545 16:04:38 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:46:59.545 16:04:38 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:46:59.545 16:04:38 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:46:59.545 16:04:38 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:46:59.545 16:04:38 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:59.545 16:04:38 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:46:59.545 16:04:38 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:46:59.545 16:04:38 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:46:59.545 16:04:38 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:46:59.545 16:04:38 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:59.545 16:04:38 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:46:59.545 16:04:38 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:46:59.545 16:04:38 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:59.545 16:04:38 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:59.545 16:04:38 keyring_linux -- scripts/common.sh@368 -- # return 0 00:46:59.545 16:04:38 keyring_linux -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:59.545 16:04:38 keyring_linux -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:46:59.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:59.545 --rc genhtml_branch_coverage=1 00:46:59.545 --rc genhtml_function_coverage=1 00:46:59.545 --rc genhtml_legend=1 00:46:59.545 --rc geninfo_all_blocks=1 00:46:59.545 --rc geninfo_unexecuted_blocks=1 00:46:59.545 00:46:59.545 ' 00:46:59.545 16:04:38 keyring_linux -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:46:59.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:59.545 --rc genhtml_branch_coverage=1 00:46:59.545 --rc genhtml_function_coverage=1 00:46:59.545 --rc genhtml_legend=1 00:46:59.545 --rc geninfo_all_blocks=1 00:46:59.545 --rc geninfo_unexecuted_blocks=1 00:46:59.545 00:46:59.545 ' 
00:46:59.545 16:04:38 keyring_linux -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:46:59.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:59.545 --rc genhtml_branch_coverage=1 00:46:59.545 --rc genhtml_function_coverage=1 00:46:59.545 --rc genhtml_legend=1 00:46:59.545 --rc geninfo_all_blocks=1 00:46:59.545 --rc geninfo_unexecuted_blocks=1 00:46:59.545 00:46:59.545 ' 00:46:59.545 16:04:38 keyring_linux -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:46:59.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:59.545 --rc genhtml_branch_coverage=1 00:46:59.545 --rc genhtml_function_coverage=1 00:46:59.545 --rc genhtml_legend=1 00:46:59.545 --rc geninfo_all_blocks=1 00:46:59.545 --rc geninfo_unexecuted_blocks=1 00:46:59.545 00:46:59.545 ' 00:46:59.545 16:04:38 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:46:59.545 16:04:38 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:46:59.545 16:04:38 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:46:59.545 16:04:38 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:59.545 16:04:38 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:59.545 16:04:38 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:59.545 16:04:38 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:59.545 16:04:38 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:46:59.545 16:04:38 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:59.545 16:04:38 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:59.545 16:04:38 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:59.545 16:04:38 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:59.545 16:04:38 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:46:59.545 16:04:38 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:46:59.545 16:04:38 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:46:59.545 16:04:38 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:59.545 16:04:38 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:59.545 16:04:38 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:46:59.545 16:04:38 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:59.545 16:04:38 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:46:59.545 16:04:38 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:46:59.545 16:04:38 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:59.545 16:04:38 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:59.545 16:04:38 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:59.546 16:04:38 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:59.546 16:04:38 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:59.546 16:04:38 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:59.546 16:04:38 keyring_linux -- paths/export.sh@5 -- # export PATH 00:46:59.546 16:04:38 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:59.546 16:04:38 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:46:59.546 16:04:38 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:46:59.546 16:04:38 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:46:59.546 16:04:38 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:59.546 16:04:38 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:59.546 16:04:38 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:59.546 16:04:38 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:46:59.546 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:46:59.546 16:04:38 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:46:59.546 16:04:38 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:46:59.546 16:04:38 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:46:59.546 16:04:38 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:46:59.546 16:04:38 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:46:59.546 16:04:38 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:46:59.546 16:04:38 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:46:59.546 16:04:38 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:46:59.546 16:04:38 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:46:59.546 16:04:38 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:46:59.546 16:04:38 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:46:59.546 16:04:38 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:46:59.546 16:04:38 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:46:59.546 16:04:38 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:46:59.546 16:04:38 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:46:59.546 16:04:38 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:46:59.546 16:04:38 keyring_linux -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:46:59.546 16:04:38 keyring_linux -- nvmf/common.sh@726 -- # local prefix key digest 00:46:59.546 16:04:38 keyring_linux -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:46:59.546 16:04:38 keyring_linux -- nvmf/common.sh@728 -- # 
key=00112233445566778899aabbccddeeff 00:46:59.546 16:04:38 keyring_linux -- nvmf/common.sh@728 -- # digest=0 00:46:59.546 16:04:38 keyring_linux -- nvmf/common.sh@729 -- # python - 00:46:59.546 16:04:38 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:46:59.546 16:04:38 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:46:59.546 /tmp/:spdk-test:key0 00:46:59.546 16:04:38 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:46:59.546 16:04:38 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:46:59.546 16:04:38 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:46:59.546 16:04:38 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:46:59.546 16:04:38 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:46:59.546 16:04:38 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:46:59.546 16:04:38 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:46:59.546 16:04:38 keyring_linux -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:46:59.546 16:04:38 keyring_linux -- nvmf/common.sh@726 -- # local prefix key digest 00:46:59.546 16:04:38 keyring_linux -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:46:59.546 16:04:38 keyring_linux -- nvmf/common.sh@728 -- # key=112233445566778899aabbccddeeff00 00:46:59.546 16:04:38 keyring_linux -- nvmf/common.sh@728 -- # digest=0 00:46:59.546 16:04:38 keyring_linux -- nvmf/common.sh@729 -- # python - 00:46:59.546 16:04:38 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:46:59.546 16:04:38 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:46:59.546 /tmp/:spdk-test:key1 00:46:59.546 16:04:38 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3555993 00:46:59.546 16:04:38 keyring_linux -- keyring/linux.sh@53 -- # 
waitforlisten 3555993 00:46:59.546 16:04:38 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:46:59.546 16:04:38 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 3555993 ']' 00:46:59.546 16:04:38 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:59.546 16:04:38 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:46:59.546 16:04:38 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:59.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:59.546 16:04:38 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:46:59.546 16:04:38 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:46:59.546 [2024-10-01 16:04:38.948303] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:46:59.546 [2024-10-01 16:04:38.948382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3555993 ] 00:46:59.546 [2024-10-01 16:04:38.982369] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:46:59.805 [2024-10-01 16:04:39.031146] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:59.805 [2024-10-01 16:04:39.064891] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:47:00.371 16:04:39 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:47:00.371 16:04:39 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:47:00.372 16:04:39 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:47:00.372 16:04:39 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:00.372 16:04:39 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:47:00.372 [2024-10-01 16:04:39.737848] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:00.372 null0 00:47:00.372 [2024-10-01 16:04:39.769813] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:47:00.372 [2024-10-01 16:04:39.770172] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:47:00.372 16:04:39 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:00.372 16:04:39 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:47:00.372 39876110 00:47:00.372 16:04:39 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:47:00.372 413953519 00:47:00.372 16:04:39 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3556106 00:47:00.372 16:04:39 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3556106 /var/tmp/bperf.sock 00:47:00.372 16:04:39 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:47:00.372 16:04:39 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 3556106 ']' 00:47:00.372 16:04:39 keyring_linux -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:47:00.372 16:04:39 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:47:00.372 16:04:39 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:47:00.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:47:00.372 16:04:39 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:47:00.372 16:04:39 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:47:00.631 [2024-10-01 16:04:39.847018] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:47:00.631 [2024-10-01 16:04:39.847068] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3556106 ] 00:47:00.631 [2024-10-01 16:04:39.876807] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:47:00.631 [2024-10-01 16:04:39.923298] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:00.631 [2024-10-01 16:04:39.951948] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:47:00.631 16:04:39 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:47:00.631 16:04:39 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:47:00.631 16:04:39 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:47:00.631 16:04:39 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:47:00.891 16:04:40 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:47:00.891 16:04:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:47:01.150 16:04:40 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:47:01.150 16:04:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:47:01.150 [2024-10-01 16:04:40.517239] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:47:01.150 nvme0n1 00:47:01.410 16:04:40 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:47:01.410 16:04:40 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:47:01.410 16:04:40 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:47:01.410 16:04:40 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 
00:47:01.410 16:04:40 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:47:01.410 16:04:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:01.410 16:04:40 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:47:01.410 16:04:40 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:47:01.410 16:04:40 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:47:01.410 16:04:40 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:47:01.410 16:04:40 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:01.410 16:04:40 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:47:01.410 16:04:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:01.670 16:04:40 keyring_linux -- keyring/linux.sh@25 -- # sn=39876110 00:47:01.670 16:04:40 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:47:01.670 16:04:40 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:47:01.670 16:04:40 keyring_linux -- keyring/linux.sh@26 -- # [[ 39876110 == \3\9\8\7\6\1\1\0 ]] 00:47:01.670 16:04:40 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 39876110 00:47:01.670 16:04:40 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:47:01.670 16:04:40 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:47:01.670 Running I/O for 1 seconds... 
00:47:03.063 24535.00 IOPS, 95.84 MiB/s 00:47:03.063 Latency(us) 00:47:03.063 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:03.063 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:47:03.063 nvme0n1 : 1.01 24534.44 95.84 0.00 0.00 5201.66 2785.28 7154.35 00:47:03.063 =================================================================================================================== 00:47:03.063 Total : 24534.44 95.84 0.00 0.00 5201.66 2785.28 7154.35 00:47:03.063 { 00:47:03.063 "results": [ 00:47:03.063 { 00:47:03.063 "job": "nvme0n1", 00:47:03.063 "core_mask": "0x2", 00:47:03.063 "workload": "randread", 00:47:03.063 "status": "finished", 00:47:03.063 "queue_depth": 128, 00:47:03.063 "io_size": 4096, 00:47:03.063 "runtime": 1.00524, 00:47:03.063 "iops": 24534.439536827027, 00:47:03.063 "mibps": 95.83765444073057, 00:47:03.063 "io_failed": 0, 00:47:03.063 "io_timeout": 0, 00:47:03.063 "avg_latency_us": 5201.657595588533, 00:47:03.063 "min_latency_us": 2785.28, 00:47:03.063 "max_latency_us": 7154.346666666666 00:47:03.063 } 00:47:03.063 ], 00:47:03.063 "core_count": 1 00:47:03.063 } 00:47:03.063 16:04:42 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:47:03.063 16:04:42 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:47:03.063 16:04:42 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:47:03.063 16:04:42 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:47:03.063 16:04:42 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:47:03.063 16:04:42 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:47:03.063 16:04:42 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:03.063 16:04:42 keyring_linux -- 
keyring/linux.sh@22 -- # jq length 00:47:03.063 16:04:42 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:47:03.063 16:04:42 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:47:03.063 16:04:42 keyring_linux -- keyring/linux.sh@23 -- # return 00:47:03.063 16:04:42 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:47:03.063 16:04:42 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:47:03.063 16:04:42 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:47:03.063 16:04:42 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:47:03.063 16:04:42 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:47:03.063 16:04:42 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:47:03.063 16:04:42 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:47:03.063 16:04:42 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:47:03.063 16:04:42 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:47:03.322 [2024-10-01 16:04:42.657466] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:47:03.322 [2024-10-01 16:04:42.658121] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15aae60 (107): Transport endpoint is not connected 00:47:03.322 [2024-10-01 16:04:42.659117] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15aae60 (9): Bad file descriptor 00:47:03.322 [2024-10-01 16:04:42.660118] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:47:03.322 [2024-10-01 16:04:42.660132] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:47:03.322 [2024-10-01 16:04:42.660138] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:47:03.322 [2024-10-01 16:04:42.660144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:47:03.322 request: 00:47:03.322 { 00:47:03.322 "name": "nvme0", 00:47:03.322 "trtype": "tcp", 00:47:03.322 "traddr": "127.0.0.1", 00:47:03.322 "adrfam": "ipv4", 00:47:03.322 "trsvcid": "4420", 00:47:03.322 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:47:03.322 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:47:03.322 "prchk_reftag": false, 00:47:03.322 "prchk_guard": false, 00:47:03.322 "hdgst": false, 00:47:03.322 "ddgst": false, 00:47:03.322 "psk": ":spdk-test:key1", 00:47:03.322 "allow_unrecognized_csi": false, 00:47:03.322 "method": "bdev_nvme_attach_controller", 00:47:03.322 "req_id": 1 00:47:03.322 } 00:47:03.322 Got JSON-RPC error response 00:47:03.322 response: 00:47:03.322 { 00:47:03.322 "code": -5, 00:47:03.322 "message": "Input/output error" 00:47:03.322 } 00:47:03.322 16:04:42 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:47:03.322 16:04:42 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:47:03.322 16:04:42 keyring_linux -- common/autotest_common.sh@672 -- 
# [[ -n '' ]] 00:47:03.322 16:04:42 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:47:03.322 16:04:42 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:47:03.322 16:04:42 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:47:03.322 16:04:42 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:47:03.322 16:04:42 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:47:03.322 16:04:42 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:47:03.322 16:04:42 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:47:03.322 16:04:42 keyring_linux -- keyring/linux.sh@33 -- # sn=39876110 00:47:03.322 16:04:42 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 39876110 00:47:03.322 1 links removed 00:47:03.322 16:04:42 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:47:03.322 16:04:42 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:47:03.322 16:04:42 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:47:03.322 16:04:42 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:47:03.322 16:04:42 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:47:03.322 16:04:42 keyring_linux -- keyring/linux.sh@33 -- # sn=413953519 00:47:03.322 16:04:42 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 413953519 00:47:03.322 1 links removed 00:47:03.322 16:04:42 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3556106 00:47:03.322 16:04:42 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 3556106 ']' 00:47:03.322 16:04:42 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 3556106 00:47:03.322 16:04:42 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:47:03.322 16:04:42 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:47:03.322 16:04:42 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3556106 
00:47:03.322 16:04:42 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:47:03.322 16:04:42 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:47:03.322 16:04:42 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3556106' 00:47:03.322 killing process with pid 3556106 00:47:03.322 16:04:42 keyring_linux -- common/autotest_common.sh@969 -- # kill 3556106 00:47:03.322 Received shutdown signal, test time was about 1.000000 seconds 00:47:03.322 00:47:03.322 Latency(us) 00:47:03.322 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:03.322 =================================================================================================================== 00:47:03.322 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:47:03.322 16:04:42 keyring_linux -- common/autotest_common.sh@974 -- # wait 3556106 00:47:03.582 16:04:42 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3555993 00:47:03.582 16:04:42 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 3555993 ']' 00:47:03.582 16:04:42 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 3555993 00:47:03.582 16:04:42 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:47:03.582 16:04:42 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:47:03.582 16:04:42 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3555993 00:47:03.582 16:04:42 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:47:03.582 16:04:42 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:47:03.582 16:04:42 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3555993' 00:47:03.582 killing process with pid 3555993 00:47:03.582 16:04:42 keyring_linux -- common/autotest_common.sh@969 -- # kill 3555993 00:47:03.582 16:04:42 keyring_linux -- common/autotest_common.sh@974 -- # wait 3555993 
00:47:03.841 00:47:03.841 real 0m4.578s 00:47:03.841 user 0m8.302s 00:47:03.841 sys 0m1.444s 00:47:03.841 16:04:43 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:47:03.841 16:04:43 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:47:03.841 ************************************ 00:47:03.841 END TEST keyring_linux 00:47:03.841 ************************************ 00:47:03.841 16:04:43 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:47:03.841 16:04:43 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:47:03.841 16:04:43 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:47:03.841 16:04:43 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:47:03.841 16:04:43 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:47:03.841 16:04:43 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:47:03.841 16:04:43 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:47:03.841 16:04:43 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:47:03.841 16:04:43 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:47:03.841 16:04:43 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:47:03.841 16:04:43 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:47:03.841 16:04:43 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:47:03.841 16:04:43 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:47:03.841 16:04:43 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:47:03.841 16:04:43 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:47:03.841 16:04:43 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:47:03.841 16:04:43 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:47:03.841 16:04:43 -- common/autotest_common.sh@724 -- # xtrace_disable 00:47:03.841 16:04:43 -- common/autotest_common.sh@10 -- # set +x 00:47:03.841 16:04:43 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:47:03.841 16:04:43 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:47:03.841 16:04:43 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:47:03.841 16:04:43 -- common/autotest_common.sh@10 -- # set +x 00:47:11.981 
INFO: APP EXITING 00:47:11.981 INFO: killing all VMs 00:47:11.981 INFO: killing vhost app 00:47:11.981 INFO: EXIT DONE 00:47:15.284 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:47:15.284 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:47:15.284 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:47:15.284 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:47:15.284 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:47:15.284 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:47:15.284 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:47:15.284 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:47:15.284 0000:65:00.0 (144d a80a): Already using the nvme driver 00:47:15.284 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:47:15.284 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:47:15.284 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:47:15.284 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:47:15.284 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:47:15.284 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:47:15.284 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:47:15.284 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:47:19.491 Cleaning 00:47:19.491 Removing: /var/run/dpdk/spdk0/config 00:47:19.491 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:47:19.491 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:47:19.491 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:47:19.491 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:47:19.491 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:47:19.491 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:47:19.491 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:47:19.491 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:47:19.491 Removing: 
/var/run/dpdk/spdk0/fbarray_memzone 00:47:19.491 Removing: /var/run/dpdk/spdk0/hugepage_info 00:47:19.491 Removing: /var/run/dpdk/spdk1/config 00:47:19.491 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:47:19.491 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:47:19.491 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:47:19.491 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:47:19.491 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:47:19.491 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:47:19.491 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:47:19.491 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:47:19.491 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:47:19.491 Removing: /var/run/dpdk/spdk1/hugepage_info 00:47:19.491 Removing: /var/run/dpdk/spdk2/config 00:47:19.491 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:47:19.491 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:47:19.491 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:47:19.491 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:47:19.491 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:47:19.491 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:47:19.491 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:47:19.491 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:47:19.491 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:47:19.491 Removing: /var/run/dpdk/spdk2/hugepage_info 00:47:19.491 Removing: /var/run/dpdk/spdk3/config 00:47:19.491 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:47:19.491 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:47:19.491 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:47:19.491 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:47:19.491 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:47:19.491 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:47:19.491 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:47:19.491 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:47:19.491 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:47:19.491 Removing: /var/run/dpdk/spdk3/hugepage_info 00:47:19.491 Removing: /var/run/dpdk/spdk4/config 00:47:19.491 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:47:19.491 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:47:19.491 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:47:19.491 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:47:19.491 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:47:19.491 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:47:19.491 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:47:19.491 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:47:19.491 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:47:19.491 Removing: /var/run/dpdk/spdk4/hugepage_info 00:47:19.491 Removing: /dev/shm/bdev_svc_trace.1 00:47:19.491 Removing: /dev/shm/nvmf_trace.0 00:47:19.491 Removing: /dev/shm/spdk_tgt_trace.pid2878051 00:47:19.491 Removing: /var/run/dpdk/spdk0 00:47:19.491 Removing: /var/run/dpdk/spdk1 00:47:19.491 Removing: /var/run/dpdk/spdk2 00:47:19.491 Removing: /var/run/dpdk/spdk3 00:47:19.491 Removing: /var/run/dpdk/spdk4 00:47:19.491 Removing: /var/run/dpdk/spdk_pid2876544 00:47:19.491 Removing: /var/run/dpdk/spdk_pid2878051 00:47:19.491 Removing: /var/run/dpdk/spdk_pid2878898 00:47:19.491 Removing: /var/run/dpdk/spdk_pid2879941 00:47:19.491 Removing: /var/run/dpdk/spdk_pid2880284 00:47:19.491 Removing: /var/run/dpdk/spdk_pid2881353 00:47:19.491 Removing: /var/run/dpdk/spdk_pid2881578 00:47:19.491 Removing: /var/run/dpdk/spdk_pid2881820 00:47:19.491 Removing: /var/run/dpdk/spdk_pid2882958 00:47:19.491 Removing: /var/run/dpdk/spdk_pid2883671 00:47:19.491 Removing: /var/run/dpdk/spdk_pid2884031 00:47:19.491 Removing: /var/run/dpdk/spdk_pid2884372 00:47:19.491 Removing: /var/run/dpdk/spdk_pid2884720 
00:47:19.491 Removing: /var/run/dpdk/spdk_pid2885053 00:47:19.491 Removing: /var/run/dpdk/spdk_pid2885388 00:47:19.491 Removing: /var/run/dpdk/spdk_pid2885739 00:47:19.491 Removing: /var/run/dpdk/spdk_pid2886094 00:47:19.491 Removing: /var/run/dpdk/spdk_pid2887193 00:47:19.491 Removing: /var/run/dpdk/spdk_pid2890673 00:47:19.491 Removing: /var/run/dpdk/spdk_pid2891000 00:47:19.491 Removing: /var/run/dpdk/spdk_pid2891293 00:47:19.491 Removing: /var/run/dpdk/spdk_pid2891532 00:47:19.491 Removing: /var/run/dpdk/spdk_pid2891909 00:47:19.491 Removing: /var/run/dpdk/spdk_pid2892235 00:47:19.491 Removing: /var/run/dpdk/spdk_pid2892612 00:47:19.491 Removing: /var/run/dpdk/spdk_pid2892706 00:47:19.491 Removing: /var/run/dpdk/spdk_pid2893008 00:47:19.491 Removing: /var/run/dpdk/spdk_pid2893324 00:47:19.491 Removing: /var/run/dpdk/spdk_pid2893438 00:47:19.491 Removing: /var/run/dpdk/spdk_pid2893701 00:47:19.491 Removing: /var/run/dpdk/spdk_pid2894151 00:47:19.491 Removing: /var/run/dpdk/spdk_pid2894500 00:47:19.491 Removing: /var/run/dpdk/spdk_pid2894900 00:47:19.491 Removing: /var/run/dpdk/spdk_pid2899542 00:47:19.491 Removing: /var/run/dpdk/spdk_pid2904938 00:47:19.491 Removing: /var/run/dpdk/spdk_pid2917633 00:47:19.491 Removing: /var/run/dpdk/spdk_pid2918467 00:47:19.491 Removing: /var/run/dpdk/spdk_pid2923763 00:47:19.491 Removing: /var/run/dpdk/spdk_pid2924133 00:47:19.491 Removing: /var/run/dpdk/spdk_pid2929569 00:47:19.491 Removing: /var/run/dpdk/spdk_pid2936720 00:47:19.491 Removing: /var/run/dpdk/spdk_pid2939819 00:47:19.491 Removing: /var/run/dpdk/spdk_pid2952580 00:47:19.491 Removing: /var/run/dpdk/spdk_pid2964247 00:47:19.491 Removing: /var/run/dpdk/spdk_pid2966309 00:47:19.491 Removing: /var/run/dpdk/spdk_pid2967593 00:47:19.491 Removing: /var/run/dpdk/spdk_pid2988693 00:47:19.491 Removing: /var/run/dpdk/spdk_pid2993598 00:47:19.491 Removing: /var/run/dpdk/spdk_pid3094292 00:47:19.491 Removing: /var/run/dpdk/spdk_pid3100895 00:47:19.491 Removing: 
/var/run/dpdk/spdk_pid3108598 00:47:19.491 Removing: /var/run/dpdk/spdk_pid3115975 00:47:19.491 Removing: /var/run/dpdk/spdk_pid3115978 00:47:19.491 Removing: /var/run/dpdk/spdk_pid3116979 00:47:19.491 Removing: /var/run/dpdk/spdk_pid3117982 00:47:19.491 Removing: /var/run/dpdk/spdk_pid3118986 00:47:19.491 Removing: /var/run/dpdk/spdk_pid3119661 00:47:19.491 Removing: /var/run/dpdk/spdk_pid3119664 00:47:19.491 Removing: /var/run/dpdk/spdk_pid3119996 00:47:19.491 Removing: /var/run/dpdk/spdk_pid3120032 00:47:19.491 Removing: /var/run/dpdk/spdk_pid3120162 00:47:19.491 Removing: /var/run/dpdk/spdk_pid3121290 00:47:19.491 Removing: /var/run/dpdk/spdk_pid3122294 00:47:19.491 Removing: /var/run/dpdk/spdk_pid3123356 00:47:19.491 Removing: /var/run/dpdk/spdk_pid3123976 00:47:19.491 Removing: /var/run/dpdk/spdk_pid3124030 00:47:19.491 Removing: /var/run/dpdk/spdk_pid3124309 00:47:19.491 Removing: /var/run/dpdk/spdk_pid3125650 00:47:19.491 Removing: /var/run/dpdk/spdk_pid3126885 00:47:19.491 Removing: /var/run/dpdk/spdk_pid3136921 00:47:19.491 Removing: /var/run/dpdk/spdk_pid3171451 00:47:19.491 Removing: /var/run/dpdk/spdk_pid3177082 00:47:19.491 Removing: /var/run/dpdk/spdk_pid3178945 00:47:19.491 Removing: /var/run/dpdk/spdk_pid3181263 00:47:19.491 Removing: /var/run/dpdk/spdk_pid3181601 00:47:19.491 Removing: /var/run/dpdk/spdk_pid3181831 00:47:19.491 Removing: /var/run/dpdk/spdk_pid3182000 00:47:19.491 Removing: /var/run/dpdk/spdk_pid3182857 00:47:19.491 Removing: /var/run/dpdk/spdk_pid3185033 00:47:19.491 Removing: /var/run/dpdk/spdk_pid3186217 00:47:19.491 Removing: /var/run/dpdk/spdk_pid3186771 00:47:19.491 Removing: /var/run/dpdk/spdk_pid3189779 00:47:19.491 Removing: /var/run/dpdk/spdk_pid3190476 00:47:19.491 Removing: /var/run/dpdk/spdk_pid3191243 00:47:19.491 Removing: /var/run/dpdk/spdk_pid3196323 00:47:19.491 Removing: /var/run/dpdk/spdk_pid3203073 00:47:19.491 Removing: /var/run/dpdk/spdk_pid3203074 00:47:19.491 Removing: /var/run/dpdk/spdk_pid3203075 
00:47:19.491 Removing: /var/run/dpdk/spdk_pid3207834 00:47:19.491 Removing: /var/run/dpdk/spdk_pid3212567 00:47:19.491 Removing: /var/run/dpdk/spdk_pid3218476 00:47:19.491 Removing: /var/run/dpdk/spdk_pid3262353 00:47:19.491 Removing: /var/run/dpdk/spdk_pid3267064 00:47:19.491 Removing: /var/run/dpdk/spdk_pid3274494 00:47:19.491 Removing: /var/run/dpdk/spdk_pid3275922 00:47:19.491 Removing: /var/run/dpdk/spdk_pid3277627 00:47:19.491 Removing: /var/run/dpdk/spdk_pid3279804 00:47:19.752 Removing: /var/run/dpdk/spdk_pid3285552 00:47:19.752 Removing: /var/run/dpdk/spdk_pid3290524 00:47:19.752 Removing: /var/run/dpdk/spdk_pid3299745 00:47:19.752 Removing: /var/run/dpdk/spdk_pid3299754 00:47:19.752 Removing: /var/run/dpdk/spdk_pid3304935 00:47:19.752 Removing: /var/run/dpdk/spdk_pid3305191 00:47:19.752 Removing: /var/run/dpdk/spdk_pid3305528 00:47:19.752 Removing: /var/run/dpdk/spdk_pid3305867 00:47:19.752 Removing: /var/run/dpdk/spdk_pid3305950 00:47:19.752 Removing: /var/run/dpdk/spdk_pid3307272 00:47:19.752 Removing: /var/run/dpdk/spdk_pid3309229 00:47:19.752 Removing: /var/run/dpdk/spdk_pid3311233 00:47:19.752 Removing: /var/run/dpdk/spdk_pid3313230 00:47:19.752 Removing: /var/run/dpdk/spdk_pid3315133 00:47:19.752 Removing: /var/run/dpdk/spdk_pid3317014 00:47:19.752 Removing: /var/run/dpdk/spdk_pid3324429 00:47:19.752 Removing: /var/run/dpdk/spdk_pid3325219 00:47:19.752 Removing: /var/run/dpdk/spdk_pid3326476 00:47:19.752 Removing: /var/run/dpdk/spdk_pid3328096 00:47:19.752 Removing: /var/run/dpdk/spdk_pid3334337 00:47:19.752 Removing: /var/run/dpdk/spdk_pid3337362 00:47:19.752 Removing: /var/run/dpdk/spdk_pid3343999 00:47:19.752 Removing: /var/run/dpdk/spdk_pid3350509 00:47:19.752 Removing: /var/run/dpdk/spdk_pid3360741 00:47:19.752 Removing: /var/run/dpdk/spdk_pid3369164 00:47:19.752 Removing: /var/run/dpdk/spdk_pid3369178 00:47:19.752 Removing: /var/run/dpdk/spdk_pid3392813 00:47:19.752 Removing: /var/run/dpdk/spdk_pid3393496 00:47:19.752 Removing: 
/var/run/dpdk/spdk_pid3394188 00:47:19.752 Removing: /var/run/dpdk/spdk_pid3394860 00:47:19.752 Removing: /var/run/dpdk/spdk_pid3395918 00:47:19.752 Removing: /var/run/dpdk/spdk_pid3396602 00:47:19.752 Removing: /var/run/dpdk/spdk_pid3397285 00:47:19.752 Removing: /var/run/dpdk/spdk_pid3397977 00:47:19.752 Removing: /var/run/dpdk/spdk_pid3403160 00:47:19.752 Removing: /var/run/dpdk/spdk_pid3403438 00:47:19.752 Removing: /var/run/dpdk/spdk_pid3410555 00:47:19.753 Removing: /var/run/dpdk/spdk_pid3410920 00:47:19.753 Removing: /var/run/dpdk/spdk_pid3417439 00:47:19.753 Removing: /var/run/dpdk/spdk_pid3422533 00:47:19.753 Removing: /var/run/dpdk/spdk_pid3434610 00:47:19.753 Removing: /var/run/dpdk/spdk_pid3435378 00:47:19.753 Removing: /var/run/dpdk/spdk_pid3440495 00:47:19.753 Removing: /var/run/dpdk/spdk_pid3440890 00:47:19.753 Removing: /var/run/dpdk/spdk_pid3445921 00:47:19.753 Removing: /var/run/dpdk/spdk_pid3452760 00:47:19.753 Removing: /var/run/dpdk/spdk_pid3455638 00:47:19.753 Removing: /var/run/dpdk/spdk_pid3467701 00:47:19.753 Removing: /var/run/dpdk/spdk_pid3478418 00:47:19.753 Removing: /var/run/dpdk/spdk_pid3480672 00:47:19.753 Removing: /var/run/dpdk/spdk_pid3481981 00:47:19.753 Removing: /var/run/dpdk/spdk_pid3501542 00:47:19.753 Removing: /var/run/dpdk/spdk_pid3506253 00:47:19.753 Removing: /var/run/dpdk/spdk_pid3509434 00:47:19.753 Removing: /var/run/dpdk/spdk_pid3517202 00:47:19.753 Removing: /var/run/dpdk/spdk_pid3517263 00:47:19.753 Removing: /var/run/dpdk/spdk_pid3523252 00:47:19.753 Removing: /var/run/dpdk/spdk_pid3525459 00:47:20.014 Removing: /var/run/dpdk/spdk_pid3527852 00:47:20.014 Removing: /var/run/dpdk/spdk_pid3529153 00:47:20.014 Removing: /var/run/dpdk/spdk_pid3531927 00:47:20.014 Removing: /var/run/dpdk/spdk_pid3533437 00:47:20.014 Removing: /var/run/dpdk/spdk_pid3543494 00:47:20.014 Removing: /var/run/dpdk/spdk_pid3544166 00:47:20.014 Removing: /var/run/dpdk/spdk_pid3544759 00:47:20.014 Removing: /var/run/dpdk/spdk_pid3547569 
00:47:20.014 Removing: /var/run/dpdk/spdk_pid3548144 00:47:20.014 Removing: /var/run/dpdk/spdk_pid3548809 00:47:20.014 Removing: /var/run/dpdk/spdk_pid3553691 00:47:20.014 Removing: /var/run/dpdk/spdk_pid3553742 00:47:20.014 Removing: /var/run/dpdk/spdk_pid3555482 00:47:20.014 Removing: /var/run/dpdk/spdk_pid3555993 00:47:20.014 Removing: /var/run/dpdk/spdk_pid3556106 00:47:20.014 Clean 00:47:20.014 16:04:59 -- common/autotest_common.sh@1451 -- # return 0 00:47:20.014 16:04:59 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:47:20.014 16:04:59 -- common/autotest_common.sh@730 -- # xtrace_disable 00:47:20.014 16:04:59 -- common/autotest_common.sh@10 -- # set +x 00:47:20.014 16:04:59 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:47:20.014 16:04:59 -- common/autotest_common.sh@730 -- # xtrace_disable 00:47:20.014 16:04:59 -- common/autotest_common.sh@10 -- # set +x 00:47:20.014 16:04:59 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:47:20.014 16:04:59 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:47:20.014 16:04:59 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:47:20.014 16:04:59 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:47:20.014 16:04:59 -- spdk/autotest.sh@394 -- # hostname 00:47:20.014 16:04:59 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-12 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:47:20.275 geninfo: WARNING: invalid characters removed from testname! 
00:47:46.863 16:05:24 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:47:48.248 16:05:27 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:47:50.796 16:05:29 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:47:52.710 16:05:32 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:47:54.622 16:05:33 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc 
genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:47:56.007 16:05:35 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:47:57.922 16:05:36 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:47:57.922 16:05:37 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:47:57.922 16:05:37 -- common/autotest_common.sh@1681 -- $ lcov --version 00:47:57.922 16:05:37 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:47:57.922 16:05:37 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:47:57.922 16:05:37 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:47:57.922 16:05:37 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:47:57.922 16:05:37 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:47:57.922 16:05:37 -- scripts/common.sh@336 -- $ IFS=.-: 00:47:57.922 16:05:37 -- scripts/common.sh@336 -- $ read -ra ver1 00:47:57.922 16:05:37 -- scripts/common.sh@337 -- $ IFS=.-: 00:47:57.922 16:05:37 -- scripts/common.sh@337 -- $ read -ra ver2 00:47:57.922 16:05:37 -- scripts/common.sh@338 -- $ local 'op=<' 00:47:57.922 16:05:37 -- scripts/common.sh@340 -- $ ver1_l=2 00:47:57.922 16:05:37 -- scripts/common.sh@341 -- $ ver2_l=1 00:47:57.922 16:05:37 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:47:57.922 16:05:37 -- scripts/common.sh@344 -- $ case "$op" in 00:47:57.922 16:05:37 -- scripts/common.sh@345 -- $ : 1 00:47:57.922 16:05:37 -- scripts/common.sh@364 
-- $ (( v = 0 )) 00:47:57.922 16:05:37 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:47:57.922 16:05:37 -- scripts/common.sh@365 -- $ decimal 1 00:47:57.922 16:05:37 -- scripts/common.sh@353 -- $ local d=1 00:47:57.922 16:05:37 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:47:57.922 16:05:37 -- scripts/common.sh@355 -- $ echo 1 00:47:57.922 16:05:37 -- scripts/common.sh@365 -- $ ver1[v]=1 00:47:57.922 16:05:37 -- scripts/common.sh@366 -- $ decimal 2 00:47:57.922 16:05:37 -- scripts/common.sh@353 -- $ local d=2 00:47:57.922 16:05:37 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:47:57.922 16:05:37 -- scripts/common.sh@355 -- $ echo 2 00:47:57.922 16:05:37 -- scripts/common.sh@366 -- $ ver2[v]=2 00:47:57.922 16:05:37 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:47:57.922 16:05:37 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:47:57.922 16:05:37 -- scripts/common.sh@368 -- $ return 0 00:47:57.923 16:05:37 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:47:57.923 16:05:37 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:47:57.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:57.923 --rc genhtml_branch_coverage=1 00:47:57.923 --rc genhtml_function_coverage=1 00:47:57.923 --rc genhtml_legend=1 00:47:57.923 --rc geninfo_all_blocks=1 00:47:57.923 --rc geninfo_unexecuted_blocks=1 00:47:57.923 00:47:57.923 ' 00:47:57.923 16:05:37 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:47:57.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:57.923 --rc genhtml_branch_coverage=1 00:47:57.923 --rc genhtml_function_coverage=1 00:47:57.923 --rc genhtml_legend=1 00:47:57.923 --rc geninfo_all_blocks=1 00:47:57.923 --rc geninfo_unexecuted_blocks=1 00:47:57.923 00:47:57.923 ' 00:47:57.923 16:05:37 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:47:57.923 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:47:57.923 --rc genhtml_branch_coverage=1 00:47:57.923 --rc genhtml_function_coverage=1 00:47:57.923 --rc genhtml_legend=1 00:47:57.923 --rc geninfo_all_blocks=1 00:47:57.923 --rc geninfo_unexecuted_blocks=1 00:47:57.923 00:47:57.923 ' 00:47:57.923 16:05:37 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:47:57.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:57.923 --rc genhtml_branch_coverage=1 00:47:57.923 --rc genhtml_function_coverage=1 00:47:57.923 --rc genhtml_legend=1 00:47:57.923 --rc geninfo_all_blocks=1 00:47:57.923 --rc geninfo_unexecuted_blocks=1 00:47:57.923 00:47:57.923 ' 00:47:57.923 16:05:37 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:47:57.923 16:05:37 -- scripts/common.sh@15 -- $ shopt -s extglob 00:47:57.923 16:05:37 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:47:57.923 16:05:37 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:57.923 16:05:37 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:57.923 16:05:37 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:57.923 16:05:37 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:57.923 16:05:37 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:57.923 16:05:37 -- paths/export.sh@5 -- $ export PATH 00:47:57.923 16:05:37 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:57.923 16:05:37 -- common/autobuild_common.sh@478 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:47:57.923 16:05:37 -- common/autobuild_common.sh@479 -- $ date +%s 00:47:57.923 16:05:37 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727791537.XXXXXX 00:47:57.923 16:05:37 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727791537.znvqCS 00:47:57.923 16:05:37 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:47:57.923 16:05:37 -- common/autobuild_common.sh@485 -- $ '[' -n main ']' 00:47:57.923 16:05:37 -- common/autobuild_common.sh@486 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:47:57.923 16:05:37 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:47:57.923 16:05:37 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:47:57.923 16:05:37 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:47:57.923 16:05:37 -- common/autobuild_common.sh@495 -- $ get_config_params 00:47:57.923 16:05:37 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:47:57.923 16:05:37 -- common/autotest_common.sh@10 -- $ set +x 00:47:57.923 16:05:37 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:47:57.923 16:05:37 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:47:57.923 16:05:37 -- pm/common@17 -- $ local monitor 00:47:57.923 16:05:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:47:57.923 16:05:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:47:57.923 16:05:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:47:57.923 16:05:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:47:57.923 16:05:37 -- pm/common@21 -- $ date +%s 00:47:57.923 16:05:37 -- pm/common@21 -- $ date +%s 00:47:57.923 16:05:37 -- pm/common@25 -- $ sleep 1 00:47:57.923 16:05:37 -- pm/common@21 -- $ date +%s 00:47:57.923 16:05:37 -- pm/common@21 -- $ date +%s 00:47:57.923 16:05:37 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1727791537 00:47:57.923 16:05:37 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1727791537 
00:47:57.923 16:05:37 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1727791537 00:47:57.923 16:05:37 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1727791537 00:47:57.923 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1727791537_collect-vmstat.pm.log 00:47:57.923 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1727791537_collect-cpu-load.pm.log 00:47:57.923 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1727791537_collect-cpu-temp.pm.log 00:47:57.923 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1727791537_collect-bmc-pm.bmc.pm.log 00:47:58.867 16:05:38 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:47:58.867 16:05:38 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:47:58.867 16:05:38 -- spdk/autopackage.sh@14 -- $ timing_finish 00:47:58.867 16:05:38 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:47:58.867 16:05:38 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:47:58.867 16:05:38 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:47:58.867 16:05:38 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:47:58.867 16:05:38 -- pm/common@29 -- $ signal_monitor_resources TERM 00:47:58.867 16:05:38 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:47:58.867 16:05:38 -- 
pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:47:58.867 16:05:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:47:58.867 16:05:38 -- pm/common@44 -- $ pid=3569982 00:47:58.867 16:05:38 -- pm/common@50 -- $ kill -TERM 3569982 00:47:58.867 16:05:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:47:58.867 16:05:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:47:58.867 16:05:38 -- pm/common@44 -- $ pid=3569983 00:47:58.867 16:05:38 -- pm/common@50 -- $ kill -TERM 3569983 00:47:58.867 16:05:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:47:58.867 16:05:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:47:58.867 16:05:38 -- pm/common@44 -- $ pid=3569985 00:47:58.867 16:05:38 -- pm/common@50 -- $ kill -TERM 3569985 00:47:58.867 16:05:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:47:58.867 16:05:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:47:58.867 16:05:38 -- pm/common@44 -- $ pid=3570010 00:47:58.867 16:05:38 -- pm/common@50 -- $ sudo -E kill -TERM 3570010 00:47:58.867 + [[ -n 2773711 ]] 00:47:58.867 + sudo kill 2773711 00:47:59.139 [Pipeline] } 00:47:59.155 [Pipeline] // stage 00:47:59.161 [Pipeline] } 00:47:59.175 [Pipeline] // timeout 00:47:59.181 [Pipeline] } 00:47:59.195 [Pipeline] // catchError 00:47:59.201 [Pipeline] } 00:47:59.215 [Pipeline] // wrap 00:47:59.221 [Pipeline] } 00:47:59.234 [Pipeline] // catchError 00:47:59.243 [Pipeline] stage 00:47:59.245 [Pipeline] { (Epilogue) 00:47:59.259 [Pipeline] catchError 00:47:59.261 [Pipeline] { 00:47:59.274 [Pipeline] echo 00:47:59.275 Cleanup processes 00:47:59.281 [Pipeline] sh 00:47:59.573 + sudo pgrep -af 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:47:59.573 3570126 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:47:59.573 3570681 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:47:59.587 [Pipeline] sh 00:47:59.874 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:47:59.874 ++ grep -v 'sudo pgrep' 00:47:59.874 ++ awk '{print $1}' 00:47:59.874 + sudo kill -9 3570126 00:47:59.884 [Pipeline] sh 00:48:00.168 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:48:12.411 [Pipeline] sh 00:48:12.705 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:48:12.705 Artifacts sizes are good 00:48:12.721 [Pipeline] archiveArtifacts 00:48:12.728 Archiving artifacts 00:48:12.992 [Pipeline] sh 00:48:13.288 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:48:13.305 [Pipeline] cleanWs 00:48:13.316 [WS-CLEANUP] Deleting project workspace... 00:48:13.316 [WS-CLEANUP] Deferred wipeout is used... 00:48:13.324 [WS-CLEANUP] done 00:48:13.326 [Pipeline] } 00:48:13.343 [Pipeline] // catchError 00:48:13.357 [Pipeline] sh 00:48:13.670 + logger -p user.info -t JENKINS-CI 00:48:13.741 [Pipeline] } 00:48:13.756 [Pipeline] // stage 00:48:13.763 [Pipeline] } 00:48:13.779 [Pipeline] // node 00:48:13.785 [Pipeline] End of Pipeline 00:48:13.830 Finished: SUCCESS